# Notebook from Gabriel19-00477/ITBA-3207-Team-Typhoon-Analysts Path: Data Sets Coding Analysis/Data Analysis and Coding for Both Data Sets.ipynb EDA to Typhoon Mitigation and Response Framework (TMRF)_____no_output_____“Experience is a master teacher, even when it’s not our own.” ― Gina Greenlee_____no_output_____The Philippines' apparent vulnerability to natural disasters emerges from its geographic location within the Pacific Ring of Fire. The country is surrounded by large bodies of water and faces the Pacific Ocean, which produces 60% of the world's typhoons. Approximately twenty tropical cyclones pass through the Philippine area of responsibility each year, ten of which are typhoons and five of which are catastrophic (Brown, 2013). Due to a lack of preparedness and response, families in rural areas are more likely to be hit. According to the Weather Underground (n.d.), hurricanes are becoming a global threat as they solidify and more super tropical storms emerge. As a result, every municipality should have a high level of safety and security. However, government agencies and non-governmental organizations in the Philippines promote emergency preparedness, but they have yet to acquire the public's general attention like in Yolanda’s storm surge disaster where there is insufficient public awareness of storm surges, higher casualties have occurred (Commision on Audit, n.d.). The Commission on Audit also reported that the mayor of Tacloban City had stated that more lives may have been saved if storm surges were labeled as tsunami-like in nature. According to the National Research Council et al. (n.d.), preparedness is indeed the way of transforming a community's awareness of potential natural hazards into actions that strengthen its ability to respond to and recover from disasters and proposals for preparedness must address the immediate response and all the longer-term recovery and rehabilitation. _____no_output_____The objective of this analysis is to construct an Exploratory Data Analysis to Typhoons from the year 2019 that prompted the most casualty rates in the country and data on the municipal governments that had the least number of affected families’ individuals per typhoon in the Philippines. Moreover, a global dataset from 2000-2022 about hurricanes in the U.S. from the Centre for Research on the Epidemiology of Disasters' Emergency Events Database (EM-DAT) will be utilized in the same manner as mentioned in the Philippines Data set to know which Location in the United States had the most successful response and mitigation plan for typhoons. This information will be used to construct a Typhoon Mitigation and Response Plan that may help the Philippines deal with hurricanes. Integrating various programs from other countries will increase the likelihood of Filipinos' survival and recovery from typhoons. _____no_output_____<pre> <b>Contents of the Notebook:</b> P. Philippines Data set 2019 p1. Analysis of the features and X variables. p2. Selection of X variables to be used for the analysis. p3. Converting the data from the excel sheet data set to a pandas data frame. p4. Data Cleaning p5. Statistical Overview and Correlation Analysis of the featured X variables.' p6. Data Analysis Format for every objectives: Objective Codes Outputs Analysis and Observation Recommendations a. American Data set 2000-2022 a1. Analysis of the features and X variables. a2. Converting the data from the excel sheet data set to a pandas data frame. a3. 
Selection of X variables to be used for the analysis. a4. Dataframe Normalization a5. Data Cleaning a6. Statistical Overview and Correlation Analysis of the featured X variables. a7. Data Analysis Format for every objective: Objective Codes Outputs Analysis and Observation Recommendations </pre>_____no_output_____Humanitarian Data Exchange Data set about the Philippines (2019)_____no_output_____ <code> #Importing all the Python modules that will be used for the data analysis. import pandas as pd #Importing the pandas library and renaming it as pd. import numpy as np import matplotlib.pyplot as plt #Importing the matplotlib plotting interface and renaming it as plt. import seaborn as sns from matplotlib import cm from pandas.plotting import scatter_matrix import plotly.express as px from IPython.display import display, Markdown_____no_output_____ </code> p1. Analysis of the features and X variables. _____no_output_____ <code> data=pd.read_excel(r'200204_philippines-2019-events-data.xlsx_3FAWSAccessKeyId=AKIAXYC32WNARK756OUG_Expires=1644193427_Signature=hFTPcWroN6S3M2pX40ObWvu24p8=.xlsx', sheet_name="Tropical Cyclones") data.info() # info() function was used to get an initial reading from the excel sheet data. <class 'pandas.core.frame.DataFrame'> RangeIndex: 687 entries, 0 to 686 Data columns (total 24 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Region 686 non-null object 1 Region code 686 non-null object 2 Province 686 non-null object 3 Province code 686 non-null object 4 City_Mun 686 non-null object 5 City_Mun code 686 non-null object 6 Year 687 non-null int64 7 Incident 687 non-null object 8 Date Occurred 687 non-null datetime64[ns] 9 2015 Population 687 non-null int64 10 Affected_FAM 687 non-null int64 11 Affected_PERs2 687 non-null object 12 Affected_PERs 686 non-null float64 13 Inside_EC_Fam_Cum 659 non-null float64 14 Inside_EC_Fam_Now 687 non-null int64 15 Inside_EC_Per_Cum 659 non-null float64 16 Inside_EC_Per_Now 686 non-null float64 17 Outside_EC_Fam_Cum 659 non-null float64 18 Outside_EC_Fam_Now 659 non-null float64 19 Outside_EC_Pers_Cum 659 non-null float64 20 Outside_EC_Per_Now 687 non-null int64 21 Totally_damaged_houses 687 non-null int64 22 Partially_damaged_houses 686 non-null float64 23 IDP_Cum 687 non-null int64 dtypes: datetime64[ns](1), float64(8), int64(7), object(8) memory usage: 128.9+ KB </code> p2. Converting the data from the excel sheet data set to a pandas data frame._____no_output_____ <code> df=pd.DataFrame(data) #convert the excel dataset into a dataframe display(df)_____no_output_____ </code> According to the information above, the Philippines dataset has 24 columns, and this EDA project does not need all of those columns for the analysis. Consequently, the .iloc function was employed to choose variables. 
Only the following columns were required for the analysis: <pre> 0 Region 686 non-null object 2 Province 686 non-null object 4 City_Mun 686 non-null object 6 Year 687 non-null int64 7 Incident 687 non-null object 8 Date Occurred 687 non-null datetime64[ns] 9 2015 Population 687 non-null int64 10 Affected_FAM 687 non-null int64 12 Affected_PERs 686 non-null float64 13 Inside_EC_Fam_Cum 659 non-null float64 14 Inside_EC_Fam_Now 687 non-null int64 15 Inside_EC_Per_Cum 659 non-null float64 16 Inside_EC_Per_Now 686 non-null float64 17 Outside_EC_Fam_Cum 659 non-null float64 18 Outside_EC_Fam_Now 659 non-null float64 19 Outside_EC_Pers_Cum 659 non-null float64 20 Outside_EC_Per_Now 687 non-null int64 21 Totally_damaged_houses 687 non-null int64 22 Partially_damaged_houses 686 non-null float64 23 IDP_Cum 687 non-null int64 </pre> _____no_output_____ p3. Selection of X variables to be used for the analysis._____no_output_____ <code> #selecting all needed and specific columns from original dataframe/dataset and creating new dataframe named new_df new_df = df.iloc[:,[0,2,4,6,7,8,9,10,12,13,14,15,16,17,18,19,20,21,22,23]].copy()_____no_output_____ </code> p4. Data Cleaning_____no_output_____ <code> df.info() # info() function was used to get an understanding of which aspects of the dataset need cleaning.<class 'pandas.core.frame.DataFrame'> RangeIndex: 687 entries, 0 to 686 Data columns (total 24 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Region 686 non-null object 1 Region code 686 non-null object 2 Province 686 non-null object 3 Province code 686 non-null object 4 City_Mun 686 non-null object 5 City_Mun code 686 non-null object 6 Year 687 non-null int64 7 Incident 687 non-null object 8 Date Occurred 687 non-null datetime64[ns] 9 2015 Population 687 non-null int64 10 Affected_FAM 687 non-null int64 11 Affected_PERs2 687 non-null object 12 Affected_PERs 686 non-null float64 13 Inside_EC_Fam_Cum 659 non-null float64 14 Inside_EC_Fam_Now 687 non-null int64 15 Inside_EC_Per_Cum 659 non-null float64 16 Inside_EC_Per_Now 686 non-null float64 17 Outside_EC_Fam_Cum 659 non-null float64 18 Outside_EC_Fam_Now 659 non-null float64 19 Outside_EC_Pers_Cum 659 non-null float64 20 Outside_EC_Per_Now 687 non-null int64 21 Totally_damaged_houses 687 non-null int64 22 Partially_damaged_houses 686 non-null float64 23 IDP_Cum 687 non-null int64 dtypes: datetime64[ns](1), float64(8), int64(7), object(8) memory usage: 128.9+ KB df.isnull().sum() #checking for total null values_____no_output_____#To clean the dataframe and remove the object data string columns (Region, Province, City Mun) which had three null values, dropna() function was used. new_df = df.dropna(subset=['Region', 'Province', 'City_Mun'])_____no_output_____# There are a lot of null values to be removed from the above findings. In addition, pandas can only clean the dataframe if each row has a value, namely integers. All object data types are already devoid of null values, as their data type is string. Therefore, the fillna() method was used to replace null values with zero in order to conduct a smooth data analysis. new_df = new_df.fillna(0) # replace NaN with zero value new_df.isnull().sum() #checking for total null values #After completing all of these data cleansing procedures, the final dataframe for analysis was constructed and renamed "new df." According to the output of the isnull() function, there are no longer any null values in the data frame. 
Thus, the data analysis should proceed without hiccups, leaving no room for error in the later portion of this EDA._____no_output_____ </code> p5. Statistical Overview and Correlation Analysis of the featured X variables._____no_output_____ <code> new_df.describe()_____no_output_____new_df.describe(include =['O'])_____no_output_____#Calculating the squared correlation between the variables in the dataframe. corr = new_df.corr()**2 corr.Affected_PERs.sort_values(ascending=False)_____no_output_____sns.heatmap(new_df.corr(),annot=True,cmap='RdYlGn',linewidths=0.2) #data.corr()-->correlation matrix fig=plt.gcf() fig.set_size_inches(10,8) plt.show()_____no_output_____ </code> Analysis of the Heatmap First, observe that only numeric (int or float) x variables are compared, since string columns cannot be correlated. Before interpreting the graph, correlation must be defined. If an increase in feature A leads to an increase in feature B, the two features are positively correlated; a value of 1 implies a perfect positive correlation. If an increase in feature A causes a decrease in feature B, the two features are inversely correlated; a value of -1 represents a perfect negative correlation. Suppose two features are highly or perfectly correlated, so that a change in one induces a change in the other. This means both features carry essentially the same information, with little or no deviation. This is known as multicollinearity, because both variables contain almost identical data. Should we use both of them when one is redundant? When building or training models, we should avoid duplicate features, as doing so reduces training time and brings several other advantages. However, features describing people or the number of individuals affected, such as total fatalities or number of injured, are an exception: even when strongly correlated, they can still provide useful information for the study. The heatmap above shows that several of the features are correlated. For example, Affected_FAM and Affected_PERs have a correlation of 0.99. Even with this strong association, we can still use both for the analysis because they describe different units (affected families versus affected persons) in the data set._____no_output_____p6. Data Analysis_____no_output_____A. Determine the top 5 typhoons from 2019 that brought the greatest number of infrastructure casualties to the Provinces in the Philippines, based on the Totally_damaged_houses x variable. _____no_output_____ <code> cyclone=new_df.groupby("Incident") dhouse=cyclone["Totally_damaged_houses"].sum() typ=pd.DataFrame(dhouse) cycph=typ.sort_values(by="Totally_damaged_houses", ascending=False) tdh=cycph.head(5) display(tdh) #graphing of data fig = plt.figure() ax = fig.add_axes([0,0,1,1]) ax.axis('equal') storms = ['TY Tisoy', 'TY Ursula', 'TS Quiel', 'TS Hanna', 'TD Marilyn'] damages = [68104,60483,59,56,44] ax.pie(damages, labels = storms,autopct='%1.2f%%') plt.show() #data graphing totalaffected = [68104, 60483, 59, 56, 44] index = ['TY Tisoy', 'TY Ursula', 'TS Quiel', 'TS Hanna', 'TD Marilyn'] df = pd.DataFrame({'totalaffected': totalaffected, 'Province': index}, index=index) ax = df.plot.barh(y='totalaffected')_____no_output_____ </code> Typhoon Tisoy had the most destructive impact on houses across the regions of the Philippines in 2019, with 68,104 totally damaged houses, while TY Ursula had the second highest number with 60,483 houses damaged. Why did this happen? 
A recent Harvard research found that despite the fact that the Philippines is one of the most disaster-prone nations in the world, the majority of Filipino households feel unprepared for catastrophes and natural hazards due to a lack of financial resources. In 2017, Harvard Humanitarian Initiative (HHI) DisasterNet Philippines conducted the first survey of its type to gauge household preparation for disasters, reaching 4,368 families around the country (Enano, 2019). According to the survey report, just 36% of respondents felt completely prepared for disasters, while 33% reported being moderately prepared. The other third of respondents reported to be just minimally or not at all prepared for natural disasters such as typhoons, earthquakes, floods, and landslides. Over nine million Filipinos have been impacted by a natural catastrophe in the previous five years, according to the HHI. However, over 47 percent of respondents indicated they had made no preparations for these disasters. Despite the fact that the majority of respondents claimed to have discussed emergency preparations with family members, the majority do not have "go bags" or emergency bags or even first aid kits, according to the survey._____no_output_____B. Acquire the data about the Provinces who had the greatest number of affected individuals per typhoon (Affected_Pers). _____no_output_____ <code> #top 5 typhoons from 2019 that brought the greatest number of infrastructure casualties to the Provinces in the Philippines ByProvince= new_df.groupby('Province') TotalData = ByProvince['Affected_PERs'].sum() data= pd.DataFrame(TotalData) SortedData= data.sort_values(by='Affected_PERs',ascending=False) result= SortedData.head(5) display(result) #data graphing totalaffected = [772162, 622951, 602234, 504447, 483308] index = ['Leyte', 'Capiz', 'Northern Samar', 'Aklan', 'Western Samar'] df = pd.DataFrame({'totalaffected': totalaffected, 'Province': index}, index=index) ax = df.plot.barh(y='totalaffected')_____no_output_____ </code> Based on the results reported, the impacted population by province in the Philippines in 2019 is as follows: <pre> LEYTE 772, 162 CAPIZ 622951.0 Northern Samar 602234.0 AKLAN 504447.0 SAMAR (WESTERN SAMAR) 483308.0 </pre> What do these provinces lack in terms of disaster preparedness, particularly for Typhoons? According to the (2017), a large proportion of individuals impacted by typhoons are due to the inability of line agencies and LGUs to conduct DRRM responsibilities . The inability of line agencies and LGUs to adopt DRRM responsibilities is an often stated problem in Philippine disaster management. Insufficient personnel, lack of technical expertise and comprehension, limited financial resources, and lack of technology, such as a multihazard early warning system, are among the causes. The LGUs lack the technical expertise and resources necessary to fulfill their statutory responsibilities. The DILG-Bureau of Local Government Supervision's 2013 national table assessment on LGU compliance with RA No. 10121 revealed that just 23 percent of LGUs in flood-prone regions are prepared for catastrophes in terms of knowledge, institutional capacity, and coordination._____no_output_____C. Get the information that shows the top 5 municipalities who were most affected by typhoons from the year 2019 based from the Affected_PERs x variable. 
_____no_output_____ <code> ByMuni= new_df.groupby('City_Mun') TotalData=ByMuni['Affected_PERs'].sum() data = pd.DataFrame(TotalData) SortedData = data.sort_values(by='Affected_PERs',ascending=False) result= SortedData.head(5) display(result) data={'Municipality':['Roxas', 'Daraga', 'Catbalogan', 'City of Tacloban', 'Catarman'], 'Affected Person':[ 168580, 126595, 122572, 119918, 106424]} # Load data into DataFrame df = pd.DataFrame(data = data); #Graphing of Data df.plot.scatter(x = 'Municipality', y = 'Affected Person', s = 100, c = 'red'); _____no_output_____ </code> From the top 5 municipalities in the Philippines who had the most affected individuals in the year 2019, the following results were as follows; <pre> CITY OF ROXAS (CAPITAL) 168580.0 Daraga (Locsin) 126595.0 CITY OF CATBALOGAN (CAPITAL) 122572.0 CITY OF TACLOBAN (CAPITAL) 119918.0 Catarman (capital) 106424.0 </pre> The ineffective execution of rules and regulations in these municipalities contributed to their severe typhoon damage. In low-lying and high-risk regions, a proliferation of establishments and informal settlers has resulted from a lack of administration and lax implementation of disaster-related rules (no building zones). According to the 2009 Annual Report of the Global Facility for Disaster Reduction and Recovery (GFDRR), as stated by Policy Brief - Senate Economic Planning Office (2017), several constructions do not comply with the Building Code and Environmental Compliance Certificates (ECCs). In certain local government units, building regulations and standards are weakened to minimize construction costs. In disaster-prone locations, a lack of control in the construction of buildings and other physical structures contributes to an increase in community risk._____no_output_____ <code> PhilData = new_df PhilData_____no_output_____ </code> The Centre for Research on the Epidemiology of Disasters' Data set about the American Typhoons (2000-2022) _____no_output_____a1. 
Analysis of the features and X variables._____no_output_____ <code> # casualties of storms in America based in EMDAT datasets data = pd.read_excel(r'2000-2022-emdat_public_2022_04_24_query_uid-XuKaJG.xlsx', sheet_name="emdat data") data.info() # info() function was used to get an initial reading from the excel sheet data.<class 'pandas.core.frame.DataFrame'> RangeIndex: 722 entries, 0 to 721 Data columns (total 53 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Dis No 722 non-null object 1 Year 722 non-null int64 2 Seq 722 non-null int64 3 Glide 143 non-null object 4 Disaster Group 722 non-null object 5 Disaster Subgroup 722 non-null object 6 Disaster Type 722 non-null object 7 Disaster Subtype 685 non-null object 8 Disaster Subsubtype 280 non-null object 9 Event_Name 391 non-null object 10 Country 722 non-null object 11 ISO 722 non-null object 12 Region 722 non-null object 13 Continent 722 non-null object 14 Location 700 non-null object 15 Origin 10 non-null object 16 Associated Dis 401 non-null object 17 Associated Dis2 153 non-null object 18 OFDA Response 58 non-null object 19 Appeal 4 non-null object 20 Declaration 140 non-null object 21 Aid Contribution 16 non-null float64 22 Dis Mag Value 216 non-null float64 23 Dis Mag Scale 722 non-null object 24 Latitude 40 non-null float64 25 Longitude 40 non-null float64 26 Local Time 1 non-null object 27 River Basin 12 non-null object 28 Start Year 722 non-null int64 29 Start Month 722 non-null int64 30 Start Day 714 non-null float64 31 End Year 722 non-null int64 32 End Month 722 non-null int64 33 End Day 714 non-null float64 34 Total_Deaths 544 non-null float64 35 No_Injured 188 non-null float64 36 No_Affected 361 non-null float64 37 No_Homeless 87 non-null float64 38 Total_Affected 476 non-null float64 39 Reconstruction_Costs_('000_US$) 2 non-null float64 40 Reconstruction_Costs,_Adjusted_('000_US$) 2 non-null float64 41 Insured_Damages_('000_US$) 236 non-null float64 42 Insured_Damages,_Adjusted_('000_US$) 236 non-null float64 43 Total_Damages_('000_US$) 447 non-null float64 44 Total_Damages,_Adjusted_('000_US$) 447 non-null float64 45 CPI 722 non-null float64 46 Adm Level 711 non-null object 47 Admin1 Code 579 non-null object 48 Admin2 Code 214 non-null object 49 Geo Locations 711 non-null object 50 Source: 4 non-null object 51 EM-DAT, CRED / UCLouvain, Brussels, Belgium 5 non-null object 52 sdf 0 non-null float64 dtypes: float64(19), int64(6), object(28) memory usage: 299.1+ KB </code> <pre> As what have seen from the information about the American continent's dataset, there are 53 columns. And this EDA project do not need to include of these rows for the analysis. Thus, selection of variables using .loc function was used for. 
The only columns that were needed for the analysis is the following: Dis No --> object data type Year --> int64 data type Disaster Type --> object data type Disaster Subtype--> object data type Event_Name --> object data type Country --> object data type Region --> object data type Total_Deaths --> int64 data type No_Injured --> int64 data type No_Affected --> int64 data type No_Homeless --> int64 data type Total_Affected --> int64 data type Reconstruction_Costs,_Adjusted_('000_US$)--> int64 data type Insured_Damages_('000_US$) --> int64 data type Insured_Damages,_Adjusted_('000_US$) --> int64 data type Total_Damages_('000_US$) --> int64 data type Total_Damages,_Adjusted_('000_US$) --> int64 data type Geo Locations --> int64 data type Dis Mag Value --> int64 data type </pre>_____no_output_____a2. Selection of X variables to be used for the analysis._____no_output_____ <code> #*************************NEW DATAFRAME*************************************** #selecting all needed and specific columns and creating new dataframe named data data = data.iloc[:,[0,1,6,7,9,10, 12,34, 35, 36, 37,38, 40, 41, 42, 43, 44, 49, 22]].copy()_____no_output_____ </code> a3. Converting the data from the excel sheet data set to a pandas data frame._____no_output_____ <code> df = pd.DataFrame(data) #convert dataset excel into dataframe display(df) _____no_output_____ </code> a4. Dataframe Normalization_____no_output_____ <code> #From the dataframe above, there are numerous disaster types such as; Convective storm, Tropical cyclone, Extra-tropical storm, and many other classifications. The focus of this analysis is directed only for the Tropical cyclone subtype as it provide specific names from every typhoons that hit the American continent. To achieve this result, a new dataframe named 'new_df' was created based from the rows of 'Disaster Subtype' with the string value of 'Tropical cyclone'. new_df = df.loc[df['Disaster Subtype'] == 'Tropical cyclone']_____no_output_____ </code> a5. Data Cleaning_____no_output_____ <code> new_df.info() # info() function was used to get an understanding of which aspects of the dataset need cleaning.<class 'pandas.core.frame.DataFrame'> Int64Index: 386 entries, 2 to 721 Data columns (total 19 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Dis No 386 non-null object 1 Year 386 non-null int64 2 Disaster Type 386 non-null object 3 Disaster Subtype 386 non-null object 4 Event_Name 386 non-null object 5 Country 386 non-null object 6 Region 386 non-null object 7 Total_Deaths 279 non-null float64 8 No_Injured 52 non-null float64 9 No_Affected 255 non-null float64 10 No_Homeless 44 non-null float64 11 Total_Affected 276 non-null float64 12 Reconstruction_Costs,_Adjusted_('000_US$) 2 non-null float64 13 Insured_Damages_('000_US$) 77 non-null float64 14 Insured_Damages,_Adjusted_('000_US$) 77 non-null float64 15 Total_Damages_('000_US$) 212 non-null float64 16 Total_Damages,_Adjusted_('000_US$) 212 non-null float64 17 Geo Locations 375 non-null object 18 Dis Mag Value 127 non-null float64 dtypes: float64(11), int64(1), object(7) memory usage: 60.3+ KB new_df.isnull().sum() #checking for total null values _____no_output_____# From the results above, there are a number of null values that need to be cleaned. And pandas can only clean the dataframe if all rows contain values specially for integers. All object data types are already contain no null values, in which they are all strings in data type. 
Thus, fillna() function was used to replace null values to zero for smooth data analysis. new_df = new_df.fillna(0) # replace NaN with zero value_____no_output_____ #After changing the null values in every in valued columns on the dataframe, astype() function was used to change data types of specific columns with dictionary indexing. convert_datatypes = {"Total_Deaths":int, "No_Injured":int, "No_Affected":int, "No_Homeless":int, "Total_Affected":int, "Reconstruction_Costs,_Adjusted_('000_US$)": int, "Insured_Damages_('000_US$)": int, "Insured_Damages,_Adjusted_('000_US$)": int, "Total_Damages_('000_US$)":int, "Total_Damages,_Adjusted_('000_US$)": int} new_df= new_df.astype(convert_datatypes) #converting columns datatypes new_df.isnull().sum() #checking for total null values #After all of these data cleaning processes, the final dataframe for analysis were created and named "new_df" again. And from the results below from using isnull() function, there are now no null values from the data frame. Thus, data analysis would be smooth and no errors can occur on the latter part of this EDA. _____no_output_____# First, drop duplicates function was used to find the exact names and nubmer of countries that were included on the record without duplicates. There are 38 countries in total. Country_names = new_df["Country"].drop_duplicates() #Creating a new dataframe called `Country_names` that contains the unique values of the `Country` column in the `new_df` dataframe data = pd.DataFrame(Country_names) #Creating a new dataframe called `data` that contains the unique values of the `Country` column in the `new_df` dataframe. data.info() # info() function was used to get an understanding of which aspects of the dataset need cleaning. Event_names = new_df["Event_Name"].drop_duplicates() data = pd.DataFrame(Event_names) data.info()<class 'pandas.core.frame.DataFrame'> Int64Index: 38 entries, 2 to 597 Data columns (total 1 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Country 38 non-null object dtypes: object(1) memory usage: 608.0+ bytes <class 'pandas.core.frame.DataFrame'> Int64Index: 146 entries, 2 to 714 Data columns (total 1 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Event_Name 146 non-null object dtypes: object(1) memory usage: 2.3+ KB </code> a6. Statistical Overview and Correlation Analysis of the featured X variables.' _____no_output_____ <code> new_df.describe() _____no_output_____new_df.describe(include =['O']) _____no_output_____#Calculating the correlation between the variables in the dataframe. corr = new_df.corr()**2 corr.Total_Affected.sort_values(ascending=False) _____no_output_____ </code> From the correlation list above, the 'No_Affected' or the number of affected individuals by typhoons has the highest correlation with the Total_Affected variable._____no_output_____ <code> sns.heatmap(new_df.corr(),annot=True,cmap='RdYlGn',linewidths=0.2) #data.corr()-->correlation matrix fig=plt.gcf() fig.set_size_inches(10,8) plt.show()_____no_output_____ </code> Interpreting the Heatmap First, observe that only numeric characteristics or int data type x variables are compared, as it is evident that alphabets and strings cannot be correlated. Before we can comprehend the plot, we must first define correlation. If a rise in feature A causes an increase in feature B, then the two features are positively connected. A value 1 signifies 100% positive correlation. 
If a rise in feature A leads to a reduction in feature B, then the two features are negatively linked. A value of -1 indicates an ideal negative correlation. Let's assume that two characteristics are highly or perfectly connected, so that a rise in one causes an increase in the other. This indicates that both features include information that is quite comparable, with minimal or no differences. This is referred to as MultiColinearity since both variables contain almost identical information. Do you believe that we should utilize both of them, even though one of them is redundant? While creating or training models, we should strive to eliminate duplicate features, as doing so decreases training time and provides several other benefits. However, if the data pertains to persons or the number of impacted individuals, such as Total fatalities, Number of injured, etc., it would be exempt. Even if they have a strong correlation, they might nevertheless provide significant insight for the investigation. Now we can see from the preceding heatmap that some of the features are correlated. One association exists between Total Affected, No.Homeless, and No.Affected, which is the strongest having 1 correlation. Even if they have a strong correlation, we may still utilize them for analysis since they can provide us with information about a nation in the data set. _____no_output_____a7. Data Analysis_____no_output_____ For the year 2000-2021, a total of 146 hurricanes were being recorded that hit the whole American continents, having Hurricane Katrina, Harvey, Maria, Irma, and Ida as top 5 strongest hurricanes in that timeline in terms of the total damage. To determine the mitigation plans of affected countries that may help in formulating new mitigation plan for the Philippines, our group decided to look at the countries ranking with the : -highest number of total affected individuals -lowest total deaths -and lowest total cost of damage. _____no_output_____A. Determine the top 5 typhoons from 2000-2022 that brought the greatest number of casualties to the in America as a whole based from the Total Affected and Total Damages, Adjusted ('000 US$) x variables. _____no_output_____ <code> from IPython.display import display_html import matplotlib.pyplot as plt import numpy as np # Acquiring the top 5 typhoons from 2000-2022 that brought the greatest number of casualties in America as a whole based from the Total Affected. TopTyphoonAffected = new_df.groupby('Event_Name') TotalAff = TopTyphoonAffected['Total_Affected'].sum() data_TotalAff = pd.DataFrame(TotalAff) TotalAff_SortedData = data_TotalAff.sort_values(by='Total_Affected',ascending=False) TotalAff_result = TotalAff_SortedData.head(5) TotalAff_result #graphing of data of top 5 typhoons from 2000-2022 that brought the greatest number of casualties in America as a whole based from the Total Affected. 
TotalAff_result.plot(kind="bar",color="gray") plt.xticks(rotation="45", horizontalalignment="center") plt.xlabel("Event_Name") plt.ylabel("Total Number of Affected") plt.title("\nTop 5 Typhoons that Brought Greatest \n Total Numbers of Affected People in Year 2000-2022\n\n") plt.show() #Saving visualizations in TIFF file with 300 dpi #plt.savefig('Top5TypNoAff.tiff', transparent=True, dpi=300) _____no_output_____#size of the bar graph import matplotlib.pyplot as plt Irma = new_df.loc[new_df['Event_Name'] == "Hurricane 'Irma'"] Irma = Irma.sort_values(by='Total_Affected',ascending=False) Irma = Irma.head(10) Irma = Irma.iloc[:,[5, 11 ]].copy() AffectedCountries = pd.DataFrame(Irma) display(AffectedCountries) AffectedCountries.plot(x="Country",y="Total_Affected",color="black",figsize=(15,6), linewidth="4",marker='h', markersize=12, markerfacecolor="red") plt.xticks(rotation="45", horizontalalignment="center") plt.xlabel("Country") plt.ylabel("Total Number of Affected") plt.title("\nTop 10 Countries that has Greatest Total Numbers of Affected People in by Hurricane Irma\n\n") plt.show()_____no_output_____ </code> #### For the analysis of the graph that were shown above. The graph showed the top 5 hurricanes that brought the greatest number of total affected individual in different countries in Northern and Southern America. Hurricane Irma tops the list with the greatest number of total affected individual when it landed on year 2017. _____no_output_____ <code> # Acquiring the top 5 typhoons from 2000-2022 that brought the greatest number of casualties to the in America as a whole Total Damages, Adjusted ('000 US$) TopTyphoonDamage = new_df.groupby('Event_Name') TotalDam = TopTyphoonDamage["Total_Damages,_Adjusted_('000_US$)"].sum() data_TotalDam = pd.DataFrame(TotalDam) TotalDam_SortedData = data_TotalDam.sort_values(by="Total_Damages,_Adjusted_('000_US$)",ascending=False) TotalDam_result = TotalDam_SortedData.head(5) TotalDam_result _____no_output_____#graphing of data of Top 5 Typhoons that Brought Greatest Total Numbers of Affected People in Year 2000-2022 TotalAff_result.plot(kind="bar",color="gray") plt.xlabel("Event_Name") plt.ylabel("Total Number of Affected") plt.title("\nTop 5 Typhoons that Brought Greatest \n Total Numbers of Affected People in Year 2000-2022\n\n") plt.show() #Saving visualizations in TIFF file with 300 dpi #plt.savefig('Top5TypNoAff.tiff', transparent=True, dpi=300) _____no_output_____#graphing of data of Top 5 Typhoons that Brought Greatest Total Damages, Adjusted ('000 US$) in Year 2000-2022. TotalDam_result.plot(kind="bar",color="magenta") plt.xlabel("Event_Name") plt.ylabel("Total Damages, Adjusted ('000 US$) ") plt.title("\nTop 5 Typhoons that Brought Greatest \n Total Damages, Adjusted ('000 US$) in Year 2000-2022\n\n") plt.show()_____no_output_____ </code> B. Acquire the data about the top 5 countries who had the greatest number of deaths, injured, and affected individuals from the typhoons on the year 2000-2022. 
_____no_output_____ <code> #Acquiring the top 5 countries with the highest number of total deaths caused by typhoons from 2000-2022 using groupby and sum function of pandas ByCountryDeath = new_df.groupby('Country') TotalData = ByCountryDeath['Total_Deaths'].sum() data = pd.DataFrame(TotalData) SortedData = data.sort_values(by='Total_Deaths',ascending=False) result = SortedData.head(5) display(result) std = new_df['Total_Deaths'].std() mean = new_df['Total_Deaths'].mean() print("Standard Deviation of Total Deaths Columns is:", std) print("Mean of Total Deaths Columns is:", mean)_____no_output_____#Acquiring the top 5 countries with the highest number of total injured individuals caused by typhoons from 2000-2022 using groupby and sum function of pandas ByCountryInjured = new_df.groupby('Country') TotalData = ByCountryDeath['No_Injured'].sum() data = pd.DataFrame(TotalData) SortedData = data.sort_values(by='No_Injured',ascending=False) result = SortedData.head(5) display(result) _____no_output_____#Acquiring the top 5 countries with the highest number of total affected individuals caused by typhoons from 2000-2022 using groupby and sum function of pandas ByCountry = new_df.groupby('Country') TotalData = ByCountry['Total_Affected'].sum() data = pd.DataFrame(TotalData) SortedData = data.sort_values(by='Total_Affected',ascending=False) result = SortedData.head(5) display(result) #graphing the Country and Total Affected data = {'Cuba':20202593,'USA':11279675, 'Mexico':6176551, 'Honduras':5380420, 'Guatemala':3841847} Country_name = list(data.keys()) Affected_values = list(data.values()) plt.bar(Country_name, Affected_values, color ='blue', width = 0.4) plt.xlabel("Country Name") plt.ylabel("Total No. of Affected") plt.title("\nTop 5 countries with the highest number of total affected individuals") plt.show()_____no_output_____ </code> Following on the graph is another graph that shows the total cost damage on the countries where it landed. Irma affected Cuba the most having 10 million people affected, followed by the USA having 70,000 people affected. For the countries with highest total number of affected individuals for the entire 22years, Cuba ranks top 1 at the list. According to United Nation (2004), Cuba is now considered as model in hurricane risk management in developing countries. Part of their preparation is education as disaster preparedness, prevention and response are part of the general education curriculum. People in schools, universities and workplaces are continuously informed and trained to cope with natural hazards. From their early age, all Cubans are taught how to behave as hurricanes approach the island. They also have, every year, a two-day training session in risk reduction for hurricanes, complete with simulation exercises and concrete preparation actions. they close schools to keep families together; use ‘community evacuation’ in especially isolated areas — where specific buildings or homes have been reinforced just for that purpose — rather than having people and their household goods traveling miles to shelters. 
Finally, the Cuban Civil Defense, a small organization at the top, involves virtually everybody at the municipal level; together with public health and Red Cross participation, local government and institutions are well prepared with risk assessments and disaster planning. The success of Cuba's disaster preparation and mitigation strategies shows up in the results: just 35 deaths were caused by the 16 hurricanes and tropical storms that have torn through the island since 2001, 17 of those from Hurricane Dennis in 2005. Cuba also learns from its mistakes: in 2008 it planned to improve its underground electricity and water lines, which are always badly affected. _____no_output_____C. Acquire the data about the countries that had the least number of deaths, injured, and affected individuals from the typhoons of 2000-2022._____no_output_____ <code> #Acquiring the top 11 countries with the lowest number of total deaths caused by typhoons from 2000-2022 using the groupby and sum functions of pandas ByCountryDeath = new_df.groupby('Country') TotalData = ByCountryDeath['Total_Deaths'].sum() data = pd.DataFrame(TotalData) SortedData = data.sort_values(by='Total_Deaths',ascending=True) result = SortedData.head(11) display(result) _____no_output_____ </code> The countries with the lowest total number of affected individuals from 2000-2021 are not frequently hit by hurricanes; some have zero hurricane records in a given year._____no_output_____Among the countries with the lowest total deaths for the entire 22 years, Saint Kitts and Nevis ranks first. Its Hurricane Emergency Plan has ALERT, WATCH, WARNING, and ALL CLEAR phases. During the ALERT phase, 72 hours before hurricane winds are expected, the government meets and assesses the country's state of preparedness for the hurricane. It advises the public to listen to all weather advisories and contacts the relevant departments to ensure that all are conversant with the plan. During the WARNING phase, 24 hours before hurricane winds are expected, it notifies heads of missions, regional and international embassies/missions, and agencies with agreements for assistance. All relevant persons are notified of the declaration of a Warning. Together with the Police in each District, officials use the Police public address system to warn residents of the expected time the hurricane will strike and repeat radio messages on the location of shelters. While the hurricane blows, the situation is monitored and reported as far as possible. When the event has passed, they proceed to the ALL CLEAR phase and assess the damage. 
This well-formulated mitigation plan shows its success in the fact that these countries recorded zero deaths from 2000-2021._____no_output_____ <code> #Acquiring the 20 countries with the lowest number of total injured individuals caused by typhoons from 2000-2022 using the groupby and sum functions of pandas ByCountryInjured = new_df.groupby('Country') TotalData = ByCountryInjured['No_Injured'].sum() data = pd.DataFrame(TotalData) SortedData = data.sort_values(by='No_Injured',ascending=True) result = SortedData.head(20) display(result) _____no_output_____#Acquiring the top 5 countries with the lowest number of total affected individuals caused by typhoons from 2000-2022 using the groupby and sum functions of pandas ByCountry = new_df.groupby('Country') TotalData = ByCountry['Total_Affected'].sum() data = pd.DataFrame(TotalData) SortedData = data.sort_values(by='Total_Affected',ascending=True) result = SortedData.head(5) display(result) _____no_output_____ </code> D. Get the information that shows the top 5 countries that were most affected in terms of economy (dollars) by typhoons from the year 2000-2022. _____no_output_____ <code> #sorting data for most affected in terms of economy countrynames = new_df.groupby('Country') totaldollars = countrynames["Total_Damages,_Adjusted_('000_US$)"].sum() td_frame = pd.DataFrame(totaldollars) SortedData_dollars = td_frame.sort_values(by="Total_Damages,_Adjusted_('000_US$)",ascending=False) result_dollars = SortedData_dollars.head(5) display(result_dollars) #graphing of data result_dollars.plot(kind="bar",color="magenta") plt.xlabel("Country") plt.ylabel("Total Damages, Adjusted ('000 US$)") plt.title("\nTop 5 Most Affected Countries in Terms of Economy (dollars) \n by typhoons from the year 2000-2022\n") plt.show() #sorting data for least affected in terms of economy SortedData_dollars = td_frame.sort_values(by="Total_Damages,_Adjusted_('000_US$)",ascending=True) less_dollars = SortedData_dollars.head(5) display(less_dollars) #graphing of data less_dollars.plot(kind="bar",color="green", figsize=(11,6)) plt.xlabel("Country") plt.ylabel("Total Damages, Adjusted ('000 US$)") plt.title("\nTop 5 Least Affected Countries in Terms of Economy (dollars) \n by typhoons from the year 2000-2022\n") plt.show()_____no_output_____ </code> ## INTERPRETATION The first graph shows the countries that were most affected in terms of economy from 2000 to 2022. The USA is the most affected, with more than 700 billion US$ in total damages. It is followed by Puerto Rico with more than 70 billion, Mexico third with more than 34 billion, Cuba fourth with more than 13 billion, and lastly the Bahamas with more than 7 billion US$ in total damages over the 22 years. The second graph depicts the countries with the lowest total cost of damages over the entire 22 years. The Virgin Islands, Venezuela, and Saint Barthelemy show no recorded damages; they are followed by Trinidad and Tobago with just over a million for the entire 22 years. Lastly, Barbados is the fifth country with the lowest total cost of damages, at a little more than 7 million._____no_output_____E. 
Determine the top 5 strongest typhoons based on the 'Dis Mag Value' x variable, i.e., the magnitude of the disaster at its center, recorded in kph (kilometers per hour)._____no_output_____ <code> #Selecting the Event_Name and Dis Mag Value columns, then sorting by magnitude in descending order. ByTyphoons = new_df.iloc[:,[4, 18]] data = pd.DataFrame(ByTyphoons) SortedData = data.sort_values(by='Dis Mag Value',ascending=False) result = SortedData.head(5) display(result) _____no_output_____ </code>
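Note that 'Dis Mag Value' was filled with zero during data cleaning and the same hurricane can appear in several rows (one per affected country), so the sorted table above may list duplicate events or zero magnitudes. The cell below is a minimal sketch, not part of the original analysis, assuming new_df still carries the 'Event_Name' and 'Dis Mag Value' columns: it keeps only non-zero magnitudes, takes the maximum recorded value per event, and ranks the top 5.
 <code>
# Minimal sketch (not an original cell): rank storms by their maximum recorded
# 'Dis Mag Value', ignoring rows where the magnitude was filled with zero.
strongest = (new_df[new_df['Dis Mag Value'] > 0]
             .groupby('Event_Name')['Dis Mag Value']
             .max()
             .sort_values(ascending=False)
             .head(5))
display(strongest)
 </code>
Grouping by Event_Name ensures each storm is counted once, even when it struck several countries.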
# Notebook from faisaladnanpeltops/spark-nlp-workshop Path: jupyter/enterprise/healthcare/EntityResolution_ICDO_SNOMED.ipynb [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/jupyter/enterprise/healthcare/EntityResolution_ICDO_SNOMED.ipynb)_____no_output_____<img src="https://nlp.johnsnowlabs.com/assets/images/logo.png" width="180" height="50" style="float: left;">_____no_output_____# COLAB ENVIRONMENT SETUP_____no_output_____ <code> import json with open('keys.json') as f: license_keys = json.load(f) license_keys.keys() _____no_output_____import os # Install java ! apt-get update ! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64" os.environ["PATH"] = os.environ["JAVA_HOME"] + "/bin:" + os.environ["PATH"] ! java -version secret = license_keys.get("secret",license_keys.get('SPARK_NLP_SECRET', "")) os.environ['SPARK_NLP_LICENSE'] = license_keys['SPARK_NLP_LICENSE'] os.environ['JSL_OCR_LICENSE'] = license_keys['JSL_OCR_LICENSE'] os.environ['AWS_ACCESS_KEY_ID']= license_keys['AWS_ACCESS_KEY_ID'] os.environ['AWS_SECRET_ACCESS_KEY'] = license_keys['AWS_SECRET_ACCESS_KEY'] version = license_keys.get("version",license_keys.get('SPARK_NLP_PUBLIC_VERSION', "")) jsl_version = license_keys.get("jsl_version",license_keys.get('SPARK_NLP_VERSION', "")) ! python -m pip install pyspark ! python -m pip install --upgrade spark-nlp-jsl==$jsl_version --extra-index-url https://pypi.johnsnowlabs.com/$secret -qq import sparknlp import sparknlp_jsl from sparknlp.base import * from sparknlp.annotator import * from sparknlp_jsl.annotator import * import pyspark from pyspark.ml import Pipeline from pyspark.sql import SparkSession print (sparknlp.version()) print (sparknlp_jsl.version()) spark = sparknlp_jsl.start(secret, gpu=False, spark23=False)_____no_output_____from pyspark.sql import functions as F import pandas as pd pd.set_option("display.max_colwidth", 1000)_____no_output_____ </code> # ICD-O - SNOMED Entity Resolution - version 2.4.6 ## Example for ICD-O Entity Resolution Pipeline A common NLP problem in medical applications is to identify histology behaviour in documented cancer studies. In this example we will use Spark-NLP to identify and resolve histology behavior expressions and resolve them to an ICD-O code. Some cancer related clinical notes (taken from https://www.cancernetwork.com/case-studies): https://www.cancernetwork.com/case-studies/large-scrotal-mass-multifocal-intra-abdominal-retroperitoneal-and-pelvic-metastases https://oncology.medicinematters.com/lymphoma/chronic-lymphocytic-leukemia/case-study-small-b-cell-lymphocytic-lymphoma-and-chronic-lymphoc/12133054 https://oncology.medicinematters.com/lymphoma/epidemiology/central-nervous-system-lymphoma/12124056 https://oncology.medicinematters.com/lymphoma/case-study-cutaneous-t-cell-lymphoma/12129416 Note 1: Desmoplastic small round cell tumor <div style="border:2px solid #747474; background-color: #e3e3e3; margin: 5px; padding: 10px"> A 35-year-old African-American man was referred to our urology clinic by his primary care physician for consultation about a large left scrotal mass. The patient reported a 3-month history of left scrotal swelling that had progressively increased in size and was associated with mild left scrotal pain. He also had complaints of mild constipation, with hard stools every other day. He denied any urinary complaints. 
On physical examination, a hard paratesticular mass could be palpated in the left hemiscrotum extending into the left groin, separate from the left testicle, and measuring approximately 10 × 7 cm in size. A hard, lower abdominal mass in the suprapubic region could also be palpated in the midline. The patient was admitted urgently to the hospital for further evaluation with cross-sectional imaging and blood work. Laboratory results, including results of a complete blood cell count with differential, liver function tests, coagulation panel, and basic chemistry panel, were unremarkable except for a serum creatinine level of 2.6 mg/dL. Typical markers for a testicular germ cell tumor were within normal limits: the beta–human chorionic gonadotropin level was less than 1 mIU/mL and the alpha fetoprotein level was less than 2.8 ng/mL. A CT scan of the chest, abdomen, and pelvis with intravenous contrast was obtained, and it showed large multifocal intra-abdominal, retroperitoneal, and pelvic masses (Figure 1). On cross-sectional imaging, a 7.8-cm para-aortic mass was visualized compressing the proximal portion of the left ureter, creating moderate left hydroureteronephrosis. Additionally, three separate pelvic masses were present in the retrovesical space, each measuring approximately 5 to 10 cm at their largest diameter; these displaced the bladder anteriorly and the rectum posteriorly. The patient underwent ultrasound-guided needle biopsy of one of the pelvic masses on hospital day 3 for definitive diagnosis. Microscopic examination of the tissue by our pathologist revealed cellular islands with oval to elongated, irregular, and hyperchromatic nuclei; scant cytoplasm; and invading fibrous tissue—as well as three mitoses per high-powered field (Figure 2). Immunohistochemical staining demonstrated strong positivity for cytokeratin AE1/AE3, vimentin, and desmin. Further mutational analysis of the cells detected the presence of an EWS-WT1 fusion transcript consistent with a diagnosis of desmoplastic small round cell tumor. </div> Note 2: SLL and CLL <div style="border:2px solid #747474; background-color: #e3e3e3; margin: 5px; padding: 10px"> A 72-year-old man with a history of diabetes mellitus, hypertension, and hypercholesterolemia self-palpated a left submandibular lump in 2012. Complete blood count (CBC) in his internist’s office showed solitary leukocytosis (white count 22) with predominant lymphocytes for which he was referred to a hematologist. Peripheral blood flow cytometry on 04/11/12 confirmed chronic lymphocytic leukemia (CLL)/small lymphocytic lymphoma (SLL): abnormal cell population comprising 63% of CD45 positive leukocytes, co-expressing CD5 and CD23 in CD19-positive B cells. CD38 was negative but other prognostic markers were not assessed at that time. The patient was observed regularly for the next 3 years and his white count trend was as follows: 22.8 (4/2012) --> 28.5 (07/2012) --> 32.2 (12/2012) --> 36.5 (02/2013) --> 42 (09/2013) --> 44.9 (01/2014) --> 75.8 (2/2015). His other counts stayed normal until early 2015 when he also developed anemia (hemoglobin [HGB] 10.9) although platelets remained normal at 215. He had been noticing enlargement of his cervical, submandibular, supraclavicular, and axillary lymphadenopathy for several months since 2014 and a positron emission tomography (PET)/computed tomography (CT) scan done in 12/2014 had shown extensive diffuse lymphadenopathy within the neck, chest, abdomen, and pelvis. 
Maximum standardized uptake value (SUV max) was similar to low baseline activity within the vasculature of the neck and chest. In the abdomen and pelvis, however, there was mild to moderately hypermetabolic adenopathy measuring up to SUV of 4. The largest right neck nodes measured up to 2.3 x 3 cm and left neck nodes measured up to 2.3 x 1.5 cm. His right axillary lymphadenopathy measured up to 5.5 x 2.6 cm and on the left measured up to 4.8 x 3.4 cm. Lymph nodes on the right abdomen and pelvis measured up to 6.7 cm and seemed to have some mass effect with compression on the urinary bladder without symptoms. He underwent a bone marrow biopsy on 02/03/15, which revealed hypercellular marrow (60%) with involvement by CLL (30%); flow cytometry showed CD38 and ZAP-70 positivity; fluorescence in situ hybridization (FISH) analysis showed 13q deletion/monosomy 13; IgVH was unmutated; karyotype was 46XY. </div> Note 3: CNS lymphoma <div style="border:2px solid #747474; background-color: #e3e3e3; margin: 5px; padding: 10px"> A 56-year-old woman began to experience vertigo, headaches, and frequent falls. A computed tomography (CT) scan of the brain revealed the presence of a 1.6 x 1.6 x 2.1 cm mass involving the fourth ventricle (Figure 14.1). A gadolinium-enhanced magnetic resonance imaging (MRI) scan confirmed the presence of the mass, and a stereotactic biopsy was performed that demonstrated a primary central nervous system lymphoma (PCNSL) with a diffuse large B-cell histology. Complete blood count (CBC), lactate dehydrogenase (LDH), and beta-2-microglobulin were normal. Systemic staging with a positron emission tomography (PET)/CT scan and bone marrow biopsy showed no evidence of lymphomatous involvement outside the CNS. An eye exam and lumbar puncture showed no evidence of either ocular or leptomeningeal involvement. </div> Note 4: Cutaneous T-cell lymphoma <div style="border:2px solid #747474; background-color: #e3e3e3; margin: 5px; padding: 10px"> An 83-year-old female presented with a progressing pruritic cutaneous rash that started 8 years ago. On clinical exam there were numerous coalescing, infiltrated, scaly, and partially crusted erythematous plaques distributed over her trunk and extremities and a large fungating ulcerated nodule on her right thigh covering 75% of her total body surface area (Figure 10.1). Lymphoma associated alopecia and a left axillary lymphadenopathy were also noted. For the past 3–4 months she reported fatigue, severe pruritus, night sweats, 20 pounds of weight loss, and loss of appetite. </div>_____no_output_____Let's create a dataset with all four case studies_____no_output_____ <code> notes = [] notes.append("""A 35-year-old African-American man was referred to our urology clinic by his primary care physician for consultation about a large left scrotal mass. The patient reported a 3-month history of left scrotal swelling that had progressively increased in size and was associated with mild left scrotal pain. He also had complaints of mild constipation, with hard stools every other day. He denied any urinary complaints. On physical examination, a hard paratesticular mass could be palpated in the left hemiscrotum extending into the left groin, separate from the left testicle, and measuring approximately 10 × 7 cm in size. A hard, lower abdominal mass in the suprapubic region could also be palpated in the midline. The patient was admitted urgently to the hospital for further evaluation with cross-sectional imaging and blood work. 
Laboratory results, including results of a complete blood cell count with differential, liver function tests, coagulation panel, and basic chemistry panel, were unremarkable except for a serum creatinine level of 2.6 mg/dL. Typical markers for a testicular germ cell tumor were within normal limits: the beta–human chorionic gonadotropin level was less than 1 mIU/mL and the alpha fetoprotein level was less than 2.8 ng/mL. A CT scan of the chest, abdomen, and pelvis with intravenous contrast was obtained, and it showed large multifocal intra-abdominal, retroperitoneal, and pelvic masses (Figure 1). On cross-sectional imaging, a 7.8-cm para-aortic mass was visualized compressing the proximal portion of the left ureter, creating moderate left hydroureteronephrosis. Additionally, three separate pelvic masses were present in the retrovesical space, each measuring approximately 5 to 10 cm at their largest diameter; these displaced the bladder anteriorly and the rectum posteriorly. The patient underwent ultrasound-guided needle biopsy of one of the pelvic masses on hospital day 3 for definitive diagnosis. Microscopic examination of the tissue by our pathologist revealed cellular islands with oval to elongated, irregular, and hyperchromatic nuclei; scant cytoplasm; and invading fibrous tissue—as well as three mitoses per high-powered field (Figure 2). Immunohistochemical staining demonstrated strong positivity for cytokeratin AE1/AE3, vimentin, and desmin. Further mutational analysis of the cells detected the presence of an EWS-WT1 fusion transcript consistent with a diagnosis of desmoplastic small round cell tumor.""") notes.append("""A 72-year-old man with a history of diabetes mellitus, hypertension, and hypercholesterolemia self-palpated a left submandibular lump in 2012. Complete blood count (CBC) in his internist’s office showed solitary leukocytosis (white count 22) with predominant lymphocytes for which he was referred to a hematologist. Peripheral blood flow cytometry on 04/11/12 confirmed chronic lymphocytic leukemia (CLL)/small lymphocytic lymphoma (SLL): abnormal cell population comprising 63% of CD45 positive leukocytes, co-expressing CD5 and CD23 in CD19-positive B cells. CD38 was negative but other prognostic markers were not assessed at that time. The patient was observed regularly for the next 3 years and his white count trend was as follows: 22.8 (4/2012) --> 28.5 (07/2012) --> 32.2 (12/2012) --> 36.5 (02/2013) --> 42 (09/2013) --> 44.9 (01/2014) --> 75.8 (2/2015). His other counts stayed normal until early 2015 when he also developed anemia (hemoglobin [HGB] 10.9) although platelets remained normal at 215. He had been noticing enlargement of his cervical, submandibular, supraclavicular, and axillary lymphadenopathy for several months since 2014 and a positron emission tomography (PET)/computed tomography (CT) scan done in 12/2014 had shown extensive diffuse lymphadenopathy within the neck, chest, abdomen, and pelvis. Maximum standardized uptake value (SUV max) was similar to low baseline activity within the vasculature of the neck and chest. In the abdomen and pelvis, however, there was mild to moderately hypermetabolic adenopathy measuring up to SUV of 4. The largest right neck nodes measured up to 2.3 x 3 cm and left neck nodes measured up to 2.3 x 1.5 cm. His right axillary lymphadenopathy measured up to 5.5 x 2.6 cm and on the left measured up to 4.8 x 3.4 cm. 
Lymph nodes on the right abdomen and pelvis measured up to 6.7 cm and seemed to have some mass effect with compression on the urinary bladder without symptoms. He underwent a bone marrow biopsy on 02/03/15, which revealed hypercellular marrow (60%) with involvement by CLL (30%); flow cytometry showed CD38 and ZAP-70 positivity; fluorescence in situ hybridization (FISH) analysis showed 13q deletion/monosomy 13; IgVH was unmutated; karyotype was 46XY.""") notes.append("A 56-year-old woman began to experience vertigo, headaches, and frequent falls. A computed tomography (CT) scan of the brain revealed the presence of a 1.6 x 1.6 x 2.1 cm mass involving the fourth ventricle (Figure 14.1). A gadolinium-enhanced magnetic resonance imaging (MRI) scan confirmed the presence of the mass, and a stereotactic biopsy was performed that demonstrated a primary central nervous system lymphoma (PCNSL) with a diffuse large B-cell histology. Complete blood count (CBC), lactate dehydrogenase (LDH), and beta-2-microglobulin were normal. Systemic staging with a positron emission tomography (PET)/CT scan and bone marrow biopsy showed no evidence of lymphomatous involvement outside the CNS. An eye exam and lumbar puncture showed no evidence of either ocular or leptomeningeal involvement.") notes.append("An 83-year-old female presented with a progressing pruritic cutaneous rash that started 8 years ago. On clinical exam there were numerous coalescing, infiltrated, scaly, and partially crusted erythematous plaques distributed over her trunk and extremities and a large fungating ulcerated nodule on her right thigh covering 75% of her total body surface area (Figure 10.1). Lymphoma associated alopecia and a left axillary lymphadenopathy were also noted. For the past 3–4 months she reported fatigue, severe pruritus, night sweats, 20 pounds of weight loss, and loss of appetite.") # Notes column names docid_col = "doc_id" note_col = "text_feed" data = spark.createDataFrame([(i,n.lower()) for i,n in enumerate(notes)]).toDF(docid_col, note_col)_____no_output_____ </code> And let's build a SparkNLP pipeline with the following stages: - DocumentAssembler: Entry annotator for our pipelines; it creates the data structure for the Annotation Framework - SentenceDetector: Annotator to pragmatically separate complete sentences inside each document - Tokenizer: Annotator to separate sentences in tokens (generally words) - WordEmbeddings: Vectorization of word tokens, in this case using word embeddings trained from PubMed, ICD10 and other clinical resources. - EntityResolver: Annotator that performs search for the KNNs, in this case trained from ICDO Histology Behavior._____no_output_____In order to find cancer related chunks, we are going to use a pretrained Search Trie wrapped up in our TextMatcher Annotator; and to identify treatments/procedures we are going to use our good old NER. 
- NerDLModel: TensorFlow based Named Entity Recognizer, trained to extract PROBLEMS, TREATMENTS and TESTS - NerConverter: Chunk builder out of tokens tagged by the Ner Model_____no_output_____ <code> docAssembler = DocumentAssembler().setInputCol(note_col).setOutputCol("document") sentenceDetector = SentenceDetector().setInputCols("document").setOutputCol("sentence") tokenizer = Tokenizer().setInputCols("sentence").setOutputCol("token") #Working on adjusting WordEmbeddingsModel to work with the subset of matched tokens word_embeddings = WordEmbeddingsModel.pretrained("embeddings_clinical", "en", "clinical/models")\ .setInputCols("sentence", "token")\ .setOutputCol("word_embeddings")embeddings_clinical download started this may take some time. Approximate size to download 1.6 GB [OK!] icdo_ner = NerDLModel.pretrained("ner_bionlp", "en", "clinical/models")\ .setInputCols("sentence", "token", "word_embeddings")\ .setOutputCol("icdo_ner") icdo_chunk = NerConverter().setInputCols("sentence","token","icdo_ner").setOutputCol("icdo_chunk").setWhiteList(["Cancer"]) icdo_chunk_embeddings = ChunkEmbeddings()\ .setInputCols("icdo_chunk", "word_embeddings")\ .setOutputCol("icdo_chunk_embeddings") icdo_chunk_resolver = ChunkEntityResolverModel.pretrained("chunkresolve_icdo_clinical", "en", "clinical/models")\ .setInputCols("token","icdo_chunk_embeddings")\ .setOutputCol("tm_icdo_code")ner_bionlp download started this may take some time. Approximate size to download 13.9 MB [OK!] chunkresolve_icdo_clinical download started this may take some time. Approximate size to download 8.2 MB [OK!] clinical_ner = NerDLModel.pretrained("ner_clinical", "en", "clinical/models") \ .setInputCols(["sentence", "token", "word_embeddings"]) \ .setOutputCol("ner") ner_converter = NerConverter() \ .setInputCols(["sentence", "token", "ner"]) \ .setOutputCol("ner_chunk") ner_chunk_tokenizer = ChunkTokenizer()\ .setInputCols("ner_chunk")\ .setOutputCol("ner_token") ner_chunk_embeddings = ChunkEmbeddings()\ .setInputCols("ner_chunk", "word_embeddings")\ .setOutputCol("ner_chunk_embeddings")ner_clinical download started this may take some time. Approximate size to download 13.8 MB [OK!] #SNOMED Resolution ner_snomed_resolver = \ ChunkEntityResolverModel.pretrained("chunkresolve_snomed_findings_clinical","en","clinical/models")\ .setInputCols("ner_token","ner_chunk_embeddings").setOutputCol("snomed_result")ensembleresolve_snomed_clinical download started this may take some time. Approximate size to download 592.9 MB [OK!] pipelineFull = Pipeline().setStages([ docAssembler, sentenceDetector, tokenizer, word_embeddings, clinical_ner, ner_converter, ner_chunk_embeddings, ner_chunk_tokenizer, ner_snomed_resolver, icdo_ner, icdo_chunk, icdo_chunk_embeddings, icdo_chunk_resolver ])_____no_output_____ </code> Let's train our Pipeline and make it ready to start transforming_____no_output_____ <code> pipelineModelFull = pipelineFull.fit(data)_____no_output_____output = pipelineModelFull.transform(data).cache()_____no_output_____ </code> ### EntityResolver: Trained on an augmented ICDO Dataset from JSL Data Market it provides histology codes resolution for the matched expressions. Other than providing the code in the "result" field it provides more metadata about the matching process: - target_text -> Text to resolve - resolved_text -> Best match text - confidence -> Relative confidence for the top match (distance to probability) - confidence_ratio -> Relative confidence for the top match. 
TopMatchConfidence / SecondMatchConfidence - alternative_codes -> List of other plausible codes (in the KNN neighborhood) - alternative_confidence_ratios -> Rest of confidence ratios - all_k_results -> All resolved codes for metrics calculation purposes - sentence -> SentenceId - chunk -> ChunkId_____no_output_____ <code> def quick_metadata_analysis(df, doc_field, chunk_field, code_fields): code_res_meta = ", ".join([f"{cf}.metadata" for cf in code_fields]) expression = f"explode(arrays_zip({chunk_field}.begin, {chunk_field}.end, {chunk_field}.result, {chunk_field}.metadata, "+code_res_meta+")) as a" top_n_rest = [(f"float(a['{i+4}'].confidence) as {(cf.split('_')[0])}_conf", f"arrays_zip(split(a['{i+4}'].all_k_results,':::'),split(a['{i+4}'].all_k_resolutions,':::')) as {cf.split('_')[0]+'_opts'}") for i, cf in enumerate(code_fields)] top_n_rest_args = [] for tr in top_n_rest: for t in tr: top_n_rest_args.append(t) return df.selectExpr(doc_field, expression) \ .orderBy(docid_col, F.expr("a['0']"), F.expr("a['1']"))\ .selectExpr(f"concat_ws('::',{doc_field},a['0'],a['1']) as coords", "a['2'] as chunk","a['3'].entity as entity", *top_n_rest_args)_____no_output_____icdo = \ quick_metadata_analysis(output, docid_col, "icdo_chunk",["tm_icdo_code"]).toPandas()_____no_output_____snomed = \ quick_metadata_analysis(output, docid_col, "ner_chunk",["snomed_result"]).toPandas()_____no_output_____icdo_____no_output_____snomed_____no_output_____ </code>
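Since `icdo` and `snomed` are now ordinary pandas DataFrames, a short follow-up sketch (our addition, not part of the original notebook) can rank the SNOMED resolutions by the resolver's confidence; the column names come from `quick_metadata_analysis` above, and the 0.5 cutoff is arbitrary and only for illustration.

```
# Keep the SNOMED resolutions the resolver is most confident about
# and show the strongest matches first (0.5 is an arbitrary cutoff).
confident = snomed[snomed["snomed_conf"] > 0.5]
print(confident[["coords", "chunk", "entity", "snomed_conf"]]
      .sort_values("snomed_conf", ascending=False)
      .head(10))
```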
{ "repository": "faisaladnanpeltops/spark-nlp-workshop", "path": "jupyter/enterprise/healthcare/EntityResolution_ICDO_SNOMED.ipynb", "matched_keywords": [ "epidemiology" ], "stars": null, "size": 48214, "hexsha": "4801aaa101dc715f0c64143662e5d4e5e039cda9", "max_line_length": 2270, "avg_line_length": 51.4557097118, "alphanum_fraction": 0.5635292654 }
# Notebook from Hadryan/course-content Path: tutorials/W1D4_GeneralizedLinearModels/W1D4_Tutorial2.ipynb <a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W1D4_GeneralizedLinearModels/W1D4_Tutorial2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>_____no_output_____# Tutorial 2: Classifiers and regularizers **Week 1, Day 4: Generalized Linear Models** **By Neuromatch Academy** __Content creators:__ Pierre-Etienne H. Fiquet, Ari Benjamin, Jakob Macke __Content reviewers:__ Davide Valeriani, Alish Dipani, Michael Waskom, Ella Batty _____no_output_____**Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** <p align='center'><img src='https://github.com/NeuromatchAcademy/widgets/blob/master/sponsors.png?raw=True'/></p>_____no_output_____# Tutorial Objectives *Estimated timing of tutorial: 1 hour, 35 minutes* This is part 2 of a 2-part series about Generalized Linear Models (GLMs), which are a fundamental framework for supervised learning. In part 1, we learned about and implemented GLMs. In this tutorial, we’ll implement logistic regression, a special case of GLMs used to model binary outcomes. Oftentimes the variable you would like to predict takes only one of two possible values. Left or right? Awake or asleep? Car or bus? In this tutorial, we will decode a mouse's left/right decisions from spike train data. Our objectives are to: 1. Learn about logistic regression, how it is derived within the GLM theory, and how it is implemented in scikit-learn 2. Apply logistic regression to decode choies from neural responses 3. Learn about regularization, including the different approaches and the influence of hyperparameters --- We would like to acknowledge [Steinmetz _et al._ (2019)](https://www.nature.com/articles/s41586-019-1787-x) for sharing their data, a subset of which is used here. _____no_output_____ <code> # @title Tutorial slides # @markdown These are the slides for the videos in all tutorials today from IPython.display import IFrame IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/upyjz/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)_____no_output_____ </code> # Setup _____no_output_____ <code> # Imports import numpy as np import matplotlib.pyplot as plt from sklearn.linear_model import LogisticRegression from sklearn.model_selection import cross_val_score_____no_output_____#@title Figure settings import ipywidgets as widgets %matplotlib inline %config InlineBackend.figure_format = 'retina' plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")_____no_output_____# @title Plotting Functions def plot_weights(models, sharey=True): """Draw a stem plot of weights for each model in models dict.""" n = len(models) f = plt.figure(figsize=(10, 2.5 * n)) axs = f.subplots(n, sharex=True, sharey=sharey) axs = np.atleast_1d(axs) for ax, (title, model) in zip(axs, models.items()): ax.margins(x=.02) stem = ax.stem(model.coef_.squeeze(), use_line_collection=True) stem[0].set_marker(".") stem[0].set_color(".2") stem[1].set_linewidths(.5) stem[1].set_color(".2") stem[2].set_visible(False) ax.axhline(0, color="C3", lw=3) ax.set(ylabel="Weight", title=title) ax.set(xlabel="Neuron (a.k.a. feature)") f.tight_layout() def plot_function(f, name, var, points=(-10, 10)): """Evaluate f() on linear space between points and plot. 
Args: f (callable): function that maps scalar -> scalar name (string): Function name for axis labels var (string): Variable name for axis labels. points (tuple): Args for np.linspace to create eval grid. """ x = np.linspace(*points) ax = plt.figure().subplots() ax.plot(x, f(x)) ax.set( xlabel=f'${var}$', ylabel=f'${name}({var})$' ) def plot_model_selection(C_values, accuracies): """Plot the accuracy curve over log-spaced C values.""" ax = plt.figure().subplots() ax.set_xscale("log") ax.plot(C_values, accuracies, marker="o") best_C = C_values[np.argmax(accuracies)] ax.set( xticks=C_values, xlabel="$C$", ylabel="Cross-validated accuracy", title=f"Best C: {best_C:1g} ({np.max(accuracies):.2%})", ) def plot_non_zero_coefs(C_values, non_zero_l1, n_voxels): """Plot the accuracy curve over log-spaced C values.""" ax = plt.figure().subplots() ax.set_xscale("log") ax.plot(C_values, non_zero_l1, marker="o") ax.set( xticks=C_values, xlabel="$C$", ylabel="Number of non-zero coefficients", ) ax.axhline(n_voxels, color=".1", linestyle=":") ax.annotate("Total\n# Neurons", (C_values[0], n_voxels * .98), va="top")_____no_output_____#@title Data retrieval and loading import os import requests import hashlib url = "https://osf.io/r9gh8/download" fname = "W1D4_steinmetz_data.npz" expected_md5 = "d19716354fed0981267456b80db07ea8" if not os.path.isfile(fname): try: r = requests.get(url) except requests.ConnectionError: print("!!! Failed to download data !!!") else: if r.status_code != requests.codes.ok: print("!!! Failed to download data !!!") elif hashlib.md5(r.content).hexdigest() != expected_md5: print("!!! Data download appears corrupted !!!") else: with open(fname, "wb") as fid: fid.write(r.content) def load_steinmetz_data(data_fname=fname): with np.load(data_fname) as dobj: data = dict(**dobj) return data_____no_output_____ </code> --- #Section 1: Logistic regression_____no_output_____ <code> # @title Video 1: Logistic regression from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="BV1P54y1q7Qn", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="qfXFrUnLU0o", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out)_____no_output_____ </code> Logistic Regression is a binary classification model. It is a GLM with a *logistic* link function and a *Bernoulli* (i.e. coinflip) noise model. Like in the last notebook, logistic regression invokes a standard procedure: 1. Define a *model* of how inputs relate to outputs. 2. 
Adjust the parameters to maximize (log) probability of your data given your model ## Section 1.1: The logistic regression model *Estimated timing to here from start of tutorial: 8 min* <details> <summary> <font color='blue'>Click here for text recap of relevant part of video </font></summary> The fundamental input/output equation of logistic regression is: \begin{align} \hat{y} \equiv p(y=1|x,\theta) = \sigma(\theta^Tx) \end{align} Note that we interpret the output of logistic regression, $\hat{y}$, as the **probability that y = 1** given inputs $x$ and parameters $\theta$. Here $\sigma()$ is a "squashing" function called the **sigmoid function** or **logistic function**. Its output is in the range $0 \leq y \leq 1$. It looks like this: \begin{align} \sigma(z) = \frac{1}{1 + \textrm{exp}(-z)} \end{align} Recall that $z = \theta^T x$. The parameters decide whether $\theta^T x$ will be very negative, in which case $\sigma(\theta^T x)\approx 0$, or very positive, meaning $\sigma(\theta^T x)\approx 1$. _____no_output_____### Coding Exercise 1.1: Implement the sigmoid function _____no_output_____ <code> def sigmoid(z): """Return the logistic transform of z.""" ############################################################################## # TODO for students: Fill in the missing code (...) and remove the error raise NotImplementedError("Student exercise: implement the sigmoid function") ############################################################################## sigmoid = 1 / (1 + np.exp(-z)) return sigmoid # Visualize plot_function(sigmoid, "\sigma", "z", (-10, 10))_____no_output_____# to_remove solution def sigmoid(z): """Return the logistic transform of z.""" sigmoid = 1 / (1 + np.exp(-z)) return sigmoid # Visualize with plt.xkcd(): plot_function(sigmoid, "\sigma", "z", (-10, 10))_____no_output_____ </code> ## Section 1.2: Using scikit-learn *Estimated timing to here from start of tutorial: 13 min* Unlike the previous notebook, we're not going to write the code that implements all of the Logistic Regression model itself. Instead, we're going to use the implementation in [scikit-learn](https://scikit-learn.org/stable/), a very popular library for Machine Learning. The goal of this next section is to introduce `scikit-learn` classifiers and understand how to apply it to real neural data._____no_output_____--- # Section 2: Decoding neural data with logistic regression_____no_output_____## Section 2.1: Setting up the data *Estimated timing to here from start of tutorial: 15 min* In this notebook we'll use the Steinmetz dataset that you have seen previously. Recall that this dataset includes recordings of neurons as mice perform a decision task. Mice had the task of turning a wheel to indicate whether they perceived a Gabor stimulus to the left, to the right, or not at all. Neuropixel probes measured spikes across the cortex. Check out the following task schematic below from the BiorXiv preprint. _____no_output_____ <code> # @markdown Execute to see schematic import IPython IPython.display.Image("http://kordinglab.com/images/others/steinmetz-task.png")_____no_output_____ </code> Today we're going to **decode the decision from neural data** using Logistic Regression. We will only consider trials where the mouse chose "Left" or "Right" and ignore NoGo trials. 
### Data format In the hidden `Data retrieval and loading` cell, there is a function that loads the data: - `spikes`: an array of normalized spike rates with shape `(n_trials, n_neurons)` - `choices`: a vector of 0s and 1s, indicating the animal's behavioral response, with length `n_trials`._____no_output_____ <code> data = load_steinmetz_data() for key, val in data.items(): print(key, val.shape)_____no_output_____ </code> As with the GLMs you've seen in the previous tutorial (Linear and Poisson Regression), we will need two data structures: - an `X` matrix with shape `(n_samples, n_features)` - a `y` vector with length `n_samples`. In the previous notebook, `y` corresponded to the neural data, and `X` corresponded to something about the experiment. Here, we are going to invert those relationships. That's what makes this a *decoding* model: we are going to predict behavior (`y`) from the neural responses (`X`):_____no_output_____ <code> y = data["choices"] X = data["spikes"]_____no_output_____ </code> ## Section 2.2: Fitting the model *Estimated timing to here from start of tutorial: 25 min* Using a Logistic Regression model within `scikit-learn` is very simple. _____no_output_____ <code> # Define the model log_reg = LogisticRegression(penalty="none") # Fit it to data log_reg.fit(X, y)_____no_output_____ </code> There's two steps here: - We *initialized* the model with a hyperparameter, telling it what penalty to use (we'll focus on this in the second part of the notebook) - We *fit* the model by passing it the `X` and `y` objects. _____no_output_____## Section 2.3: Classifying the training data *Estimated timing to here from start of tutorial: 27 min* Fitting the model performs maximum likelihood optimization, learning a set of *feature weights*. We can use those learned weights to *classify* new data, or predict the labels for each sample:_____no_output_____ <code> y_pred = log_reg.predict(X)_____no_output_____ </code> ## Section 2.4: Evaluating the model *Estimated timing to here from start of tutorial: 30 min* Now we need to evaluate the model's predictions. We'll do that with an *accuracy* score. The accuracy of the classifier is the proportion of trials where the predicted label matches the true label. _____no_output_____### Coding Exercise 2.4: Classifier accuracy For the first exercise, implement a function to evaluate a classifier using the accuracy score. Use it to get the accuracy of the classifier on the *training* data._____no_output_____ <code> def compute_accuracy(X, y, model): """Compute accuracy of classifier predictions. Args: X (2D array): Data matrix y (1D array): Label vector model (sklearn estimator): Classifier with trained weights. Returns: accuracy (float): Proportion of correct predictions. """ ############################################################################# # TODO Complete the function, then remove the next line to test it raise NotImplementedError("Implement the compute_accuracy function") ############################################################################# y_pred = model.predict(X) accuracy = ... return accuracy # Compute train accurcy train_accuracy = compute_accuracy(X, y, log_reg) print(f"Accuracy on the training data: {train_accuracy:.2%}")_____no_output_____# to_remove solution def compute_accuracy(X, y, model): """Compute accuracy of classifier predictions. Args: X (2D array): Data matrix y (1D array): Label vector model (sklearn estimator): Classifier with trained weights. 
Returns: accuracy (float): Proportion of correct predictions. """ y_pred = model.predict(X) accuracy = (y == y_pred).mean() return accuracy # Compute train accurcy train_accuracy = compute_accuracy(X, y, log_reg) print(f"Accuracy on the training data: {train_accuracy:.2%}")_____no_output_____ </code> ## Section 2.5: Cross-validating the classifer *Estimated timing to here from start of tutorial: 40 min* Classification accuracy on the training data is 100%! That might sound impressive, but you should recall from yesterday the concept of *overfitting*: the classifier may have learned something idiosyncratic about the training data. If that's the case, it won't have really learned the underlying data->decision function, and thus won't generalize well to new data. To check this, we can evaluate the *cross-validated* accuracy. _____no_output_____ <code> # @markdown Execute to see schematic import IPython IPython.display.Image("http://kordinglab.com/images/others/justCV-01.png")_____no_output_____ </code> ### Cross-validating using `scikit-learn` helper functions Yesterday, we asked you to write your own functions for implementing cross-validation. In practice, this won't be necessary, because `scikit-learn` offers a number of [helpful functions](https://scikit-learn.org/stable/model_selection.html) that will do this for you. For example, you can cross-validate a classifer using `cross_val_score`. `cross_val_score` takes a `sklearn` model like `LogisticRegression`, as well as your `X` and `y` data. It then retrains your model on test/train splits of `X` and `y`, and returns the test accuracy on each of the test sets._____no_output_____ <code> accuracies = cross_val_score(LogisticRegression(penalty='none'), X, y, cv=8) # k=8 crossvalidation_____no_output_____#@title #@markdown Run to plot out these `k=8` accuracy scores. f, ax = plt.subplots(figsize=(8, 3)) ax.boxplot(accuracies, vert=False, widths=.7) ax.scatter(accuracies, np.ones(8)) ax.set( xlabel="Accuracy", yticks=[], title=f"Average test accuracy: {accuracies.mean():.2%}" ) ax.spines["left"].set_visible(False)_____no_output_____ </code> The lower cross-validated accuracy compared to the training accuracy (100%) suggests that the model is being *overfit*. Is this surprising? Think about the shape of the $X$ matrix:_____no_output_____ <code> X.shape_____no_output_____ </code> The model has almost three times as many features as samples. This is a situation where overfitting is very likely (almost guaranteed). **Link to neuroscience**: Neuro data commonly has more features than samples. Having more neurons than independent trials is one example. In fMRI data, there are commonly more measured voxels than independent trials. _____no_output_____### Why more features than samples leads to overfitting In brief, the variance of model estimation increases when there are more features than samples. That is, you would get a very different model every time you get new data and run `.fit()`. This is very related to the *bias/variance tradeoff* you learned about on day 1. Why does this happen? Here's a tiny example to get your intuition going. Imagine trying to find a best-fit line in 2D when you only have 1 datapoint. There are simply a infinite number of lines that pass through that point. This is the situation we find ourselves in with more features than samples. ### What we can do about it As you learned on day 1, you can decrease model variance if you don't mind increasing its bias. 
Here, we will increase bias by assuming that the correct parameters are all small. In our 2D example, this is like prefering the horizontal line to all others. This is one example of *regularization*. _____no_output_____----- #Section 3: Regularization *Estimated timing to here from start of tutorial: 50 min* _____no_output_____ <code> # @title Video 2: Regularization from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="BV1Tg4y1i773", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="b2IaUCZ91bo", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out)_____no_output_____ </code> <details> <summary> <font color='blue'>Click here for text recap of video </font></summary> Regularization forces a model to learn a set solutions you *a priori* believe to be more correct, which reduces overfitting because it doesn't have as much flexibility to fit idiosyncracies in the training data. This adds model bias, but it's a good bias because you know (maybe) that parameters should be small or mostly 0. In a GLM, a common form of regularization is to *shrink* the classifier weights. In a linear model, you can see its effect by plotting the weights. We've defined a helper function, `plot_weights`, that we'll use extensively in this section. </details> Here is what the weights look like for a Logistic Regression model with no regularization:_____no_output_____ <code> log_reg = LogisticRegression(penalty="none").fit(X, y) plot_weights({"No regularization": log_reg})_____no_output_____ </code> It's important to understand this plot. Each dot visualizes a value in our parameter vector $\theta$. (It's the same style of plot as the one showing $\theta$ in the video). Since each feature is the time-averaged response of a neuron, each dot shows how the model uses each neuron to estimate a decision. Note the scale of the y-axis. Some neurons have values of about $20$, whereas others scale to $-20$._____no_output_____## Section 3.1: $L_2$ regularization *Estimated timing to here from start of tutorial: 53 min* Regularization comes in different flavors. A very common one uses an $L_2$ or "ridge" penalty. This changes the objective function to \begin{align} -\log\mathcal{L}'(\theta | X, y)= -\log\mathcal{L}(\theta | X, y) +\frac\beta2\sum_i\theta_i^2, \end{align} where $\beta$ is a *hyperparameter* that sets the *strength* of the regularization. You can use regularization in `scikit-learn` by changing the `penalty`, and you can set the strength of the regularization with the `C` hyperparameter ($C = \frac{1}{\beta}$, so this sets the *inverse* regularization). 
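To make the objective concrete, here is a minimal sketch (our addition, not part of the original tutorial) that evaluates the $L_2$-penalized negative log-likelihood for the unregularized fit from above; `beta` is an arbitrary illustrative value, and the predicted probabilities are clipped to avoid taking the log of 0.

```
# Evaluate -log L(theta) + (beta / 2) * sum(theta_i ** 2) for the fitted model.
beta = 1.0                                   # illustrative strength (C = 1 / beta)
theta = log_reg.coef_.squeeze()              # fitted weights
p_hat = np.clip(log_reg.predict_proba(X)[:, 1], 1e-12, 1 - 1e-12)
neg_log_lik = -np.sum(y * np.log(p_hat) + (1 - y) * np.log(1 - p_hat))
penalized = neg_log_lik + 0.5 * beta * np.sum(theta ** 2)
print(f"-log L: {neg_log_lik:.2f}   with L2 penalty: {penalized:.2f}")
```

scikit-learn applies an equivalent penalty internally when `penalty="l2"` is set, so in practice you never compute this term by hand.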
Let's compare the unregularized classifier weights with the classifier weights when we use the default `C = 1`:_____no_output_____ <code> log_reg_l2 = LogisticRegression(penalty="l2", C=1).fit(X, y) # now show the two models models = { "No regularization": log_reg, "$L_2$ (C = 1)": log_reg_l2, } plot_weights(models)_____no_output_____ </code> Using the same scale for the two y axes, it's almost impossible to see the $L_2$ weights. Let's allow the y axis scales to adjust to each set of weights:_____no_output_____ <code> plot_weights(models, sharey=False)_____no_output_____ </code> Now you can see that the weights have the same basic pattern, but the regularized weights are an order-of-magnitude smaller. ### Interactive Demo 3.1: The effect of varying C on parameter size We can use this same approach to see how the weights depend on the *strength* of the regularization:_____no_output_____ <code> # @markdown Execute this cell to enable the widget! # Precompute the models so the widget is responsive log_C_steps = 1, 11, 1 penalized_models = {} for log_C in np.arange(*log_C_steps, dtype=int): m = LogisticRegression("l2", C=10 ** log_C, max_iter=5000) penalized_models[log_C] = m.fit(X, y) @widgets.interact def plot_observed(log_C = widgets.FloatSlider(value=1, min=1, max=10, step=1)): models = { "No regularization": log_reg, f"$L_2$ (C = $10^{log_C}$)": penalized_models[log_C] } plot_weights(models)_____no_output_____ </code> Recall from above that $C=\frac1\beta$ so larger `C` is less regularization. The top panel corresponds to $C=\infty$._____no_output_____## Section 3.2: $L_1$ regularization *Estimated timing to here from start of tutorial: 1 hr, 3 min* $L_2$ is not the only option for regularization. There is also the $L_1$, or "Lasso" penalty. This changes the objective function to \begin{align} -\log\mathcal{L}'(\theta | X, y)= -\log\mathcal{L}(\theta | X, y) +\frac\beta2\sum_i|\theta_i| \end{align} In practice, using the summed absolute values of the weights causes *sparsity*: instead of just getting smaller, some of the weights will get forced to $0$:_____no_output_____ <code> log_reg_l1 = LogisticRegression(penalty="l1", C=1, solver="saga", max_iter=5000) log_reg_l1.fit(X, y) models = { "$L_2$ (C = 1)": log_reg_l2, "$L_1$ (C = 1)": log_reg_l1, } plot_weights(models)_____no_output_____ </code> Note: You'll notice that we added two additional parameters: `solver="saga"` and `max_iter=5000`. The `LogisticRegression` class can use several different optimization algorithms ("solvers"), and not all of them support the $L_1$ penalty. At a certain point, the solver will give up if it hasn't found a minimum value. The `max_iter` parameter tells it to make more attempts; otherwise, we'd see an ugly warning about "convergence"._____no_output_____## Section 3.3: The key difference between $L_1$ and $L_2$ regularization: sparsity *Estimated timing to here from start of tutorial: 1 hr, 10 min* When should you use $L_1$ vs. $L_2$ regularization? Both penalties shrink parameters, and both will help reduce overfitting. However, the models they lead to are different. In particular, the $L_1$ penalty encourages *sparse* solutions in which most parameters are 0. Let's unpack the notion of sparsity. A "dense" vector has mostly nonzero elements: $\begin{bmatrix} 0.1 \\ -0.6\\-9.1\\0.07 \end{bmatrix}$. A "sparse" vector has mostly zero elements: $\begin{bmatrix} 0 \\ -0.7\\ 0\\0 \end{bmatrix}$. 
The same is true of matrices:_____no_output_____ <code> # @markdown Execute to plot a dense and a sparse matrix np.random.seed(50) n = 5 M = np.random.random((n, n)) M_sparse = np.random.choice([0,1], size=(n, n), p=[0.8, 0.2]) fig, axs = plt.subplots(1, 2, sharey=True, figsize=(10,5)) axs[0].imshow(M) axs[1].imshow(M_sparse) axs[0].axis('off') axs[1].axis('off') axs[0].set_title("A dense matrix", fontsize=15) axs[1].set_title("A sparse matrix", fontsize=15) text_kws = dict(ha="center", va="center") for i in range(n): for j in range(n): iter_parts = axs, [M, M_sparse], ["{:.1f}", "{:d}"] for ax, mat, fmt in zip(*iter_parts): val = mat[i, j] color = ".1" if val > .7 else "w" ax.text(j, i, fmt.format(val), c=color, **text_kws)_____no_output_____ </code> ### Coding Exercise 3.3: The effect of $L_1$ regularization on parameter sparsity Please complete the following function to fit a regularized `LogisticRegression` model and return **the number of coefficients in the parameter vector that are equal to 0**. Don't forget to check out the [scikit-learn documentation](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html). _____no_output_____ <code> def count_non_zero_coefs(X, y, C_values): """Fit models with different L1 penalty values and count non-zero coefficients. Args: X (2D array): Data matrix y (1D array): Label vector C_values (1D array): List of hyperparameter values Returns: non_zero_coefs (list): number of coefficients in each model that are nonzero """ ############################################################################# # TODO Complete the function and remove the error raise NotImplementedError("Implement the count_non_zero_coefs function") ############################################################################# non_zero_coefs = [] for C in C_values: # Initialize and fit the model # (Hint, you may need to set max_iter) model = ... ... # Get the coefs of the fit model (in sklearn, we can do this using model.coef_) coefs = ... # Count the number of non-zero elements in coefs non_zero = ... non_zero_coefs.append(non_zero) return non_zero_coefs # Use log-spaced values for C C_values = np.logspace(-4, 4, 5) # Count non zero coefficients non_zero_l1 = count_non_zero_coefs(X, y, C_values) # Visualize plot_non_zero_coefs(C_values, non_zero_l1, n_voxels=X.shape[1])_____no_output_____# to_remove solution def count_non_zero_coefs(X, y, C_values): """Fit models with different L1 penalty values and count non-zero coefficients. Args: X (2D array): Data matrix y (1D array): Label vector C_values (1D array): List of hyperparameter values Returns: non_zero_coefs (list): number of coefficients in each model that are nonzero """ non_zero_coefs = [] for C in C_values: # Initialize and fit the model # (Hint, you may need to set max_iter) model = LogisticRegression(penalty="l1", C=C, solver="saga", max_iter=5000) model.fit(X,y) # Get the coefs of the fit model (in sklearn, we can do this using model.coef_) coefs = model.coef_ # Count the number of non-zero elements in coefs non_zero = np.sum(coefs != 0) non_zero_coefs.append(non_zero) return non_zero_coefs # Use log-spaced values for C C_values = np.logspace(-4, 4, 5) # Count non zero coefficients non_zero_l1 = count_non_zero_coefs(X, y, C_values) # Visualize with plt.xkcd(): plot_non_zero_coefs(C_values, non_zero_l1, n_voxels=X.shape[1])_____no_output_____ </code> Smaller `C` (bigger $\beta$) leads to sparser solutions. 
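As a quick sanity check (a small sketch of ours, not part of the original exercise), you can count what fraction of the $L_1$ (C = 1) weights fit earlier are exactly zero:

```
# Fraction of coefficients that the L1 penalty drove exactly to 0.
l1_zero_frac = np.mean(log_reg_l1.coef_ == 0)
print(f"{l1_zero_frac:.1%} of the L1 (C = 1) weights are exactly 0")
```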
**Link to neuroscience**: When is it OK to assume that the parameter vector is sparse? Whenever it is true that most features don't affect the outcome. One use-case might be decoding low-level visual features from whole-brain fMRI: we may expect only voxels in V1 and thalamus should be used in the prediction. **WARNING**: be careful when interpreting $\theta$. Never interpret the nonzero coefficients as *evidence* that only those voxels/neurons/features carry information about the outcome. This is a product of our regularization scheme, and thus *our prior assumption that the solution is sparse*. Other regularization types or models may find very distributed relationships across the brain. Never use a model as evidence for a phenomena when that phenomena is encoded in the assumptions of the model. _____no_output_____## Section 3.4: Choosing the regularization penalty *Estimated timing to here from start of tutorial: 1 hr, 25 min* In the examples above, we just picked arbitrary numbers for the strength of regularization. How do you know what value of the hyperparameter to use? The answer is the same as when you want to know whether you have learned good parameter values: use cross-validation. The best hyperparameter will be the one that allows the model to generalize best to unseen data._____no_output_____### Coding Exercise 3.4: Model selection In the final exercise, we will use cross-validation to evaluate a set of models, each with a different $L_2$ penalty. Your `model_selection` function should have a for-loop that gets the mean cross-validated accuracy for each penalty value (use the `cross_val_score` function that we introduced above)._____no_output_____ <code> def model_selection(X, y, C_values): """Compute CV accuracy for each C value. Args: X (2D array): Data matrix y (1D array): Label vector C_values (1D array): Array of hyperparameter values Returns: accuracies (1D array): CV accuracy with each value of C """ ############################################################################# # TODO Complete the function and remove the error raise NotImplementedError("Implement the model_selection function") ############################################################################# accuracies = [] for C in C_values: # Initialize and fit the model # (Hint, you may need to set max_iter) model = ... # Get the accuracy for each test split using cross-validation accs = ... # Store the average test accuracy for this value of C accuracies.append(...) return accuracies # Use log-spaced values for C C_values = np.logspace(-4, 4, 9) # Compute accuracies accuracies = model_selection(X, y, C_values) # Visualize plot_model_selection(C_values, accuracies)_____no_output_____# to_remove solution def model_selection(X, y, C_values): """Compute CV accuracy for each C value. Args: X (2D array): Data matrix y (1D array): Label vector C_values (1D array): Array of hyperparameter values. Returns: accuracies (1D array): CV accuracy with each value of C. 
""" accuracies = [] for C in C_values: # Initialize and fit the model # (Hint, you may need to set max_iter) model = LogisticRegression(penalty="l2", C=C, max_iter=5000) # Get the accuracy for each test split using cross-validation accs = cross_val_score(model, X, y, cv=8) # Store the average test accuracy for this value of C accuracies.append(accs.mean()) return accuracies # Use log-spaced values for C C_values = np.logspace(-4, 4, 9) # Compute accuracies accuracies = model_selection(X, y, C_values) # Visualize with plt.xkcd(): plot_model_selection(C_values, accuracies)_____no_output_____ </code> This plot suggests that the right value of $C$ does matter — up to a point. Remember that C is the *inverse* regularization. The plot shows that models where the regularization was too strong (small C values) performed very poorly. For $C > 10^{-2}$, the differences are marginal, but the best performance was obtained with an intermediate value ($C \approx 10^1$)._____no_output_____--- # Summary *Estimated timing of tutorial: 1 hour, 35 minutes* In this notebook, we learned about Logistic Regression, a fundamental algorithm for *classification*. We applied the algorithm to a *neural decoding* problem: we tried to predict an animal's behavioral choice from its neural activity. We saw again how important it is to use *cross-validation* to evaluate complex models that are at risk for *overfitting*, and we learned how *regularization* can be used to fit models that generalize better. Finally, we learned about some of the different options for regularization, and we saw how cross-validation can be useful for *model selection*._____no_output_____--- # Notation \begin{align} x &\quad \text{input}\\ y &\quad \text{measurement, response}\\ \theta &\quad \text{parameter}\\ \sigma(z) &\quad \text{logistic function}\\ C &\quad \text{inverse regularization strength parameter}\\ \beta &\quad \text{regularization strength parameter}\\ \hat{y} &\quad \text{estimated output}\\ \mathcal{L}(\theta| y_i, x_i) &\quad \text{likelihood of that parameter } \theta \text{ producing response } y_i \text{ from input } x_i\\ L_1 &\quad \text{Lasso regularization}\\ L_2 &\quad \text{ridge regularization}\\ \end{align}_____no_output_____--- # Bonus _____no_output_____--- ## Bonus Section 1: The Logistic Regression model in full The fundamental input/output equation of logistic regression is: \begin{align} p(y_i = 1 |x_i, \theta) = \sigma(\theta^Tx_i) \end{align} **The logistic link function** You've seen $\theta^T x_i$ before, but the $\sigma$ is new. It's the *sigmoidal* or *logistic* link function that "squashes" $\theta^T x_i$ to keep it between $0$ and $1$: \begin{align} \sigma(z) = \frac{1}{1 + \textrm{exp}(-z)} \end{align} **The Bernoulli likelihood** You might have noticed that the output of the sigmoid, $\hat{y}$ is not a binary value (0 or 1), even though the true data $y$ is! Instead, we interpret the value of $\hat{y}$ as the *probability that y = 1*: \begin{align} \hat{y_i} \equiv p(y_i=1|x_i,\theta) = \frac{1}{{1 + \textrm{exp}(-\theta^Tx_i)}} \end{align} To get the likelihood of the parameters, we need to define *the probability of seeing $y$ given $\hat{y}$*. In logistic regression, we do this using the Bernoulli distribution: \begin{align} P(y_i\ |\ \hat{y}_i) = \hat{y}_i^{y_i}(1 - \hat{y}_i)^{(1 - y_i)} \end{align} So plugging in the regression model: \begin{align} P(y_i\ |\ \theta, x_i) = \sigma(\theta^Tx_i)^{y_i}\left(1 - \sigma(\theta^Tx_i)\right)^{(1 - y_i)}. 
\end{align} This expression effectively measures how good our parameters $\theta$ are. We can also write it as the likelihood of the parameters given the data: \begin{align} \mathcal{L}(\theta\ |\ y_i, x_i) = P(y_i\ |\ \theta, x_i), \end{align} and then use this as a target of optimization, considering all of the trials independently: \begin{align} \log\mathcal{L}(\theta | X, y) = \sum_{i=1}^Ny_i\log\left(\sigma(\theta^Tx_i)\right)\ +\ (1-y_i)\log\left(1 - \sigma(\theta^Tx_i)\right). \end{align}_____no_output_____--- ## Bonus Section 2: More detail about model selection In the final exercise, we used all of the data to choose the hyperparameters. That means we don't have any fresh data left over to evaluate the performance of the selected model. In practice, you would want to have two *nested* layers of cross-validation, where the final evaluation is performed on data that played no role in selecting or training the model. Indeed, the proper method for splitting your data to choose hyperparameters can get confusing. Here's a guide that the authors of this notebook developed while writing a tutorial on using machine learning for neural decoding (https://arxiv.org/abs/1708.00909). _____no_output_____ <code> # @markdown Execute to see schematic import IPython IPython.display.Image("http://kordinglab.com/images/others/CV-01.png")_____no_output_____ </code>
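As a rough illustration of that nested scheme (a sketch of ours, assuming the `X`, `y`, and imports defined earlier in this notebook, not code from the original tutorial), an inner `GridSearchCV` can choose `C` while an outer `cross_val_score` keeps untouched folds for the final evaluation:

```
# Nested cross-validation: the inner loop selects C, the outer loop
# scores the whole selection procedure on held-out data.
from sklearn.model_selection import GridSearchCV

param_grid = {"C": np.logspace(-4, 4, 9)}
inner = GridSearchCV(LogisticRegression(penalty="l2", max_iter=5000),
                     param_grid, cv=4)
outer_scores = cross_val_score(inner, X, y, cv=8)
print(f"Nested CV accuracy: {outer_scores.mean():.2%}")
```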
{ "repository": "Hadryan/course-content", "path": "tutorials/W1D4_GeneralizedLinearModels/W1D4_Tutorial2.ipynb", "matched_keywords": [ "neuroscience" ], "stars": 1, "size": 57710, "hexsha": "4803a1ea9882b8bfe99842d2c9f36671a94a89df", "max_line_length": 593, "avg_line_length": 36.31843927, "alphanum_fraction": 0.5868653613 }
# Notebook from michaelsilverstein/programming-workshops Path: source/workshops/05_visualization/files/workshop.ipynb # Introduction ## The Data Set In today's workshop, we will revisit the data set you worked with in the Machine Learning workshop. As a refresher: this data set is from the GSE53987 dataset on Bipolar disorder (BD) and major depressive disorder (MDD) and schizophrenia: Lanz TA, Joshi JJ, Reinhart V, Johnson K et al. STEP levels are unchanged in pre-frontal cortex and associative striatum in post-mortem human brain samples from subjects with schizophrenia, bipolar disorder and major depressive disorder. PLoS One 2015;10(3):e0121744. PMID: 25786133 This is a microarray data on platform GPL570 (HG-U133_Plus_2, Affymetrix Human Genome U133 Plus 2.0 Array) consisting of 54675 probes. The raw CEL files of the GEO series were downloaded, frozen-RMA normalized, and the probes have been converted to HUGO gene symbols using the annotate package averaging on genes. The sample clinical data (meta-data) was parsed from the series matrix file. You can download it **here**. In total there are 205 rows consisting of 19 individuals diagnosed with BPD, 19 with MDD, 19 schizophrenia and 19 controls. Each sample has gene expression from 3 tissues (post-mortem brain). There are a total of 13768 genes (numeric features) and 10 meta features and 1 ID (GEO sample accession): - Age - Race (W for white and B for black) - Gender (F for female and M for male) - Ph: pH of the brain tissue - Pmi: post mortal interval - Rin: RNA integrity number - Patient: Unique ID for each patient. Each patient has up to 3 tissue samples. The patient ID is written as disease followed by a number from 1 to 19 - Tissue: tissue the expression was obtained from. - Disease.state: class of disease the patient belongs to: bipolar, schizophrenia, depression or control. - source.name: combination of the tissue and disease.state ## Workshop Goals This workshop will walk you through an analysis of the GSE53987 microarray data set. This workshop has the following three tasks: 1. Visualize the demographics of the data set 2. Cluster gene expression data and appropriately visualize the cluster results 3. Compute differential gene expression and visualize the differential expression Each task has a __required__ section and a __bonus__ section. Focus on completing the three __required__ sections first, then if you have time at the end, revisit the __bonus__ sections. Finally, as this is your final workshop, we hope that you will this as an opportunity to integrate the different concepts that you have learned in previous workshops. ## Workshop Logistics As mentioned in the pre-workshop documentation, you can do this workshop either in a Jupyter Notebook, or in a python script. Please make sure you have set-up the appropriate environment for youself. This workshop will be completed using "paired-programming" and the "driver" will switch every 15 minutes. Also, we will be using the python plotting libraries matplotlib and seaborn. 
_____no_output_____## TASK 0: Import Libraries and Data - Download the data set (above) as a .csv file - Initialize your script by loading the following libraries._____no_output_____ <code> # Import Necessary Libraries import pandas as pd import numpy as np import seaborn as sns from sklearn import cluster, metrics, decomposition from matplotlib import pyplot as plt import itertools data = pd.read_csv('GSE53987_combined.csv', index_col=0) genes = data.columns[10:]_____no_output_____ </code> ## TASK 1: Visualize Dataset Demographics ### Required Workshop Task: ##### Use the skeleton code to write 3 plotting functions: 1. plot_distribution() - Returns a distribution plot object given a dataframe and one observation 2. plot_relational() - Returns a distribution plot object given a dataframe and (x,y) observations 3. plot_categorical() - Returns a categorical plot object given a dataframe and (x,y) observations ##### Use these functions to produce the following plots: 1. Histogram of patient ages 2. Histogram of gene expression for 1 gene 3. Scatter plot of gene expression for 1 gene by ages 4. Scatter plot of gene expression for 1 gene by disease state Your plots should satisfy the following critical components: - Axis titles - Figure title - Legend (if applicable) - Be readable ### Bonus Task: 1. Return to these functions and include functionality to customize color palettes, axis legends, etc. You can choose to define your own plotting "style" and keep that consistent for all of your plotting functions. 2. Faceting your plots. Modify your functions to take in a "facet" argument that when facet is an observation, the function will create a facet grid and facet on that observation. Read more about faceting here: Faceting generates multi-plot grids by __mapping a dataset onto multiple axes arrayed in a grid of rows and columns that correspond to levels of variables in the dataset.__ - In order to use facteting, your data __must be__ in a Pandas DataFrame and it must take the form of what Hadley Whickam calls “tidy” data. - In brief, that means your dataframe should be structured such that each column is a variable and each row is an observation. There are figure-level functions (e.g. relplot() or catplot()) that will create facet grids automatically and can be used in place of things like distplot() or scatterplot(). _____no_output_____ <code> # Function to Plot a Distribtion def plot_distribution(df, obs1, obs2=''): """ Create a distribution plot for at least one observation Arguments: df (pandas data frame): data frame containing at least 1 column of numerical values obs1 (string): observation to plot distribution on obs2 (string, optional) Returns: axes object """ fig, ax = plt.subplots() return ax # Function to Plot Relational (x,y) Plots def plot_relational(df, x, y, hue=None, kind=None): """ Create a plot for an x,y relationship (default = scatter plot) Optional functionality for additional observations. 
Arguments: df (pandas data frame): data frame containing at least 2 columns of numerical values x (string): observation for the independent variable y (string): observation for the dependent variable hue (string, optional): additional observation to color the plot on kind (string, optional): type of plot to create [scatter, line] Returns: axes object """ fig, ax = plt.subplots() return ax def plot_categorical(df, x, y, hue=None, kind=None): """ Create a plot for an x,y relationship where x is categorical (not numerical) Arguments: df (pandas data frame): data frame containing at least 2 columns of numerical values x (string): observation for the independent variable (categorical) y (string): observation for the dependent variable hue (string, optional): additional observation to color the plot on kind (string, optional): type of plot to create. Options should include at least: strip (default), box, and violin """ fig, ax = plt.subplot() return ax def main(): """ Generate the following plots: 1. Histogram of patient ages 2. Histogram of gene expression for 1 gene 3. Scatter plot of gene expression for 1 gene by ages 4. Scatter plot of gene expression for 1 gene by disease state """ _____no_output_____ </code> ## TASK 2: Differential Expression Analysis Differential expression analysis is a fancy way of saying, "We want to find which genes exhibit increased or decreased expression compared to a control group". Neat. Because the dataset we're working with is MicroArray data -- which is mostly normally distributed -- we'll be using a simple One-Way ANOVA. If, however, you were working with sequence data -- which follows a Negative Binomial distribution -- you would need more specialized tools. A helper function is provided below._____no_output_____ <code> def differential_expression(data, group_col, features, reference=None): """ Perform a one-way ANOVA across all provided features for a given grouping. Arguments --------- data : (pandas.DataFrame) DataFrame containing group information and feature values. group_col : (str) Column in `data` containing sample group labels. features : (list, numpy.ndarray): Columns in `data` to test for differential expression. Having them be gene names would make sense. :thinking: reference : (str, optional) Value in `group_col` to use as the reference group. Default is None, and the value will be chosen. Returns ------- pandas.DataFrame A DataFrame of differential expression results with columns for fold changes between groups, maximum fold change from reference, f values, p values, and adjusted p-values by Bonferroni correction. """ if group_col not in data.columns: raise ValueError("`group_col` {} not found in data".format(group_col)) if any([x not in data.columns for x in features]): raise ValueError("Not all provided features found in data.") if reference is None: reference = data[group_col].unique()[0] print("No reference group provided. 
Using {}".format(reference)) elif reference not in data[group_col].unique(): raise ValueError("Reference value {} not found in column {}.".format( reference, group_col)) by_group = data.groupby(group_col) reference_avg = by_group.get_group(reference).loc[:,features].mean() values = [] results = {} for each, index in by_group.groups.items(): values.append(data.loc[index, features]) if each != reference: key = "{}.FoldChange".format(each) results[key] = data.loc[index, features].mean()\ / reference_avg fold_change_cols = list(results.keys()) fvalues, pvalues = stats.f_oneway(*values) results['f.value'] = fvalues results['p.value'] = pvalues results['p.value.adj'] = pvalues * len(features) results_df = pd.DataFrame(results) def largest_deviation(x): i = np.where(abs(x) == max(abs(x)))[0][0] return x[i] results_df['Max.FoldChange'] = results_df[fold_change_cols].apply( lambda x: largest_deviation(x.values), axis=1) return results_df _____no_output_____# Here's some pre-subsetted data hippocampus = data[data["Tissue"] == "hippocampus"] pf_cortex = data[data["Tissue"] == "Pre-frontal cortex (BA46)"] as_striatum = data[data["Tissue"] == "Associative striatum"] # Here's how we can subset a dataset by two conditions. # You might find it useful :thinking: data[(data["Tissue"] == 'hippocampus') & (data['Disease.state'] == 'control')]_____no_output_____ </code> ### Task 2a: Volcano Plots Volcano plots are ways to showcase the number of differentially expressed genes found during high throughput sequencing analysis. Log fold changes are plotted along the x-axis, while p-values are plotted along the y-axis. Genes are marked significant if they exceed some absolute Log fold change theshold **as well** some p-value level for significance. This can be seen in the plot below. ![](https://galaxyproject.github.io/training-material/topics/transcriptomics/images/rna-seq-viz-with-volcanoplot/volcanoplot.png) Your first task will be to generate some Volcano plots: **Requirments** 1. Use the provided function to perform an ANOVA (analysis of variance) between control and experimental groups in each tissue. - Perform a separate analysis for each tissue. 2. Implement the skeleton function to create a volcano plot to visualize both the log fold change in expression values and the adjusted p-values from the ANOVA 3. Highlight significant genes with distinct colors_____no_output_____ <code> def volcano_plot(data, sig_col, fc_col, sig_thresh, fc_thresh): """ Generate a volcano plot to showcasing differentially expressed genes. Parameters ---------- data : (pandas.DataFrame) A data frame containing differential expression results sig_col : str Column in `data` with adjusted p-values. fc_col : str Column in `data` with fold changes. sig_thresh : str Threshold for statistical significance. fc_thresh """ return ax_____no_output_____ </code> ### Task 2b: Plot the Top 1000 Differentially Expressed Genes Clustered heatmaps are hugely popular for displaying differences in gene expression values. To reference such a plot, look back at the introductory material. Here we will be plotting the 1000 most differentially expressed genes for each of the analysis performed before. **Requirements** - Implement the skeleton function below - Z normalize gene values - Use a diverging and perceptually uniform colormap - Generate plots for each of the DE results above **Hint**: Look over all the options for [sns.clustermap()](https://seaborn.pydata.org/generated/seaborn.clustermap.html). 
It might make things easier._____no_output_____ <code> def heatmap(data, genes, group_col): """ Plot a clustered heatmap of expression values for the selected genes. Parameters ---------- data : pd.DataFrame A (sample x gene) data matrix containing gene expression values for each sample. genes : list, str List of genes to plot. group_col : str Column in `data` containing sample group labels. """ return ax_____no_output_____ </code> **Bonus** There's nothing denoting which samples belong to which experimental group. Fix it. *Bonus hint*: Look real close at the documentation._____no_output_____## TASK 3: Clustering Analysis You've seen clustering in the previous machine learning workshop. Some basic plots were generated for you, including plotting the clusters on the principal components. While we can certainly do more of that, we will also be introducing two new plots: elbow plots and silhouette plots. ### Elbow Plots Elbow plots are used to help diagnose the perennial question of K-means clustering: how do I choose K? To create the graph, you plot the number of clusters on the x-axis and some evaluation of "cluster goodness" on the y-axis. Looking at the name of the plot, you might guess that we're looking for an "elbow". This is the point in the graph when we start getting diminished returns in performance, and specifying more clusters may lead to over-clustering the data. An example plot is shown below. ![](https://upload.wikimedia.org/wikipedia/commons/c/cd/DataClustering_ElbowCriterion.JPG) You can see the K selected (K = 3) is right before diminishing returns start to kick in. Mathematically, this point is defined as the point at which curvature is maximized. That said, the inflection point is also a decent -- though more conservative -- estimate. However, we'll just stick to eye-balling it for this workshop. If you would like to know how to automatically find the elbow point, more information can be found [here](https://raghavan.usc.edu/papers/kneedle-simplex11.pdf) ### Task 3a: Implement a function that creates an elbow plot Skeleton code is provided below. The function expects a list of k-values and their associated scores. An optional "ax" parameter is also provided. This parameter should be an axes object and can be created by issuing the following command: ```ax = plt.subplot()``` While we won't need the parameter right now, we'll likely use it in the future. **Function Requirements** - Generate plot data by clustering the entire dataset on the first 50 principal components. Vary K values from 2 - 10. - While you've been supplied a helper function for clustering, you'll need to supply the principal components yourself. Refer to your machine learning workshop along with the scikit-learn [documentation](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html) - Plots each k and its associated value. - Plots lines connecting each data point. - Produces a plot with correctly labelled axes. **Hint:** Working with an axis object is similar to base matplotlib, except `plt.scatter()` might become something like `ax.scatter()`. #### Helper Function_____no_output_____ <code> def cluster_data(X, k): """ Cluster data using K-Means. Parameters ---------- X : (numpy.ndarray) Data matrix to cluster samples on. Should be (samples x features). k : int Number of clusters to find. Returns ------- tuple (numpy.ndarray, float) A tuple where the first value is the assigned cluster labels for each sample, and the second value is the score associated with the particular clustering.
""" model = cluster.KMeans(n_clusters=k).fit(X) score = model.score(X) return (model.labels_, score)_____no_output_____ </code> #### Task 2a Implementation_____no_output_____ <code> def elbow_plot(ks, scores, best=None, ax=None): """ Create a scatter plot to aid in choosing the number of clusters using K-means. Arguments --------- ks : (numpy.ndarray) Tested values for the number of clusters. scores: (numpy.ndarray) Cluster scores associated with each number K. ax: plt.Axes Object, optional """ if ax is None: ax = plt.subplot() return ax_____no_output_____ </code> Once you've created the base plotting function, you'll probably realize we have no indivation of where the elbow point is. Fix this by adding another optional parameter (`best`) to your function. The parameter `best` should be the K value that produces the elbow point. **Function Requirements** - Add an optional parameter `best` that if supplied denotes the elbow point with a vertical, dashed line. - If `best` is not supplied, the plot should still be produced but without denoting the elbow point. **Hint**: `plt.axvline` and `plt.axhline` can be used to produce vertical and horizontal lines, respectively. More information [here](https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.axvline.html) **Note**: You are not required to have the line end at the associated score value._____no_output_____ <code> def elbow_plot(ks, scores, best=None, ax=None): """ Create a scatter plot to aid in choosing the number of clusters using K-means. Arguments --------- ks : (numpy.ndarray) Tested values for the number of clusters. scores: (numpy.ndarray) Cluster scores associated with each number K. best: int, optional The best value for K. Determined by the K that falls at the elbow. If passed, a black dashed line will be plotted to indicate the best. Default is no line. ax: plt.Axes Object, optional """ if ax is None: fig, ax = plt.subplots() return ax_____no_output_____ </code> ### Silhouette Plots Silhouette plots are another way to visually diagnose cluster performance. They are created by finding the [silhouette coefficient](https://en.wikipedia.org/wiki/Silhouette_(clustering)) for each sample in the data, and plotting an area graph for each cluster. The silhouette coefficient measures how well-separated clusters are from each other. The value ranges from $[-1 , 1]$, where 1 indicates good separation, 0 indicates randomness, and -1 indicates mixing of clusters. An example is posted below. ![](https://scikit-plot.readthedocs.io/en/stable/_images/plot_silhouette.png) As you can see, each sample in each cluster has the area filled from some minimal point (usually 0 or the minimum score in the dataset) and clusters are separated to produce distinct [silhouettes](https://www.youtube.com/watch?v=-TcUvXzgwMY). ### Task 3b: Implement a function to plot silhouette coefficients Because the code for create a silhouette plot can be a little bit involved, we've created both a skeleton function with documentation, and provided the following pseudo-code: ``` - Calculate scores for each sample. - Get a set of unique sample labels. 
- Set a score minimum - Initialize variables y_lower, and y_step - y_lower is the lower bound on the x-axis for the first cluster's silhouette - y_step is the distance between cluster silhouettes - Initialize variable, breaks - breaks are the middle point of each cluster silhouette and will be used to position the axis label - Interate through each cluster label, for each cluster: - Calcaluate the variable y_upper by adding the number of samples - Fill the area between y_lower and y_upper using the silhoutte scores for each sample - Calculate middle point of y distance. Append the variable break. - Calculate new y_lower value - Label axes with appropriate names and tick marks - Create dashed line at the average silhouette score over all samples ``` **Hint**: you might find [ax.fill_betweenx()](https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.axes.Axes.fill_betweenx.html) and [ax.set_yticks()](https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.axes.Axes.set_yticks.html?highlight=set_yticks#matplotlib.axes.Axes.set_yticks)/ [ax.set_yticklabels()](https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.axes.Axes.set_yticklabels.html?highlight=set_yticklabels#matplotlib.axes.Axes.set_yticklabels) useful._____no_output_____ <code> def silhouette_plot(X, y, ax=None): """ Plot silhouette scores for all samples across clusters. Parameters ---------- X : numpy.ndarray Numerical data used to cluster the data. y : numpy.ndarray Cluster labels assigned to each sample. ax : matplotlib.Axes Axis object to plot scores onto. Default is None, and a new axis will be created. Returns ------- matplotlib.Axes """ if ax is None: ax = plt.subplot() scores = metrics.silhouette_samples(X, y) clusters = sorted(np.unique(y)) score_min = 0 y_lower, y_step = 5, 5 props = plt.rcParams['axes.prop_cycle'] colors = itertools.cycle(props.by_key()['color']) breaks = [] for each, color in zip(clusters, colors): # Aggregate the silhouette scores for samples, sort scores for # area filling return ax_____no_output_____ </code> ### Task 3C: Put it all together! **Requirements** - Create a function `cluster_and_plot` that will cluster a provided dataset for a range of k-values - The function should return a single figure with two subplots: - An elbow plot with the "best" K value distinguished - A silhouette plot associated with clustering determined by the provided K value. - Appropriate axes labels **Hint**: You will likely find [plt.subplots()](https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.subplots.html?highlight=subplots#matplotlib.pyplot.subplots) useful._____no_output_____ <code> def cluster_and_plot(X, best=3, kmax=10): """ Cluster samples using KMeans and display the results. Results are displayed in a (1 x 2) figure, where the first subplot is an elbow plot and the second subplot is a silhouette plot. Parameters ---------- X : (numpy.ndarray) A (sample x features) data matrix used to cluster samples. best : int, optional Final value of K to use for K-Means clustering. Default is 3. kmax : int, optional Maximum number of clusters to plot in the elbow plot. Default is 10. Returns ------- matplotlib.Figure Clustering results. """ fig, axes = plt.subplots(nrows=1, ncols=2) return fig_____no_output_____ </code>
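As a rough, editorial illustration (not part of the original workshop materials): once `elbow_plot()` and `silhouette_plot()` have been implemented, the inputs they expect can be produced with the `cluster_data()` helper above. The sketch below assumes `data` is the (sample x gene) expression table loaded earlier and that only its numeric columns are used for the PCA step; the names `numeric`, `pcs`, `ks`, and `scores` are introduced here purely for illustration._____no_output_____ <code> # Hypothetical driver code: a sketch, not the workshop solution.
from sklearn import decomposition

# Keep only numeric (expression) columns before projecting onto principal components.
numeric = data.select_dtypes(include='number')
pcs = decomposition.PCA(n_components=50).fit_transform(numeric.values)

# Score K-Means for K = 2..10 using the provided helper; cluster_data returns (labels, score).
ks = list(range(2, 11))
scores = [cluster_data(pcs, k)[1] for k in ks]

# These arrays are what elbow_plot(ks, scores, best=...) expects,
# and cluster_and_plot(pcs, best=3, kmax=10) wires both plots together._____no_output_____ </code>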
{ "repository": "michaelsilverstein/programming-workshops", "path": "source/workshops/05_visualization/files/workshop.ipynb", "matched_keywords": [ "RNA-seq" ], "stars": 8, "size": 52255, "hexsha": "4805d4f1e6a1193fd83eb71241d4c4f5931544a6", "max_line_length": 524, "avg_line_length": 43.2216708023, "alphanum_fraction": 0.5099990432 }
# Notebook from luglilab/SP018-scRNAseq-Pelosi Path: APE_POS.ipynb # APE_POS analysis_____no_output_____### 0) Library upload_____no_output_____ <code> import os import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline import sklearn as sk import scipy as sp import csv import scanpy as sc import seaborn as sb import copy import re from collections import Counter from igraph import * import warnings warnings.simplefilter(action='ignore', category=FutureWarning) os.environ['PYTHONHASHSEED'] = '0' import tensorflow as tf import keras import desc_____no_output_____import rpy2.rinterface_lib.callbacks import logging from rpy2.robjects import pandas2ri import anndata2ri_____no_output_____# Ignore R warning messages #Note: this can be commented out to get more verbose R output rpy2.rinterface_lib.callbacks.logger.setLevel(logging.ERROR) # Automatically convert rpy2 outputs to pandas dataframes pandas2ri.activate() anndata2ri.activate() %load_ext rpy2.ipython plt.rcParams['figure.figsize']=(8,8) #rescale figures sc.settings.verbosity = 3 #sc.set_figure_params(dpi=200, dpi_save=300) sc.logging.print_header()scanpy==1.6.0 anndata==0.7.4 umap==0.4.6 numpy==1.19.1 scipy==1.5.2 pandas==0.25.3 scikit-learn==0.23.2 statsmodels==0.12.0 python-igraph==0.8.2 louvain==0.6.1 leidenalg==0.8.2 %%R # Load libraries from correct lib Paths for my environment - ignore this! .libPaths(.libPaths()[c(2,1)]) # Load all the R libraries we will be using in the notebook library(scran) library(RColorBrewer) library(slingshot) library(monocle) library(gam) library(clusterExperiment) library(ggplot2) library(plyr) library(MAST)_____no_output_____ </code> ### 1) Import data_____no_output_____ <code> adata = sc.read_10x_mtx("/home/spuccio/homeserver2/SP018_Pelosi/Lib_APE-POS/outs/filtered_feature_bc_matrix",var_names='gene_symbols',make_unique=True,)--> This might be very slow. Consider passing `cache=True`, which enables much faster reading from a cache file. adata.shape_____no_output_____ </code> ### APE_POS has 1441 cells detected from CELL-Ranger _____no_output_____ <code> adata.layers['UMIs'] = adata.X.copy()_____no_output_____adata.var['MT'] = adata.var_names.str.startswith('MT-')_____no_output_____sc.pp.calculate_qc_metrics(adata,qc_vars=['MT'], percent_top=None, log1p=False, inplace=True)_____no_output_____adata = adata[:,adata.X.sum(axis=0) > 0]_____no_output_____nCountsPerGene = np.sum(adata.X, axis=0) nCellsPerGene = np.sum(adata.X>0, axis=0)_____no_output_____# Show info print("Number of counts (in the dataset units) per gene:", nCountsPerGene.min(), " - " ,nCountsPerGene.max()) print("Number of cells in which each gene is detected:", nCellsPerGene.min(), " - " ,nCellsPerGene.max())Number of counts (in the dataset units) per gene: 1.0 - 575486.0 Number of cells in which each gene is detected: 1 - 1434 </code> ### 2) Filtering low expressed genes_____no_output_____ <code> # simply compute the number of genes per cell (computers 'n_genes' column) sc.pp.filter_cells(adata, min_genes=0) # mito and genes/counts cuts mito_genes = adata.var_names.str.startswith('MT-') ribo_genes = adata.var_names.str.startswith(("RPS","RPL")) # for each cell compute fraction of counts in mito genes vs. 
all genes adata.obs['percent_mito'] = np.sum( adata[:, mito_genes].X, axis=1) / np.sum(adata.X, axis=1) # add the total counts per cell as observations-annotation to adata adata.obs['n_counts'] = adata.X.sum(axis=1) # adata.obs['percent_ribo'] = np.sum( adata[:, ribo_genes].X, axis=1) / np.sum(adata.X, axis=1)Trying to set attribute `.obs` of view, copying. adata.shape_____no_output_____import seaborn as sns fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(12, 4), dpi=150, sharey=True) x = adata.obs['n_genes'] x_lowerbound = 1500 x_upperbound = 2000 nbins=100 sns.distplot(x, ax=ax1, norm_hist=True, bins=nbins) sns.distplot(x, ax=ax2, norm_hist=True, bins=nbins) sns.distplot(x, ax=ax3, norm_hist=True, bins=nbins) ax2.set_xlim(0,x_lowerbound) ax3.set_xlim(x_upperbound, adata.obs['n_genes'].max() ) for ax in (ax1,ax2,ax3): ax.set_xlabel('') ax1.title.set_text('n_genes') ax2.title.set_text('n_genes, lower bound') ax3.title.set_text('n_genes, upper bound') fig.text(-0.01, 0.5, 'Frequency', ha='center', va='center', rotation='vertical', size='x-large') fig.text(0.5, 0.0, 'Genes expressed per cell', ha='center', va='center', size='x-large') fig.tight_layout()_____no_output_____fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(12, 4), dpi=150, sharey=True) x = adata.obs['percent_mito'] x_lowerbound = [0.0, 0.07 ] x_upperbound = [ 0.10, 0.3 ] nbins=100 sns.distplot(x, ax=ax1, norm_hist=True, bins=nbins) sns.distplot(x, ax=ax2, norm_hist=True, bins=int(nbins/(x_lowerbound[1]-x_lowerbound[0])) ) sns.distplot(x, ax=ax3, norm_hist=True, bins=int(nbins/(x_upperbound[1]-x_upperbound[0])) ) ax2.set_xlim(x_lowerbound[0], x_lowerbound[1]) ax3.set_xlim(x_upperbound[0], x_upperbound[1] ) for ax in (ax1,ax2,ax3): ax.set_xlabel('') ax1.title.set_text('percent_mito') ax2.title.set_text('percent_mito, lower bound') ax3.title.set_text('percent_mito, upper bound') fig.text(-0.01, 0.5, 'Frequency', ha='center', va='center', rotation='vertical', size='x-large') fig.text(0.5, 0.0, 'Mitochondrial read fraction per cell', ha='center', va='center', size='x-large') fig.tight_layout()_____no_output_____fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(12, 4), dpi=150, sharey=False) sns.distplot( adata.obs['n_genes'], ax=ax1, norm_hist=True, bins=100) sns.distplot( adata.obs['n_counts'], ax=ax2, norm_hist=True, bins=100) sns.distplot( adata.obs['percent_mito'], ax=ax3, norm_hist=True, bins=100) ax1.title.set_text('Number of genes expressed per cell') ax2.title.set_text('Counts per cell') ax3.title.set_text('Mitochondrial read fraction per cell') fig.text(-0.01, 0.5, 'Frequency', ha='center', va='center', rotation='vertical', size='x-large') fig.tight_layout() #fig.savefig('filtering_panel_prefilter.pdf', dpi=600, bbox_inches='tight')_____no_output_____# initial cuts sc.pp.filter_cells(adata, min_genes=200 ) sc.pp.filter_genes(adata, min_cells=3 )filtered out 59 cells that have less than 200 genes expressed filtered out 3321 genes that are detected in less than 3 cells malat1 = adata.var_names.str.startswith('MALAT1') # we need to redefine the mito_genes since they were first # calculated on the full object before removing low expressed genes. mito_genes = adata.var_names.str.startswith("MT-") remove = np.add(mito_genes, malat1) keep = np.invert(remove) adata = adata[:,keep] print(adata.n_obs, adata.n_vars) 1382 15680 malat1 = adata.var_names.str.startswith('MALAT1') # we need to redefine the mito_genes since they were first # calculated on the full object before removing low expressed genes. 
mito_genes = adata.var_names.str.startswith("MT") remove = np.add(mito_genes, malat1) keep = np.invert(remove) adata = adata[:,keep] print(adata.n_obs, adata.n_vars) 1382 15608 malat1 = adata.var_names.str.startswith('MALAT1') # we need to redefine the mito_genes since they were first # calculated on the full object before removing low expressed genes. mito_genes = adata.var_names.str.startswith("RB") remove = np.add(mito_genes, malat1) keep = np.invert(remove) adata = adata[:,keep] print(adata.n_obs, adata.n_vars) 1382 15539 malat1 = adata.var_names.str.startswith('MALAT1') # we need to redefine the mito_genes since they were first # calculated on the full object before removing low expressed genes. mito_genes = adata.var_names.str.startswith("RPS") remove = np.add(mito_genes, malat1) keep = np.invert(remove) adata = adata[:,keep] print(adata.n_obs, adata.n_vars)1382 15494 malat1 = adata.var_names.str.startswith('MALAT1') # we need to redefine the mito_genes since they were first # calculated on the full object before removing low expressed genes. mito_genes = adata.var_names.str.startswith("RPL") remove = np.add(mito_genes, malat1) keep = np.invert(remove) adata = adata[:,keep] print(adata.n_obs, adata.n_vars)1382 15440 </code> ### 3) Normalization and Logarithm transformation _____no_output_____ <code> sc.pp.normalize_per_cell(adata,counts_per_cell_after=1e4) sc.pp.log1p(adata)normalizing by total count per cell adata.raw=adata_____no_output_____ </code> ### 4) Selection of highly variable genes_____no_output_____ <code> plt.rcParams['figure.figsize'] = [12, 6] # Select and plot genes that have high dispersion (>0.5) and mean expression (>0.0125) # This selection is similar to "mean.var.plot" feature selection called with # the FindVariableFeatures function in Seurat # Note how the highly_variable_genes() function in scanpy's preprocessing (pp) library # has a matching function with its plotting (pl) library sc.pp.highly_variable_genes(adata,flavor='cell_ranger', n_top_genes=4000) print('\n','Number of highly variable genes: {:d}'.format(np.sum(adata.var['highly_variable'])))If you pass `n_top_genes`, all cutoffs are ignored. extracting highly variable genes finished (0:00:00) --> added 'highly_variable', boolean vector (adata.var) 'means', float vector (adata.var) 'dispersions', float vector (adata.var) 'dispersions_norm', float vector (adata.var) Number of highly variable genes: 3999 sc.pl.highly_variable_genes(adata)_____no_output_____ </code> ### 5) Scaling_____no_output_____ <code> desc.scale(adata, zero_center=True, max_value=6)... 
as `zero_center=True`, sparse input is densified and may lead to large memory consumption </code> ### 6) Visualization_____no_output_____ <code> # Run principle component analysis # Calculate the visualizations sc.pp.pca(adata, n_comps=50, use_highly_variable=True, svd_solver='arpack') sc.pp.neighbors(adata)computing PCA on highly variable genes with n_comps=50 finished (0:00:00) computing neighbors using 'X_pca' with n_pcs = 50 finished: added to `.uns['neighbors']` `.obsp['distances']`, distances for each pair of neighbors `.obsp['connectivities']`, weighted adjacency matrix (0:00:00) sc.tl.umap(adata)computing UMAP finished: added 'X_umap', UMAP coordinates (adata.obsm) (0:00:03) plt.rcParams['figure.figsize']=(4,4) #sc.pl.pca_scatter(adata, color='n_counts',s=20) sc.pl.umap(adata, color='n_counts',s=20)_____no_output_____adata=desc.train(adata, dims=[adata.shape[1],64,32], tol=0.005, n_neighbors=15, batch_size=256, louvain_resolution=[0.3,0.4,0.5,0.6,0.7,0.8],# not necessarily a list, you can only set one value, like, louvain_resolution=1.0 do_tsne=False, learning_rate=200, # the parameter of tsne use_GPU=False, num_Cores=1, #for reproducible, only use 1 cpu num_Cores_tsne=30, save_encoder_weights=False, save_encoder_step=3,# save_encoder_weights is False, this parameter is not used use_ae_weights=False, do_umap=True, verbose=False)_____no_output_____#data_clus = adata.obs[['louvain_r9_clusters','louvain_r8_clusters','louvain_r7_clusters','louvain_r6_clusters','louvain_r5_clusters','louvain_r4_clusters','louvain_r3_clusters','louvain_r1_clusters']]_____no_output_____data_clus = adata.obs[['desc_0.6','desc_0.3','desc_0.4','desc_0.5','desc_0.7','desc_0.8']]_____no_output_____data_clus.columns = ['louvain_r6_clusters','louvain_r3_clusters','louvain_r4_clusters','louvain_r5_clusters','louvain_r7_clusters','louvain_r8_clusters']_____no_output_____data_clus.to_csv("/home/spuccio/homeserver2/SP018_Pelosi/APE_pos_analysis/temp.csv",index=True,header=True)_____no_output_____data_clus = pd.read_csv("/home/spuccio/homeserver2/SP018_Pelosi/APE_pos_analysis/temp.csv",index_col=0,header=0)_____no_output_____# these are the defaults we want to set: default_units = 'in' # inch, to make it more easily comparable to matpplotlib default_res = 100 # dpi, same as default in matplotlib default_width = 10 default_height = 9 # try monkey-patching a function in rpy2, so we effectively get these # default settings for the width, height, and units arguments of the %R magic command import rpy2 old_setup_graphics = rpy2.ipython.rmagic.RMagics.setup_graphics def new_setup_graphics(self, args): if getattr(args, 'units') is not None: if args.units != default_units: # a different units argument was passed, do not apply defaults return old_setup_graphics(self, args) args.units = default_units if getattr(args, 'res') is None: args.res = default_res if getattr(args, 'width') is None: args.width = default_width if getattr(args, 'height') is None: args.height = default_height return old_setup_graphics(self, args) rpy2.ipython.rmagic.RMagics.setup_graphics = new_setup_graphics_____no_output_____%%R -i data_clus library(clustree) clustree(data_clus,prefix="louvain_r",suffix = "_clusters") _____no_output_____plt.rcParams['figure.figsize']=(8,8) sc.pl.umap(adata, color='desc_0.5')_____no_output_____adata.obs['desc_0.5'].value_counts()_____no_output_____#sc.pp.scale(adata,max_value=10) sc.tl.rank_genes_groups(adata, 'desc_0.5', method='t-test', key_added = "t-test",use_raw=True)ranking genes finished: added to `.uns['t-test']` 
'names', sorted np.recarray to be indexed by group ids 'scores', sorted np.recarray to be indexed by group ids 'logfoldchanges', sorted np.recarray to be indexed by group ids 'pvals', sorted np.recarray to be indexed by group ids 'pvals_adj', sorted np.recarray to be indexed by group ids (0:00:00) pd.DataFrame(adata.uns['t-test']['names']).head(20)_____no_output_____sc.tl.dendrogram(adata,groupby="desc_0.5") sc.pl.correlation_matrix(adata,'desc_0.5') using 'X_pca' with n_pcs = 50 Storing dendrogram info using `.uns["dendrogram_['desc_0.5']"]` WARNING: dendrogram data not found (using key=dendrogram_desc_0.5). Running `sc.tl.dendrogram` with default parameters. For fine tuning it is recommended to run `sc.tl.dendrogram` independently. using 'X_pca' with n_pcs = 50 Storing dendrogram info using `.uns['dendrogram_desc_0.5']` result = adata.uns['t-test'] groups = result['names'].dtype.names pd.DataFrame( {group + '_' + key[:1]: result[key][group] for group in groups for key in ['names', 'pvals','logfoldchanges']}).head(300).to_csv("/home/spuccio/homeserver2/SP018_Pelosi/APE_pos_analysis/DEGS_Louvain_clustering.tsv",sep="\t",header=True,index=False)_____no_output_____adata.write('/home/spuccio/homeserver2/SP018_Pelosi/APE_pos_analysis/adata.h5ad')... storing 'feature_types' as categorical sc.pp.neighbors(adata)computing neighbors using 'X_pca' with n_pcs = 50 finished: added to `.uns['neighbors']` `.obsp['distances']`, distances for each pair of neighbors `.obsp['connectivities']`, weighted adjacency matrix (0:00:00) sc.external.exporting.spring_project(adata=adata,project_dir="/home/spuccio/homeserver2/SP018_Pelosi/APE_pos_analysis",embedding_method="X_umap",overwrite=True)WARNING:root:Overwriting the files in /home/spuccio/homeserver2/SP018_Pelosi/APE_pos_analysis. adata.uns['desc_0.5_colors']_____no_output_____for i in range(len(adata.uns['desc_0.5_colors'])): adata.obs['desc_0.5_colors'] print(i)0 1 2 3 4 5 6 7 8 adata.obs['desc_0.5'] _____no_output_____ </code>
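As a small, hedged addition (not in the original analysis): the per-cluster t-test table exported above can be filtered for candidate marker genes. The sketch below assumes the `DEGS_Louvain_clustering.tsv` file written earlier is readable, that the `desc_0.5` clusters are labelled '0', '1', …, and uses arbitrary 0.05 / 1.0 thresholds purely for illustration._____no_output_____ <code> # Illustrative only: filter the exported t-test results for one cluster.
import pandas as pd

degs = pd.read_csv("/home/spuccio/homeserver2/SP018_Pelosi/APE_pos_analysis/DEGS_Louvain_clustering.tsv", sep="\t")

cluster_id = "0"  # columns follow the "<group>_n", "<group>_p", "<group>_l" pattern used above
candidates = degs[(degs[cluster_id + "_p"] < 0.05) & (degs[cluster_id + "_l"].abs() > 1.0)]
print(candidates[cluster_id + "_n"].head(20))_____no_output_____ </code>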
{ "repository": "luglilab/SP018-scRNAseq-Pelosi", "path": "APE_POS.ipynb", "matched_keywords": [ "Monocle", "Scanpy", "Seurat" ], "stars": null, "size": 602345, "hexsha": "4805e5c636f8a0dd5cd8a603dd4cb91ee3f4825c", "max_line_length": 113500, "avg_line_length": 390.3726506805, "alphanum_fraction": 0.9311358109 }
# Notebook from genepattern/TCGAImporter-notebooks Path: TCGA_HTSeq_counts/OV/Ovarian Serous Cystadenocarcinoma (OV).ipynb # Ovarian Serous Cystadenocarcinoma (OV) [Jump to the urls to download the GCT and CLS files](#Downloads)_____no_output_____**Authors:** Alejandra Ramos, Marylu Villa and Edwin Juarez

**Contact info:** Email Edwin at [[email protected]](mailto:[email protected]) or post a question in http://www.genepattern.org/help_____no_output_____This notebook provides the steps to download all the OV samples from The Cancer Genome Atlas (TCGA) contained in the Genomic Data Commons (GDC) Data portal. These samples can be downloaded as a GCT file, and phenotype labels (primary tumor vs. normal samples) can be downloaded as a CLS file. These files are compatible with other GenePattern analyses._____no_output_____![image.png](attachment:image.png)_____no_output_____# Overview _____no_output_____Ovarian serous cystadenocarcinoma is an ovarian epithelial tumour at the malignant end of the spectrum of ovarian serous tumours. _____no_output_____<p><img alt="Image result for Ovarian serous cystadenocarcinoma" src="https://image1.slideserve.com/2123163/o-varian-serous-cystadenocarcinoma-n.jpg" style="width: 667px; height: 500px;" /></p> _____no_output_____# OV Statistics_____no_output_____<p>In 2010, 21,888 women were estimated to have been diagnosed with ovarian cancer in the United States and 13,850 women were estimated to have died of this disease.<sup>1</sup> Ovarian serous cystadenocarcinoma, the cancer being studied by TCGA, is a type of epithelial ovarian cancer and accounts for about 90 percent of all ovarian cancers.<sup>2</sup> Women aged 65 and older are most affected by ovarian cancer. As a result of the lack of effective screening tests, most women are diagnosed with advanced cancer.</p> _____no_output_____<p><img alt="Image result for ovarian serous cystadenocarcinoma statistics female male" src="https://www.cancerresearchuk.org/sites/default/files/cancer-stats/cases_crude_f_ovarian_i15/cases_crude_f_ovarian_i15.png" style="width: 809px; height: 500px;" /></p> https://www.cancerresearchuk.org/health-professional/cancer-statistics/statistics-by-cancer-type/ovarian-cancer/incidence_____no_output_____# Dataset's Demographic information_____no_output_____<p>TCGA contained 379 OV samples (374 primary tumor samples and 0 normal tissue samples; the remaining samples are ignored) from 376 people. Below is a summary of the demographic information represented in this dataset.
If you are interested in viewing the complete study, as well as the files on the GDC Data Portal, you can follow&nbsp;<a href="https://portal.gdc.cancer.gov/repository?facetTab=cases&amp;filters=%7B%22op%22%3A%22and%22%2C%22content%22%3A%5B%7B%22op%22%3A%22in%22%2C%22content%22%3A%7B%22field%22%3A%22cases.project.project_id%22%2C%22value%22%3A%5B%22TCGA-UVM%22%5D%7D%7D%2C%7B%22op%22%3A%22in%22%2C%22content%22%3A%7B%22field%22%3A%22files.analysis.workflow_type%22%2C%22value%22%3A%5B%22HTSeq%20-%20Counts%22%5D%7D%7D%2C%7B%22op%22%3A%22in%22%2C%22content%22%3A%7B%22field%22%3A%22files.experimental_strategy%22%2C%22value%22%3A%5B%22RNA-Seq%22%5D%7D%7D%5D%7D&amp;searchTableTab=cases" target="_blank">this link.(these data were gathered on July 17th, 2018)</a></p> _____no_output_____![image.png](attachment:image.png)_____no_output_____# Login to GenePattern _____no_output_____<div class="alert alert-info"> <h3 style="margin-top: 0;"> Instructions <i class="fa fa-info-circle"></i></h3> <ol> <li>Login to the *GenePattern Cloud* server.</li> </ol> </div>_____no_output_____ <code> # Requires GenePattern Notebook: pip install genepattern-notebook import gp import genepattern # Username and password removed for security reasons. genepattern.display(genepattern.session.register("https://gp-beta-ami.genepattern.org/gp", "", ""))_____no_output_____ </code> # Downloading RNA-Seq HTSeq Counts Using TCGAImporter _____no_output_____Use the TCGAImporter module to download RNA-Seq HTSeq counts from the GDC Data Portal using a Manifest file and a Metadata file_____no_output_____<p><strong>Input files</strong></p> <ul> <li><em>Manifest file</em>: a file containing the list of RNA-Seq samples to be downloaded.</li> <li><em>Metadata file</em>: a file containing information about the files present at the GDC Data Portal. Instructions for downloading the Manifest and Metadata files can be found here: <a href="https://github.com/genepattern/TCGAImporter/blob/master/how_to_download_a_manifest_and_metadata.pdf" target="_blank">https://github.com/genepattern/TCGAImporter/blob/master/how_to_download_a_manifest_and_metadata.pdf</a></li> </ul> <p><strong>Output files</strong></p> <ul> <li><em>OV_TCGA.gct</em> - This is a tab delimited file that contains the gene expression&nbsp;(HTSeq&nbsp;counts) from the samples listed on the Manifest file. For more info on GCT files, look at reference <a href="#References">1</a><em> </em></li> <li><em><em>OV_TCGA.cls</em> -</em> The CLS file defines phenotype labels (in this case Primary Tumor and Normal Sample) and associates each sample in the GCT file with a label. 
For more info on CLS files, look at reference <a href="#References">2</a></li> </ul> _____no_output_____<div class="alert alert-info"> <h3 style="margin-top: 0;"> Instructions <i class="fa fa-info-circle"></i></h3> <ol> <li>Load the manifest file in **Manifest** parameter.</li> <li>Load the metadata file in **Metadata** parameter.</li> <li>Click **run**.</li> </ol> </div>_____no_output_____<p><strong>Estimated run time for TCGAImporter</strong> : ~ 8 minutes</p> _____no_output_____ <code> tcgaimporter_task = gp.GPTask(genepattern.session.get(0), 'urn:lsid:broad.mit.edu:cancer.software.genepattern.module.analysis:00369') tcgaimporter_job_spec = tcgaimporter_task.make_job_spec() tcgaimporter_job_spec.set_parameter("manifest", "https://cloud.genepattern.org/gp/users/marylu257/tmp/run5355034370763214664.tmp/manifest_TCGA.txt") tcgaimporter_job_spec.set_parameter("metadata", "https://cloud.genepattern.org/gp/users/marylu257/tmp/run8546621941789077280.tmp/metadata_TCGA.json") tcgaimporter_job_spec.set_parameter("output_file_name", "OV_TCGA") tcgaimporter_job_spec.set_parameter("gct", "True") tcgaimporter_job_spec.set_parameter("translate_gene_id", "False") tcgaimporter_job_spec.set_parameter("cls", "True") genepattern.display(tcgaimporter_task) job35192 = gp.GPJob(genepattern.session.get(0), 35192) genepattern.display(job35192)_____no_output_____collapsedataset_task = gp.GPTask(genepattern.session.get(0), 'urn:lsid:broad.mit.edu:cancer.software.genepattern.module.analysis:00134') collapsedataset_job_spec = collapsedataset_task.make_job_spec() collapsedataset_job_spec.set_parameter("dataset.file", "https://cloud.genepattern.org/gp/jobResults/32330/OV_TCGA.gct") collapsedataset_job_spec.set_parameter("chip.platform", "ftp://ftp.broadinstitute.org/pub/gsea/annotations/ENSEMBL_human_gene.chip") collapsedataset_job_spec.set_parameter("collapse.mode", "Maximum") collapsedataset_job_spec.set_parameter("output.file.name", "<dataset.file_basename>.collapsed") genepattern.display(collapsedataset_task) job32415 = gp.GPJob(genepattern.session.get(0), 32415) genepattern.display(job32415)_____no_output_____ </code> # Downloads _____no_output_____<p>You can download the input and output files of TCGAImporter for this cancer type here:</p> <p><strong>Inputs:</strong></p> <ul> <li><a href="https://datasets.genepattern.org/data/TCGA_HTSeq_counts/KIRP/KIRP_MANIFEST.txt" target="_blank">https://datasets.genepattern.org/data/TCGA_HTSeq_counts/OV/OV_MANIFEST.txt</a></li> <li><a href="https://datasets.genepattern.org/data/TCGA_HTSeq_counts/KIRP/KIRP_METADATA.json" target="_blank">https://datasets.genepattern.org/data/TCGA_HTSeq_counts/OV/OV_METADATA.json</a></li> </ul> <p><strong>Outputs:</strong></p> <ul> <li><a href="https://datasets.genepattern.org/data/TCGA_HTSeq_counts/KIRP/KIRP_TCGA.gct" target="_blank">https://datasets.genepattern.org/data/TCGA_HTSeq_counts/OV/OV_TCGA.gct</a></li> <li><a href="https://datasets.genepattern.org/data/TCGA_HTSeq_counts/KIRP/KIRP_TCGA.cls" target="_blank">https://datasets.genepattern.org/data/TCGA_HTSeq_counts/OV/OV_TCGA.cls</a></li> </ul> _____no_output_____If you'd like to download similar files for other TCGA datasets, visit this link: - https://datasets.genepattern.org/?prefix=data/TCGA_HTSeq_counts/_____no_output_____# References_____no_output_____[1] http://software.broadinstitute.org/cancer/software/genepattern/file-formats-guide#GCT_____no_output_____[2] http://software.broadinstitute.org/cancer/software/genepattern/file-formats-guide#CLS_____no_output_____[3] 
https://radiopaedia.org/articles/ovarian-serous-cystadenocarcinoma

[4] https://www.google.com/search?biw=1366&bih=635&tbm=isch&sa=1&ei=BmE-W5DPMYvF0PEPnNK_gAg&q=ovarian+serous+cystadenocarcinoma+statistics+femile+male&oq=ovarian+serous+cystadenocarcinoma+statistics+femile+male&gs_l=img.3...2167.2167.0.2859.1.1.0.0.0.0.61.61.1.1.0....0...1c.1.64.img..0.0.0....0.IqXITYCwJLc#imgrc=HXuSxFbwFdbNmM:

[5] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2745605/

[6] https://cancergenome.nih.gov/cancersselected/ovarian_____no_output_____
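As a brief, hedged illustration (not part of the original notebook): the GCT format described in reference 1 is tab-delimited with a two-line header (a version line and a dimensions line), and the CLS format in reference 2 is a short three-line text file, so both outputs can be inspected with a few lines of Python. The sketch assumes `OV_TCGA.gct` and `OV_TCGA.cls` from the Downloads section have been saved to the working directory._____no_output_____ <code> # Quick sanity check of the downloaded outputs (illustrative sketch).
import pandas as pd

gct = pd.read_csv("OV_TCGA.gct", sep="\t", skiprows=2)   # skip the "#1.2" and dimensions lines
print(gct.shape)              # rows = genes; first two columns are Name and Description
print(list(gct.columns[:5]))  # remaining columns are TCGA sample identifiers

with open("OV_TCGA.cls") as cls:                          # CLS: counts, label names, per-sample labels
    lines = cls.read().splitlines()
print(lines[:2])_____no_output_____ </code>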
{ "repository": "genepattern/TCGAImporter-notebooks", "path": "TCGA_HTSeq_counts/OV/Ovarian Serous Cystadenocarcinoma (OV).ipynb", "matched_keywords": [ "RNA-seq" ], "stars": null, "size": 146272, "hexsha": "4805f494403b1cbfe54a512f9bfe670a93755372", "max_line_length": 94760, "avg_line_length": 283.4728682171, "alphanum_fraction": 0.922548403 }
# Notebook from chrispatsalis/bioinf575 Path: patsalis_hw3_refactoring.ipynb # Homework 3: Functional file parsing_____no_output_____ <code> local_files/MS_UMICH/bioinf_575/homework/homework3_refactoring/patsalis_hw3_refactoring.ipynb_____no_output_____ </code> --- ## Topic areas * Functions * I/O operations * String operations * Data structures_____no_output_____--- ## Background_____no_output_____[ClinVar][1] is a freely accessible, public archive of reports of the relationships among human variations and phenotypes, with supporting evidence. For this assignment, you will be working with a Variant Call Format (VCF) file. Below are the necessary details regarding this assignment, but consider looking [here][2] for a more detailed description of the file format. The purpose of the VCF format is to store gene sequence variations in a plain-text form. The data you will be working with (`clinvar_20190923_short.vcf`) contains several allele frequencies from different databases. The one to look for in this assignment is from ExAC database. More information about the database can be found [here][3]. ### The file format The beginning of every VCF file contains various sets of information: * Meta-information (details about the experiment or configuration) lines start with **`##`** * These lines are helpful in understanding specialized keys found in the `INFO` column. It is in these sections that one can find: * The description of the key * The data type of the values * The default value of the values * Header lines (column names) start with **`#`** From there on, each line is made up of tab (`\t`) separated values that make up eight (8) columns. Those columns are: 1. CHROM (chromosome) 2. POS (base pair position of the variant) 3. ID (identifier if applicable; `.` if not applicable/missing) 4. REF (reference base) 5. ALT (alternate base(s): comma (`,`) separated if applicable) 6. QUAL (Phred-scaled quality score; `.` if not applicable/missing) 7. FILTER (filter status; `.` if not applicable/missing) 8. INFO (any additional information about the variant) * Semi-colon (`;`) separated key-value pairs * Key-value pairs are equal sign (`=`) separated (key on the left, value on the right) * If a key has multiple values, the values are comma (`,`) separated #### Homework specific information The given data (`clinvar_20190923_short.vcf`) is a specialized form of the VCF file. As such, there are some additional details to consider when parsing for this assignment. You will be expected to consider two (2) special types of keys: 1. The `AF_EXAC` key that describes the allele frequencies from the ExAC database > `##INFO=<ID=AF_EXAC,Number=1,Type=Float,Description="allele frequencies from ExAC">` * The data included are `float`ing point numbers 2. The `CLNDN` key that gives all the names the given variant is associated with > `##INFO=<ID=CLNDN,Number=.,Type=String,Description="ClinVar's preferred disease name for the concept specified by disease identifiers in CLNDISDB">` * The data are`str`ings. **However**, if there are multiple diseases associated with a given variant, the diseases are pipe (`|`) separated (there are 178 instances of this case) --- [1]: https://www.ncbi.nlm.nih.gov/clinvar/intro/ [2]: https://samtools.github.io/hts-specs/VCFv4.3.pdf [3]: http://exac.broadinstitute.org_____no_output_____## Instructions_____no_output_____It is safe to assume that this homework will take a considerable amount of string operations to complete. 
But, it is important to note that this skill is _incredibly_ powerful in bioinformatics. Many dense, plain-text files exist in the bioinformatic domain, and mastering the ability to parse them is integral to many people's research. While the format we see here has a very clinical use case, other formats exist that you will likely encounter: CSV, TSV, SAM, GFF3, etc. Therefore, we <u>***STRONGLY ENCOURAGE***</u> you to: * Come to office hours * Schedule one-on-one meetings * Post to GitHub * Ask a friend Ensure you _truly_ understand the concepts therein. The concepts here are not esoteric, but very practical. Also, **ask early, ask often**. That said, on to the instructions for the assignment. ### Expectations You are expected to: 1. Move the `clinvar_20190923_short.vcf` to the same folder as this notebook 1. Write a function called `parse_line` that: 1. Takes a `str`ing as an argument 2. Extract the `AF_EXAC` data to determine the rarity of the variant 1. If the disease is rare: * `return` an a `list` of associated diseases 2. If the disease is not rare: * `return` an empty `list` 2. Write another function called `read_file` that: 1. Takes a `str`ing as an argument representing the file to be opened 2. Open the file 3. Read the file _line by line_. * **Note**: You are expected to do this one line at a time. The reasoning is that if the file is sufficiently large, you may not have the memory available to hold it. So, **do not** use `readlines()`! * If you do, your grade will be reduced 4. Passes the line to `parse_line` 5. Use a dictionary to count the results given by `parse_line` to keep a running tally (or count) of the number of times a specific disease is observed 6. `return` that dictionary 3. `print` the results from `read_file` when it is complete 4. Each function must have its own cell 5. The code to run all of your functions must have its own cell_____no_output_____--- ## Academic Honor Code In accordance with Rackham's Academic Misconduct Policy; upon submission of your assignment, you (the student) are indicating acceptance of the following statement: > “I pledge that this submission is solely my own work.” As such, the instructors reserve the right to process any and all source code therein contained within the submitted notebooks with source code plagiarism detection software. Any violations of the this agreement will result in swift, sure, and significant punishment._____no_output_____--- ## Due date This assignment is due **October 7th, 2019 by Noon (12 PM)**_____no_output_____--- ## Submission > `<uniqname>_hw3.ipynb` ### Example > `mdsherm_hw3.ipynb` We will *only* grade the most recent submission of your exam._____no_output_____--- ## Late Policy Each submission will receive a **10%** penalty per day (up to three days) that the assignment is late. After that, the student will receive a **0** for the exam._____no_output_____--- ## Good luck and code responsibly! ---_____no_output_____# Original Code_____no_output_____ <code> # Define your read_file function here """ This function will open a vcf file and read in contents line by line. This function will take a file location as a string and open the file. Each line of the string will be read in, one at a time, and passed to the function parse_line() if it does not start with a #. The results of that function are the CLNDN column of the line, assuming it fits the parameters. 
The result of parse_line() will be saved as the variable diseases and used to create a dictionary of the results(disease_dict), including the frequency of which the result is returned. Parameters: arg1 (string): The input argument of this function is a file location in string format. Returns: Dictionary: The result of this function is a dictionary, disease_dict, that displays the associated diseases and their frquency among rare gene variants. """ def read_file(string): disease_dict={} with open(string, 'r') as clinvar_file: while True: line=clinvar_file.readline() #will continue to read line by line until EOF if not line: break if not line.startswith("#"): #Ignores the ID lines at the beginning of file diseases=parse_line(line) for x in diseases: if x in disease_dict: disease_dict[x]=disease_dict[x]+1 else: disease_dict[x]=1 return disease_dict_____no_output_____""" This function will parse a vcf text file, identifying the gene variant rarity and creating a list of the associated diseases for each variant. This function will take in a vcf text line and parse the argument for the AF_EXAC value. If the gene variant has a value <= .0001, It creates a list of the associated clinical diseases(CLNDN column) and returns it. If the gene variant has an AF_EXAC value >.0001, it returns an empty list. Parameters: arg1 (string): This function takes in a vcf file line in the format of a string. Returns: List: This function will return a list of associated clinical diseases for rare gene variants, as defined by an AF_EXAC value <= .0001, and an empty list for common gene variants. """ def parse_line(string): CLNDN_disease_clean=[] line_lists=string.strip().split("\t") line_lists=line_lists[7].strip().split(";") for element in line_lists: if element.startswith("AF_EXAC") ==True: #index to AF_EXAC column and compare values AF_EXAC_list2=element.split("=") AF_EXACnum=float(AF_EXAC_list2[1]) if AF_EXACnum == 0.0001 or AF_EXACnum< 0.0001: for element2 in line_lists: if element2.startswith("CLNDN") == True: #Index to diseases to create list CLNDN_list = element2.split('=') CLNDN_disease=CLNDN_list[1] CLNDN_disease1=CLNDN_disease.split('|') for element1 in CLNDN_disease1: if element1 != 'not_provided' and element1 !='not_specified': CLNDN_disease_clean.append(element1) return CLNDN_disease_clean_____no_output_____%time read_file('clinvar_20190923_short.vcf')CPU times: user 2 µs, sys: 1e+03 ns, total: 3 µs Wall time: 5.96 µs # DO NOT MODIFY THIS CELL! 
# If your code works as expected, this cell should print the results from pprint import pprint pprint(read_file('clinvar_20190923_short.vcf')){'Cardiovascular_phenotype': 14, 'Cleft_palate': 1, 'Congenital_myasthenic_syndrome': 3, 'Developmental_regression': 1, 'Dystonia': 1, 'EEG_with_generalized_epileptiform_discharges': 1, 'Ehlers-Danlos_syndrome,_progeroid_type,_2': 2, 'Expressive_language_delay': 1, 'Failure_to_thrive': 1, 'Global_developmental_delay': 2, 'Growth_delay': 1, 'Hypothyroidism': 1, 'Idiopathic_generalized_epilepsy': 14, 'Immunodeficiency_16': 2, 'Immunodeficiency_38_with_basal_ganglia_calcification': 4, 'Inability_to_walk': 1, 'Inborn_genetic_diseases': 2, 'Infantile_axial_hypotonia': 1, 'Intellectual_disability': 1, 'Limb_hypertonia': 1, 'Marfanoid_habitus': 1, 'Mental_retardation,_autosomal_dominant_42': 1, 'Multifocal_epileptiform_discharges': 1, 'Muscular_hypotonia': 2, 'Myasthenic_syndrome,_congenital,_8': 85, 'Myelodysplastic_syndrome': 1, 'Neurodevelopmental_Disability': 2, 'Nystagmus': 1, 'Seizures': 2, 'Severe_Myopia': 1, 'Shprintzen-Goldberg_syndrome': 37, 'Spinocerebellar_ataxia_21': 1, 'Spondyloepimetaphyseal_dysplasia_with_joint_laxity': 1, 'Strabismus': 1, 'Upper_limb_hypertonia': 1, 'hypotonia': 2} </code> ---_____no_output_____# Refactored code #_____no_output_____ <code> import numpy as np import pandas as pd import itertools as it_____no_output_____def read_file_re(string): """ This function will save a vcf file as a dataframe and return a dictionary of diseases (and thier frequency) related to rare disease variants. Rare disease variants are defined as an AF_EXAC value <= 0.0001. This function will take a file (.vcf) location as a string and save it to a dataframe including only the INFO column. The function parse_file is applied to each row of the dataframe and the results are saved as CLNDN. The nonetype values are removed and a dictionary of rare disease variants including each of their frequency mentioned in the file is returned. Parameters: arg1 (string): The input argument of this function is a file (.vcf) location in string format. Returns: Dictionary: The result of this function is a dictionary, disease_dict, that displays the associated diseases and their frquency among rare gene variants. """ info=pd.read_csv(string, sep='\t', comment='#', names=['CHROM','POS','ID','REF','ALT','QUAL','FILTER','INFO']) info=info['INFO'].str.split(';') CLNDN=info.apply(parse_file_re) CLNDN=[disease for disease in CLNDN if disease] CLNDN=list(it.chain.from_iterable(CLNDN)) CLNDNs=set(CLNDN) disease_dict={x:CLNDN.count(x) for x in CLNDNs if not x.startswith('not_') == True} return disease_dict_____no_output_____def parse_file_re(dataframe): """ This function parses each list of the input series and returns a list of the diseases associated to the gene variant, ONLY if it is considered 'rare', defined by an AF_EXAC value <=0.0001. Otherwise a none type object is present. This function takes in a series of the info column from the input file (.vcf) and parses it for rare gene variants, as defined by an AF_EXAC value <= 0.0001. The function will return a list of the diseases associated with the rare gene variant or a none type object if none are known. Parameters: dataframe (seres): This function takes in a series consisting of the INFO column of a .vcf file split into lists. 
Returns: List: This function will return a list of associated clinical diseases for rare gene variants, as defined by an AF_EXAC value <= .0001, or a None value for non-rare variants or those without known diseases. """ disease_set=set() for element in dataframe: if element.startswith('AF_EXAC') == True: AF_EXAC=element.split('=') AF_EXACnum=float(AF_EXAC[1]) if AF_EXACnum ==0.0001 or AF_EXACnum < 0.0001: for elementC in dataframe: if elementC.startswith('CLNDN') ==True: CLNDN=elementC.split('=') CLNDN=CLNDN[1].split('|') return CLNDN_____no_output_____%time read_file_re('clinvar_20190923_short.vcf')CPU times: user 2 µs, sys: 1 µs, total: 3 µs Wall time: 6.2 µs </code>
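A side note from the editor (not part of the assignment): the tallying step in `read_file()` can also be expressed with `collections.Counter`, which removes the manual dictionary bookkeeping. This is only a sketch of an alternative; it reuses the `parse_line()` function defined above and should produce the same dictionary._____no_output_____ <code> # Alternative tally using collections.Counter (sketch, not the graded solution).
from collections import Counter

def read_file_counter(path):
    tally = Counter()
    with open(path) as vcf:
        for line in vcf:              # still one line at a time, as the assignment requires
            if not line.startswith("#"):
                tally.update(parse_line(line))
    return dict(tally)

# read_file_counter('clinvar_20190923_short.vcf') should match read_file()'s output._____no_output_____ </code>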
{ "repository": "chrispatsalis/bioinf575", "path": "patsalis_hw3_refactoring.ipynb", "matched_keywords": [ "SAMtools", "bioinformatics" ], "stars": null, "size": 23582, "hexsha": "480c94d9883ce9c78ea17818bf5b0a09ee989bfc", "max_line_length": 483, "avg_line_length": 37.6108452951, "alphanum_fraction": 0.5759477568 }
# Notebook from mariaeduardagimenes/Manual-Pratico-Deep-Learning Path: Adaline.ipynb
In the previous notebook, we learned about the Perceptron. We saw how it learns and how it can be used both for binary classification and for linear regression. In this notebook, we will look at an algorithm very similar to the Perceptron, known as the __Adaline__, which was proposed as an improvement over the original Perceptron algorithm. We will go through the similarities and differences between the two algorithms and implement the Adaline using Python and numpy. Finally, we will apply it to the same classification problems from the Perceptron notebook to really understand their differences. __The code for using the Adaline on regression problems is exactly the same as for the Perceptron__.

__Objectives__:
- Understand the differences between the Perceptron and Adaline algorithms.
- Implement the Adaline and its learning model in pure Python and Numpy.
- Use the Adaline for classification and regression._____no_output_____# Summary_____no_output_____[Introduction](#Introduction)
[Adaline Learning Rule](#Adaline-Learning-Rule)
[Classification](#Classification)
- [AND/OR Gate](#AND/OR-Gate)
- [Classification Exercise](#Classification-Exercise)_____no_output_____# Imports and Settings_____no_output_____ <code> import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from random import random
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import MinMaxScaler
from sklearn.datasets.samples_generator import make_blobs

%matplotlib inline_____no_output_____ </code> # Introduction_____no_output_____A few months after Rosenblatt published the Perceptron convergence theorem, the Stanford University engineers Bernard Widrow and Marcian Hoff published a paper describing a neural network very similar to the Perceptron, the __Adaline__ (_ADAptive LINear Element_). However, instead of using the _step_ function as its activation function, the __Adaline uses a linear activation function and has a new supervised learning rule__, known as the __Widrow-Hoff rule__ (or __delta rule__, or __LMS rule__).

In fact, the Perceptron and the Adaline share many characteristics, and __it is common to see people confusing the Perceptron with the Adaline__. Among the main similarities, we can highlight:

- Both have __only one neuron with N inputs and a single output. There are no hidden layers__.
- Both are __binary linear classifiers__ by definition, but we can adapt them to perform __linear regression__, just as we saw in the Perceptron notebook. __In fact, the code to train an Adaline for regression is the same as for a Perceptron__.
- Both use an **_online_ learning method**. That is, the weights are updated sample by sample.
- Both have a **_step_ function for classification**. However, unlike in the Perceptron, __in the Adaline it is not used to update the weights__. We will see why next.

The main difference between the Perceptron and the Adaline, however, is that the Perceptron uses the class labels to update the weights, while __the Adaline uses the output of the (linear) activation function as a continuous prediction value__. That is, instead of the output being discrete as in the Perceptron (0 or 1), __in the Adaline the output can be any continuous value__.
This difference becomes clearer when we look at the following figure:

<img src="images/comparacao_perceptron_adaline.png">

[Source](https://www.quora.com/What-is-the-difference-between-a-Perceptron-Adaline-and-neural-network-model)

Note, as mentioned, that both have the _step_ function. In the Perceptron, it is used as the activation function. In the Adaline, on the other hand, the activation function is linear and the _step_ function is used to produce the prediction.

Because it computes the output as a continuous value, __many consider the Adaline more powerful__, since the difference between the desired output and the predicted value ($y_i - \widehat{y}_i$) now tells us "how right or wrong we are". __In practice, this makes the Adaline try to find the "best solution" to the problem, instead of just an "adequate solution"__. Taking the figure below as an example, the Perceptron may find any of several lines that separate the classes, while the Adaline tries to find the best line separating the classes.

<img src="images/hiperplanos_perceptron_adaline.png" width='700'>

[Source](http://www.barbon.com.br/wp-content/uploads/2013/08/RNA_Aula4.pdf)

OK, but how does this change the learning? That is what we will see next._____no_output_____## Adaline Learning Rule_____no_output_____The Adaline weight update is given by the same formula as the Perceptron:

$$w_i = w_i + \lambda(y_i - \widehat{y}_i)x_i$$

where $\lambda$ is the __learning rate__. But have you ever wondered where this formula comes from?

First of all, the weight update method is based on the __Delta Rule__. With $\overrightarrow{w} = \{w_1, w_2, ..., w_D\}$, the weight update is given by:

$$\overrightarrow{w} = \overrightarrow{w} - \Delta{\overrightarrow{w}}$$

where:

$$\Delta{\overrightarrow{w}} = \lambda\nabla E(\overrightarrow{w})$$

Here, $\nabla E(\overrightarrow{w})$ is the gradient of a function that depends on $\overrightarrow{w}$ and that we want to minimize. In the case of the Adaline, __the cost function is given by the sum of squared errors__:

$$J(w) = \frac{1}{2}\sum_{i}^N (y_i - \widehat{y}_i)^2$$

where $N$ is the number of samples in the data and the remaining variables are the same ones seen before. Note that the cost function is almost a _Mean Squared Error (MSE)_, except that instead of dividing by $N$ we divide the sum by 2. The reason for this will become clear further on in the derivation.

We want to find, then, the vector $\overrightarrow{w}$ that minimizes the function $J$. Thus, we have:

$$\frac{\partial J}{\partial w_i} = \frac{\partial}{\partial w_i}\frac{1}{2}\sum_i^N (y_i - \widehat{y}_i)^2$$

Since the derivative of a sum is the sum of the derivatives:

$$= \frac{1}{2}\sum_i^N \frac{\partial}{\partial w_i}(y_i - \widehat{y}_i)^2$$

Applying the chain rule:

$$= \sum_i^N (y_i - \widehat{y}_i)\frac{\partial}{\partial w_i}(y_i - \widehat{y}_i)$$

Note that when we differentiate $(y_i - \widehat{y}_i)^2$, the exponent 2, when it comes out of the sum, is multiplied by $\frac{1}{2}$, turning it into 1. This is what mathematicians call "mathematical convenience".
Since $\widehat{y}_i = x_iw_i + b$ is a function that depends on $w$, and its derivative with respect to $w_i$ is just $x_i$, we have:

$$\frac{\partial J}{\partial w_i} = \sum_i^N (y_i - \widehat{y}_i)(-x_i)$$

$$\frac{\partial J}{\partial w_i} = -\sum_i^N (y_i - \widehat{y}_i)x_i$$

$$\frac{\partial J}{\partial \overrightarrow{w}} = -(\overrightarrow{y} - \overrightarrow{\widehat{y}})\overrightarrow{x}$$

Analogously, we can compute the derivative of $J$ with respect to $b_i$:

$$\frac{\partial J}{\partial b_i} = -\sum_i^N (y_i - \widehat{y}_i)*1.0$$

since the derivative of $\widehat{y}_i$ with respect to $b_i$ ($\frac{\partial \widehat{y}_i}{\partial b_i}$) is equal to 1.0. Therefore, the bias update is given by:

$$b_i = b_i + \lambda(y_i - \widehat{y}_i)$$_____no_output_____# Regression_____no_output_____ <code> df = pd.read_csv('data/notas.csv')
print(df.shape)
df.head(10)_____no_output_____x = df[['prova1', 'prova2', 'prova3']].values
y = df['final'].values.reshape(-1, 1)
print(x.shape, y.shape)_____no_output_____minmax = MinMaxScaler(feature_range=(-1,1))
x = minmax.fit_transform(x.astype(np.float64))_____no_output_____D = x.shape[1]
w = [2*random() - 1 for i in range(D)]
b = 2*random() - 1

learning_rate = 1e-2

for step in range(2001):
    cost = 0
    for x_n, y_n in zip(x, y):
        y_pred = sum([x_i*w_i for x_i, w_i in zip(x_n, w)]) + b
        error = y_n - y_pred
        w = [w_i + learning_rate*error*x_i for x_i, w_i in zip(x_n, w)]
        b = b + learning_rate*error
        cost += error**2
    if step%200 == 0:
        print('step {0}: {1}'.format(step, cost))

print('w: ', w)
print('b: ', b)_____no_output_____ </code> # Classification_____no_output_____## AND/OR Gate_____no_output_____ <code> x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
# y = np.array([[0, 1, 1, 1]]).T # OR gate
y = np.array([0, 0, 0, 1]).T # AND gate
print(x.shape, y.shape)_____no_output_____ </code> ### Python_____no_output_____ <code> D = x.shape[1]
w = [2*random() - 1 for i in range(D)]
b = 2*random() - 1

learning_rate = 1.0 # <- try to estimate the learning_rate

for step in range(101):
    cost = 0
    for x_n, y_n in zip(x, y):
        # which line should we remove to turn the Perceptron into an Adaline?
y_pred = np.dot(x_n, w) + b y_pred = np.where(y_pred > 0, 1, 0) error = y_n - y_pred w = w + learning_rate*np.dot(error, x_n) b = b + learning_rate*error cost += error**2 if step%10 == 0: print('step {0}: {1}'.format(step, cost)) print('w: ', w) print('b: ', b) print('y_pred: {0}'.format(np.dot(x, w)+b))_____no_output_____ </code> ## Exercício de Classificação_____no_output_____ <code> x, y = make_blobs(n_samples=100, n_features=2, centers=2, random_state=1234) print(x.shape, y.shape) plt.scatter(x[:,0], x[:,1], c=y.ravel(), cmap='bwr')_____no_output_____def plot_linear_classifier(x, y, w, b): x1_min, x1_max = x[:,0].min(), x[:,0].max() x2_min, x2_max = x[:,1].min(), x[:,1].max() x1, x2 = np.meshgrid(np.linspace(x1_min-1, x1_max+1,100), np.linspace(x2_min-1, x2_max+1, 100)) x_mesh = np.array([x1.ravel(), x2.ravel()]).T plt.scatter(x[:,0], x[:,1], c=y.ravel(), cmap='bwr') y_mesh = np.dot(x_mesh, np.array(w).reshape(1, -1).T) + b y_mesh = np.where(y_mesh < 0.5, 0, 1) plt.contourf(x1, x2, y_mesh.reshape(x1.shape), cmap='bwr', alpha=0.5) plt.xlim(x1_min-1, x1_max+1) plt.ylim(x2_min-1, x2_max+1)_____no_output_____ </code> ### Python_____no_output_____ <code> D = x.shape[1] w = [2*random() - 1 for i in range(D)] b = 2*random() - 1 learning_rate = 1.0 # <- tente estimar a learning_rate for step in range(1): # <- tente estimar a #epochs cost = 0 for x_n, y_n in zip(x, y): y_pred = sum([x_i*w_i for x_i, w_i in zip(x_n, w)]) + b error = y_n - y_pred w = [w_i + learning_rate*error*x_i for x_i, w_i in zip(x_n, w)] b = b + learning_rate*error cost += error**2 if step%100 == 0: print('step {0}: {1}'.format(step, cost)) print('w: ', w) print('b: ', b) plot_linear_classifier(x, y, w, b)_____no_output_____ </code> ### Numpy_____no_output_____ <code> D = x.shape[1] w = 2*np.random.random(size=D)-1 b = 2*np.random.random()-1 learning_rate = 1.0 # <- use a mesma learning rate do python for step in range(1): # <- use a mesma #epochs do python cost = 0 for x_n, y_n in zip(x, y): y_pred = np.dot(x_n, w) + b error = y_n - y_pred w = w + learning_rate*np.dot(error, x_n) b = b + learning_rate*error cost += error**2 if step%100 == 0: print('step {0}: {1}'.format(step, cost)) print('w: ', w) print('b: ', b) plot_linear_classifier(x, y, w, b)_____no_output_____ </code> # Referências_____no_output_____- [http://sisne.org/Disciplinas/PosGrad/PsicoConex/aula6.pdf](http://sisne.org/Disciplinas/PosGrad/PsicoConex/aula6.pdf) - [What is the difference between a Perceptron, Adaline, and neural network model?](https://www.quora.com/What-is-the-difference-between-a-Perceptron-Adaline-and-neural-network-model) - [RNA – Adaline e Regra do Delta](http://www.barbon.com.br/wp-content/uploads/2013/08/RNA_Aula4.pdf)_____no_output_____
{ "repository": "mariaeduardagimenes/Manual-Pratico-Deep-Learning", "path": "Adaline.ipynb", "matched_keywords": [ "RNA" ], "stars": 53, "size": 18143, "hexsha": "480d461f27bc6382ce43b568e57f6d2bae6e46d6", "max_line_length": 690, "avg_line_length": 34.4923954373, "alphanum_fraction": 0.5681530067 }
# Notebook from bollwyvl/nbpresent Path: notebooks/proposal.ipynb # nbpresent nbpresent is the evolution of the work by the Jupyter community to make notebooks into authorable, presentable, designed assets for communicating._____no_output_____> 1. The problem that this enhancement addresses. If possible include code or anecdotes to describe this problem to readers._____no_output_____## Problem Creating conference-quality presentations with the Notebook requires a good understanding of the limitations of the notebook, nbconvert, reveal.js, RISE and the publishing platform (nbconvert and hosting, or nbviewer). The user is forced into a particular mindset about how slides are structured and authored, based on these limitations, and at the end is still left with a potentially fragile artifact that doesn't reflect the amount of effort that goes into its creation._____no_output_____### User Story: Jill Jill is presenting in support of her journal paper at a conference. While performing her work in Jupyter notebooks, she has prepared some beautiful visualizations and meaningful code snippets, as well as a number of content pieces which are included in the draft and final versions of her journal paper. She has seen some example slide decks on nbviewer that use reveal.js, and they look pretty neat, so she decides to turn on the _Slides_ cell metadata and starts a brand new document, copy-and-pasting the content from her notebooks and making some screenshots. Going back and forth between some command line scripts, her local web server and her notebook, she finally has something she can present. The resulting presentation looks pretty ho-hum, and has some formatting issues, but gets the point across. Right before the presentation, the organizers tell her they need a PPT or PDF of her slides, as there are A/V issues, and she won't be able to connect her laptop directly or use the internet during her talk. She decides to just use LibreOffice next time. _____no_output_____### User Story: John John maintains a family of presentations, which are frequently updated for customer courses. While the content is light, the format represents his corporate and personal brand, and is a selling tool for his organization. In addition to wanting to create printed take-home materials, he has slides that contain interactive features. John has experimented with [showoff](https://github.com/puppetlabs/showoff) and [beamer](https://en.wikipedia.org/wiki/Beamer_(LaTeX)), but keeps ending up with LibreOffice. He evaluates using the notebook for his presentations, but finds that it lacks a number of key features: reuse of slides, repeatable PDF output, slide numbering, and branding for his company and others. He tries using showoff, but still finds it requires a lot of knowledge of CSS and HTML, and he even has to learn some Ruby. He decides to just use LibreOffice. _____no_output_____> A brief (1-2 sentences) overview of the enhancement you are proposing. If possible include hypothetical code sample to describe how the solution would work to readers._____no_output_____## Proposed Enhancement We propose unifying a number of features in different parts of the ecosystem into a single installable package which provides authoring, conversion, and management of presentation-related features.
_____no_output_____> A detailed explanation covering relevant algorithms, data structures, an API spec, and any other relevant technical information_____no_output_____## Logical Architecture_____no_output_____> ## _Using `nbpresent`, a user can seamlessly author, present and publish notebooks as fixed-viewport, regioned, themed, layered, composable, hierarchically-stated experiences.______no_output_____## ...user The key user is the author/presenter of technical findings and opinions involving code, prose and data. While we must assume a fair degree of technical competence, they should not need to learn many skills outside their area of interest._____no_output_____## ...seamlessly After deciding to author slides and getting `nbpresent` installed on their server, a user should not have to leave their area of comfort, whether that's the JS Notebook environment, a `hydrogen` session or the command line, to be able to see their presentation. _____no_output_____### ...author, present The authoring and live presenting experience should dominate nbpresent design choices. A significant shortcoming of reveal, showoff, and beamer is the required knowledge of some form of esoteric language(s) to get your presentation to Look Good. LibreOffice has mostly succeeded here with its drag-and-drop UI, but makes it more difficult to achieve a designed, consistent layout, as you are given free rein to make pixel-level corrections, sapping productivity. Somewhere in between is a system that achieves separation of content, composition, layout and style, which allows the user (and eventually a team) to concentrate on the appropriate task at hand, evolving an outline into a delivered, polished, and rehearsed presentation. For rapid authoring, an author needs to be able to drop in and out of the logical model of the traditional notebook view, the tooled slide authoring view, and a full-on presentation view, suitable for delivering presentations which include rich widgets and require execution._____no_output_____### ...publish At the end of the process, a user will need to be able to hit "publish", initially to some simple choices like PDF, a standalone HTML file, or a zip of their files ready to be hosted on a plain-old-host (GitHub/Google Pages). Jupyter Notebook Viewer represents a specific publishing target, and will have to provide out-of-the-box compatibility._____no_output_____### ...notebooks At a granular level, cell inputs and outputs, and to an extent, widgets, will be the content that makes up presentations, but there is seldom a one-to-one mapping of cell to slide, cell to part-of-slide, or part-of-cell to part-of-slide. Furthermore, presentations frequently represent the unification of a number of sources, and need to be remixable, suggesting there is not a one-to-one mapping between notebook and presentation. We propose using per-cell metadata to map each cell's input (and outputs, for code cells) to a specific region of a slide. Since cross-notebook cell transclusion must be possible, we will need to be able to assign stable identifiers to cells, and by extension their inputs and outputs._____no_output_____### ...fixed-viewport One key aspect of presentations (as opposed to the incubating dashboards) is working within the boundaries of the presented screen... whichever screen: desktop, mobile, hi-res presentation. Pixel-based approaches to this are significantly limiting.
To work within this, a constraint-based system, built on a cassowary-derived engine such as [kiwi.js](https://github.com/nucleic/kiwi/tree/feature-js), is proposed._____no_output_____### ...regioned Within the confines of a particular slide, instead of a pure stream of slides, subslides and fragments as in the current slideshow mechanism, we need to be able to add cells (inputs and outputs) to different parts of a slide layout: even to multiple regions of multiple slides._____no_output_____## ...themed Today, it takes heroic effort to view a notebook-as-slides in anything other than the stock default. Unlike the notebook, where consistent UI is a feature, presentations need to represent the brand of the author and/or their organization. Intertwined with regioned layouts are the CSS, fonts and images (and potentially JS, such as typographic effects) that enable a specific branding of a presentation. These theme assets need to be referenced/embedded along with presentations, with effective fallback to a default brand (such as on nbviewer)._____no_output_____## ...layered Backgrounds, headers and footers are important to presentations, but without a full z-index stack, managing these becomes very static. Layers, à la Inkscape, provide a useful solution to this design problem._____no_output_____## ...composable A huge advantage of showoff and beamer is the (technically adept) user's ability to reuse content. A presentation notebook needs to be able to bring in one or more cells (inputs and outputs) from other slides. These guest cells would *not* be editable within the host notebook, and refreshing of guest cells should be as painless as possible (automatic on save, if possible, on keystroke, eventually)._____no_output_____## ...hierarchically-stated Slides should participate in both a layout hierarchy and a state hierarchy. This is a generalization of the reveal sub-slide and fragment constructs. Additional dimensions of state include scroll position._____no_output_____## Repo Architecture - `setup.py` - `nbpresent` - `static` - `nbpresent`_____no_output_____### Web-Based Presentation The most challenging part is the presentation system itself. The current choice, [`reveal.js`](http://lab.hakim.se/reveal-js), has gotten us very far, but is turning out to be challenging to support: its CSS, build chain, and opinions make it a poor match for the kinds of complex HTML that come out of notebooks, as well as the design goals of many of our users. The initial implementation would continue to use reveal, but would not try to directly translate cells to `<section>` content._____no_output_____#### Web-based Presentation: Future This feature would be packaged as a number of `npm` modules, using current Jupyter front-end development practices: - es6/typescript - browserify PhosphorJS is a strong contender for the underlying layout engine, specifically the proposed [constraint-based layout](https://github.com/phosphorjs/phosphor/issues/35), and will be generally available in the notebook in a roadmapped version.
In addition to the core rendering functions, specific components would be packaged as plugin modules that carry their own JS and CSS: - layouts - themes Several of these plugins would be packaged with the `pip`/`conda`-installable Jupyter package, but these could be added/upgraded by the user._____no_output_____### Notebook Authoring The authoring environment_____no_output_____> A list of pros that this implementation has over other potential implementations._____no_output_____## Pros_____no_output_____> A list of cons that this implementation has._____no_output_____## Cons_____no_output_____> A list of individuals who would be interested in contributing to this enhancement should it be accepted._____no_output_____## Interested Contributors - [@bollwyvl](https://github.com/bollwyvl) _Continuum Analytics_ - [@damianavila](https://github.com/damianavila) _Continuum Analytics______no_output_____## Data Model <div class="mermaid"> graph LR nb[notebook] --> md[metadata] md --> nbp[present] nbp --> slide nbp --> layer slide --> region nb -- * --> cells cells --> c[cell] cp[cell part] c --> input c -- 0,1 --> outputs c -- 0,1 --> widgets input --- cp outputs --- cp widgets --- cp c --> cmd[cell metadata] cmd --> cnbp[cell present] cnbp -- 0,* --> cpm cp --> cpm layer --> cpm region --> cpm effect --> cpm </div>_____no_output_____## Mockups_____no_output_____## UI won't change much vs RISE ![](./design/screens.svg#layers=bg|cells.content|cells.bg)_____no_output_____## _New Slide_ should default to something useful ![](./design/screens.svg#layers=bg|cells.content|cells.bg|present.none)_____no_output_____ <code> # these customize the environment from IPython.display import Javascript, display foo = [display(Javascript(url=url)) for url in [ "https://bollwyvl.github.io/nb-inkscapelayers/main.js", "https://bollwyvl.github.io/nb-mermaid/init.js" ]]_____no_output_____ </code>
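To make the data model above concrete, the snippet below attaches hypothetical `nbpresent` metadata to a single cell using `nbformat`. The key names shown (`id`, `parts`, `slide`, `region`, `layer`) and the output file name are illustrative assumptions only, not a finalized schema.

<code>
# A sketch of per-cell presentation metadata, mirroring the mermaid data model above:
# a cell part is placed into a named region of a slide, on a given layer.
import nbformat

nb = nbformat.v4.new_notebook()
cell = nbformat.v4.new_code_cell("print('hello slides')")

cell.metadata["nbpresent"] = {
    "id": "cell-0001",            # stable identifier, enabling cross-notebook transclusion
    "parts": [
        {
            "part": "outputs",    # which cell part to place: input, outputs or widgets
            "slide": "slide-intro",
            "region": "main",     # named region of the slide layout
            "layer": "content",   # layer within the slide's z-stack
        }
    ],
}

nb.cells.append(cell)
nbformat.write(nb, "proposal_demo.ipynb")
</code>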
{ "repository": "bollwyvl/nbpresent", "path": "notebooks/proposal.ipynb", "matched_keywords": [ "evolution" ], "stars": null, "size": 22035, "hexsha": "480d763fbffa74df0633b03b16147a0d1f14dd38", "max_line_length": 483, "avg_line_length": 33.7442572741, "alphanum_fraction": 0.5907420014 }
# Notebook from PathwayMerger/PathMe-Resources Path: notebooks/case_scenarios/evaluating_similarity_equivalent_pathways.ipynb # Evaluating the degree of overlap between equivalent pathways in KEGG, Reactome, and WikiPathways This notebook outlines the process of evaluating the overlap between the equivalent representations in the three databases. First, we calculate a similarity index using a variation of the Szymkiewicz–Simpson/Overlap coefficient (Equation 1) applied to the molecular nodes shared between the graphs. This variation sums the three coefficients obtained from the individual pairwise comparisons to generate the similarity index (the values reported in Table 1 are this sum divided by three, so that the index ranges from 0 to 1). Second, Venn diagrams illustrate the degree of overlap in a more visual way. The notebook was generated with data parsed on the 26th of January, 2019._____no_output_____$$\mathrm{overlap}(X,Y)=\frac{\left| X \cap Y \right|}{\min\left(\left| X \right|, \left| Y \right|\right)}$$ <center> Equation 1. Szymkiewicz–Simpson/Overlap coefficient </center>_____no_output_____# Summary of the similarity evaluation between equivalent pathways _____no_output_____| Pathway Name | Pathway Similarity Index | Molecular Entities Present in all Three Equivalent Pathways | Comments About the Results | | --- | --- | --- | --- | | Cell Cycle | 0.70 | ANAPC1, ANAPC10, ANAPC11, ANAPC2, ANAPC4, ANAPC5, ANAPC7, ATM, CCNA1, CCNA2, CCNB1, CCNB2, CCND1, CCNE1, CDC16, CDC20, CDC23, CDC25A, CDC27, CDK1, CDK2, CDK4, CDKN1A, CHEK1, EP300, FZR1, MAD1L1, ORC1, ORC2, ORC3, ORC4, ORC5, ORC6, PLK1, RAD21, RB1, SMC1A, SMC1B, SMC3, STAG1, STAG2, TP53 | All three representations converge to a largely shared set of molecular players that includes cyclin-dependent kinases and their regulators (e.g., CDK1, CDK2, CDK4, CDKN1A) and other cell cycle regulators (e.g., ATM, TP53, FZR1) | | Toll-like Receptor Signaling Pathway | 0.62 | CASP8, CD14, CHUK, FADD, IKBKB, IRAK1, IRAK4, IRF7, MAP3K7, MYD88, TLR2, TLR3, TLR4, TLR9, TRAF3, TRAF6 | Members of the toll-like receptor family (e.g., TLR2, TLR3, TLR4, TLR9) and the TRAFs (e.g., TRAF3, TRAF6) utilized by TLRs for signaling are shared amongst all equivalent representations of this pathway | | Target Of Rapamycin (TOR) Signaling | 0.58 | EIF4EBP1, MLST8, MTOR, RHEB, RPS6KB1, RPTOR, TSC1, TSC2 | Major pathway players include MTOR, RPTOR, MLST8 and RHEB | | Hedgehog Signaling Pathway | 0.56 | GRK2, SMO, SUFU | The conserved SMO protein is found in all three representations, as is the pathway regulator SUFU. More pronounced overlap can be seen in KEGG and Reactome | | Apoptosis | 0.43 | APAF1, BAD, BAX, BID, CASP3, CASP7, CASP8, CASP9, CYCS, DFFA, FAS, FASLG, TP53 | The common apoptotic elements (e.g., the caspase family, BAX/BAD) are shared across the three equivalent pathways | | IL17 signaling pathway | 0.42 | - | Reactome is completely embedded in the WikiPathways representation.
However, there is no overlap between Reactome and KEGG | | PI3K-Akt Signaling Pathway | 0.42 | MDM2, PTEN, TP53, TSC2 | PTEN and TP53 are among the common players in this pathway | | Wnt Signaling Pathway | 0.41 | AXIN1, AXIN2, CTBP1, CTNNB1, DVL1, DVL2, GSK3B, LRP6, MAP3K7, MYC, NLK, PRKCA | Well-known interacting molecules of the beta-catenin complex are present (e.g., AXIN1, AXIN2, DVL2, GSK3B) | | MAPK Signaling Pathway | 0.40 | BRAF, JUN, RAF1 | The serine/threonine kinases BRAF and RAF1, as well as the proto-oncogene JUN (p39), are present in all representations | | B Cell Receptor Signaling Pathway | 0.37 | BLNK, BTK, CARD11, CD19, CD22, DAPP1, MALT1, PIK3AP1, PRKCB, PTPN6, SYK, VAV1 | Protein kinases (e.g., BTK, SYK) and adaptor proteins (e.g., BLNK, CD19, PIK3AP1) involved in the pathway can be found in the three equivalent representations | | Notch Signaling Pathway | 0.29 | EP300, SNW1 | Co-activators EP300 and SNW1 involved in Notch signaling are common to all three. A greater degree of overlap exists in the KEGG and WikiPathways representations, including NOTCH and HES proteins | | DNA Replication | 0.28 | - | Both KEGG and WikiPathways have cartoon representations of the pathway and lack mechanistic information | | Prolactin Signaling Pathway | 0.28 | JAK2, PRLR | The prolactin receptor (i.e., PRLR) and the kinase JAK2 are common elements in all equivalent representations of this pathway | | TGF-beta Signaling Pathway | 0.26 | RHOA, SMAD2, SMAD3, SMAD4, SMAD7, SMURF1, SMURF2, TGFBR1, TGFBR2, ZFYVE16 | Protein kinases (i.e., TGFBR1, TGFBR2) and transcription factors (i.e., SMAD2, SMAD3, SMAD4, SMAD7) are among the common players in this pathway | | Thyroxine (Thyroid Hormone) Production | 0.20 | - | KEGG and WikiPathways have a minimal amount of overlap with cause or association relations; Reactome has exclusively reaction-type edges and thus no overlap with KEGG and WikiPathways | | Sphingolipid Metabolism | 0.16 | - | Surprisingly, while KEGG shows some overlap with Reactome and WikiPathways, there is no overlap between the Reactome and WikiPathways representations | | Mismatch repair | 0.08 | - | The WikiPathways RDF representation is empty, and no edges are present in the KGML file (only BEL component membership edges generated) | | Non-homologous end joining | 0 | - | Both KEGG and WikiPathways contain cartoon representations of the pathway and lack mechanistic information | _____no_output_____Table 1. Equivalent pathways ordered by similarity (column 2). Common core of nodes is listed in column 3.
Notes explaining the results are presented in column 4._____no_output_____ <code> import os from collections import Iterable, defaultdict import warnings import pandas as pd import itertools as itt import operator import matplotlib.pyplot as plt from matplotlib_venn import venn3 from pybel import from_pickle, union from pybel_tools import utils from pybel.struct.mutation import remove_biological_processes, remove_filtered_nodes, collapse_to_genes from pybel.constants import REACTION, COMPLEX, COMPOSITE from pybel.struct.filters.node_predicate_builders import function_inclusion_filter_builder from pathme.constants import REACTOME_BEL, KEGG_BEL, WIKIPATHWAYS_BEL from pathme.utils import get_files_in_folder from pathme_viewer.graph_utils import add_annotation_key, add_annotation_value from bio2bel_kegg import Manager as KeggManager from bio2bel_reactome import Manager as ReactomeManager from bio2bel_wikipathways import Manager as WikipathwaysManager_____no_output_____%matplotlib inline_____no_output_____# Remove Warnings Venn Diagram warnings.filterwarnings('ignore')_____no_output_____# Initiate WikiPathways Manager wikipathways_manager = WikipathwaysManager() # Initiate Reactome Manager reactome_manager = ReactomeManager() # Initiate KEGG Manager kegg_manager = KeggManager()_____no_output_____# Equivalent pathway IDs (ordered) reactome_ids = ['R-HSA-5358508','R-HSA-209968','R-HSA-195721','R-HSA-5683057','R-HSA-71336','R-HSA-1257604','R-HSA-168898','R-HSA-983705','R-HSA-157118','R-HSA-109581','R-HSA-428157','R-HSA-5358351','R-HSA-71403','R-HSA-69306','R-HSA-5693571','R-HSA-1640170','R-HSA-9006936','R-HSA-165159','R-HSA-448424','R-HSA-74182','R-HSA-1170546'] kegg_ids = ['hsa03430','hsa04918','hsa04310','hsa04010','hsa00030','hsa04151','hsa04620','hsa04662','hsa04330','hsa04210','hsa00600','hsa04340','hsa00020','hsa03030','hsa03450','hsa04110','hsa04350','hsa04150','hsa04657','hsa00072','hsa04917'] wikipathways_ids = ['WP531','WP1981','WP363','WP382','WP134','WP4172','WP75','WP23','WP61','WP254','WP1422','WP47','WP78','WP466','WP438','WP179','WP366','WP1471','WP2112','WP311','WP2037'] # Reactome pathways not contained in Reactome RDF file REACTOME_BLACK_LIST = ['R-HSA-2025928','R-HSA-9604323', 'R-HSA-9013700','R-HSA-9017802','R-HSA-168927', 'R-HSA-9014325', 'R-HSA-9013508', 'R-HSA-9013973', 'R-HSA-9013957', 'R-HSA-9013695','R-HSA-9627069']_____no_output_____ </code> Methods used in this notebook_____no_output_____ <code> def flatten(l): for el in l: if isinstance(el, Iterable) and not isinstance(el, (str, bytes)): yield from flatten(el) else: yield el def get_all_pathway_children_by_id(manager, reactome_id): pathway = manager.get_pathway_by_id(reactome_id) if not pathway.children: return pathway.reactome_id children = [] for child in pathway.children: children.append(get_all_pathway_children_by_id(manager, child.reactome_id)) return children def add_pathway_annotations(graph, resource_name, pathway_id): graph.annotation_pattern['Database'] = '.*' add_annotation_key(graph) add_annotation_value(graph, 'Database', resource_name) return graph def get_bel_graph(resource_name, pathway_id): if resource_name == 'reactome': pickle_path = os.path.join(REACTOME_BEL, pathway_id + '.pickle') elif resource_name == 'kegg': pickle_path = os.path.join(KEGG_BEL, pathway_id + '_unflatten.pickle') elif resource_name == 'wikipathways': pickle_path = os.path.join(WIKIPATHWAYS_BEL, pathway_id + '.pickle') # Get BEL graph from pickle bel_graph = from_pickle(pickle_path) return add_pathway_annotations(bel_graph, 
resource_name, pathway_id) def calculate_jaccard(set_1, set_2): """Calculate jaccard similarity between two sets. :param set set_1: set 1 :param set set_2: set 2 :returns similarity :rtype: float """ intersection = len(set_1.intersection(set_2)) smaller_set = min(len(set_1), len(set_2)) return intersection/smaller_set def calculate_pathway_similarity(set_1, set_2, set_3): """Calculate pathway similarity between three sets. :param set set_1: set 1 :param set set_2: set 2 :param set set_3: set 3 :returns similarity :rtype: float """ scores = [ calculate_jaccard(set_one, set_two) for set_one, set_two in itt.combinations([set_1, set_2, set_3], 2) if set_one and set_two # Ensure non-empty sets ] return sum(scores) def prepare_venn_diagram(pathway_name,kegg_set, reactome_set, wikipathways_set): # Nodes present in KEGG but not in Reactome, nor in WikiPathways unique_kegg = len(kegg_set.difference(reactome_set).difference(wikipathways_set)) # Nodes present in Reactome but not in KEGG, nor in WikiPathways unique_reactome = len(reactome_set.difference(kegg_set).difference(wikipathways_set)) # Nodes present in WikiPathways but not in KEGG, nor in Reactome unique_wikipathways = len(wikipathways_set.difference(kegg_set).difference(reactome_set)) # Nodes common between KEGG and Reactome but not in WikiPathways common_kegg_reactome = len(kegg_set.intersection(reactome_set).difference(wikipathways_set)) # Nodes common between KEGG and WikiPathways but not in Reactome common_kegg_wikipathways = len(kegg_set.intersection(wikipathways_set).difference(reactome_set)) # Nodes common between Reactome and WikiPathways but not in KEGG common_reactome_wikipathways = len(reactome_set.intersection(wikipathways_set).difference(kegg_set)) # Nodes common between KEGG and Reactome and WikiPathways common_kegg_reactome_wikipathways = len(kegg_set.intersection(reactome_set).intersection(wikipathways_set)) return ( unique_kegg, unique_reactome, common_kegg_reactome, unique_wikipathways, common_kegg_wikipathways, common_reactome_wikipathways, common_kegg_reactome_wikipathways ) def plot_venn_diagram(pathway_name, data): plt.figure(figsize=(10, 10)) diagram = venn3( subsets = data, set_labels = ("KEGG", "Reactome", "WikiPathways") ) for text in diagram.set_labels: if text: text.set_fontsize(28) for text in diagram.subset_labels: if text: text.set_fontsize(16) plt.title(pathway_name, fontsize=30) if diagram.get_patch_by_id('001'): diagram.get_patch_by_id('001').set_color('#5bc0de') #WikiPathways diagram.get_patch_by_id('001').set_alpha(1.0) if diagram.get_patch_by_id('010'): diagram.get_patch_by_id('010').set_color('#df3f18') # Reactome diagram.get_patch_by_id('010').set_alpha(1.0) if diagram.get_patch_by_id('011'): diagram.get_patch_by_id('011').set_color('#9d807b') # Wiki - Reactome diagram.get_patch_by_id('011').set_alpha(1.0) if diagram.get_patch_by_id('100'): diagram.get_patch_by_id('100').set_color('#5cb85c') # KEGG diagram.get_patch_by_id('100').set_alpha(1.0) if diagram.get_patch_by_id('110'): diagram.get_patch_by_id('110').set_color('#f3ac1f') # KEGG U Reactome diagram.get_patch_by_id('110').set_alpha(0.8) if diagram.get_patch_by_id('111'): diagram.get_patch_by_id('111').set_color('#ffffff') # Middle diagram.get_patch_by_id('111').set_alpha(1.0) if diagram.get_patch_by_id('101'): diagram.get_patch_by_id('101').set_color('#a2ded0') # KEGG - Wiki diagram.get_patch_by_id('101').set_alpha(1.0) plt.show() _____no_output_____ </code> Create a dictionary for Reactome pathways with children_____no_output_____ <code> 
parent_to_child = dict() for reactome_id in reactome_ids: all_children = get_all_pathway_children_by_id(reactome_manager, reactome_id) if isinstance(all_children, str): continue flattened_children = flatten(all_children) parent_to_child[reactome_id] = [pathway for pathway in flattened_children]_____no_output_____ </code> Get the merged network for every equivalent pathway_____no_output_____ <code> merged_pathways = defaultdict(list) for counter, reactome_id in enumerate(reactome_ids): reactome_graphs = [] kegg_bel_graph = get_bel_graph('kegg', kegg_ids[counter]) wikipathways_bel_graph = get_bel_graph('wikipathways', wikipathways_ids[counter]) pathway_name = wikipathways_manager.get_pathway_by_id(wikipathways_ids[counter]) # Check if reactome ID is in black list if reactome_id in REACTOME_BLACK_LIST: continue # If Reactome parent pathway has children, get merged graph of children if reactome_id in parent_to_child: pathway_children = parent_to_child[reactome_id] for child in pathway_children: if child not in REACTOME_BLACK_LIST: reactome_bel_graph = get_bel_graph('reactome', child) reactome_graphs.append(reactome_bel_graph) # Get Reactome parent pathway graph else: reactome_graphs.append(get_bel_graph('reactome', reactome_id)) # Get union of all bel graphs for each equivalent pathway merged_graph = union([kegg_bel_graph, wikipathways_bel_graph, union(reactome_graphs)]) # Collapse all protein, RNA, miRNA nodes to gene nodes collapse_to_genes(merged_graph) merged_pathways[str(pathway_name)] = merged_graph _____no_output_____ </code> ## Visualizing the overlaps across equivalent pathways_____no_output_____ <code> # Dictionary of pathway names to venn diagram circle sizes to plot venn_diagram_dict = {} pathway_similarity = {} for pathway_name, merged_graph in merged_pathways.items(): node_set_kegg = set() node_set_reactome = set() node_set_wikipathways = set() # For each resource, add nodes with edges to resource node set if node is gene, protein, RNA, miRNA or chemical for u, v, k, data in merged_graph.edges(keys=True, data=True): info = merged_graph[u][v][k] if info['annotations']['Database'].get('kegg'): if u.function not in {'BiologicalProcess', 'Complex', 'Composite', 'Reaction'}: node_set_kegg.add(u) if v.function not in {'BiologicalProcess', 'Complex', 'Composite', 'Reaction'}: node_set_kegg.add(v) if info['annotations']['Database'].get('reactome'): if u.function not in {'BiologicalProcess', 'Complex', 'Composite', 'Reaction'}: node_set_reactome.add(u) if v.function not in {'BiologicalProcess', 'Complex', 'Composite', 'Reaction'}: node_set_reactome.add(v) if info['annotations']['Database'].get('wikipathways'): if u.function not in {'BiologicalProcess', 'Complex', 'Composite', 'Reaction'}: node_set_wikipathways.add(u) if v.function not in {'BiologicalProcess', 'Complex', 'Composite', 'Reaction'}: node_set_wikipathways.add(v) venn_diagram_dict[pathway_name] = prepare_venn_diagram( pathway_name, node_set_kegg, node_set_reactome, node_set_wikipathways ) pathway_similarity[pathway_name] = calculate_pathway_similarity( node_set_kegg, node_set_reactome, node_set_wikipathways )_____no_output_____ </code> ### Similarity results_____no_output_____ <code> for pathway, score in sorted(pathway_similarity.items(), key=operator.itemgetter(1),reverse=True): print("'{}' has a pathway similarity index of: {}".format(pathway, round(score, 2)))'Cell Cycle' has a pathway similarity index of: 2.1 'Toll-like Receptor Signaling Pathway' has a pathway similarity index of: 1.85 'Target Of Rapamycin (TOR) 
Signaling' has a pathway similarity index of: 1.75 'Hedgehog Signaling Pathway' has a pathway similarity index of: 1.67 'Apoptosis' has a pathway similarity index of: 1.28 'IL17 signaling pathway' has a pathway similarity index of: 1.27 'PI3K-Akt Signaling Pathway' has a pathway similarity index of: 1.25 'Wnt Signaling Pathway' has a pathway similarity index of: 1.24 'MAPK Signaling Pathway' has a pathway similarity index of: 1.19 'B Cell Receptor Signaling Pathway' has a pathway similarity index of: 1.11 'Pentose Phosphate Pathway' has a pathway similarity index of: 1.0 'TCA Cycle' has a pathway similarity index of: 1.0 'Synthesis and Degradation of Ketone Bodies' has a pathway similarity index of: 1.0 'Notch Signaling Pathway' has a pathway similarity index of: 0.87 'DNA Replication' has a pathway similarity index of: 0.83 'Prolactin Signaling Pathway' has a pathway similarity index of: 0.83 'TGF-beta Signaling Pathway' has a pathway similarity index of: 0.78 'Thyroxine (Thyroid Hormone) Production' has a pathway similarity index of: 0.6 'Sphingolipid Metabolism' has a pathway similarity index of: 0.49 'Mismatch repair' has a pathway similarity index of: 0.23 'Non-homologous end joining' has a pathway similarity index of: 0 </code> ### Venn diagrams of nodes in each equivalent pathway present in KEGG PATHWAYS, Reactome and WikiPathways_____no_output_____ <code> plot_venn_diagram('Cell Cycle', venn_diagram_dict['Cell Cycle'])_____no_output_____plot_venn_diagram( 'Toll-like Receptor Signaling Pathway', venn_diagram_dict['Toll-like Receptor Signaling Pathway'] )_____no_output_____plot_venn_diagram( 'Target Of Rapamycin (TOR) Signaling', venn_diagram_dict['Target Of Rapamycin (TOR) Signaling'] )_____no_output_____plot_venn_diagram('Hedgehog Signaling Pathway', venn_diagram_dict['Hedgehog Signaling Pathway'])_____no_output_____plot_venn_diagram('Apoptosis', venn_diagram_dict['Apoptosis'])_____no_output_____plot_venn_diagram('IL17 signaling pathway', venn_diagram_dict['IL17 signaling pathway'])_____no_output_____plot_venn_diagram('Wnt Signaling Pathway', venn_diagram_dict['Wnt Signaling Pathway'])_____no_output_____plot_venn_diagram('PI3K-Akt Signaling Pathway', venn_diagram_dict['PI3K-Akt Signaling Pathway'])_____no_output_____plot_venn_diagram('MAPK Signaling Pathway', venn_diagram_dict['MAPK Signaling Pathway'])_____no_output_____plot_venn_diagram('B Cell Receptor Signaling Pathway', venn_diagram_dict['B Cell Receptor Signaling Pathway'])_____no_output_____plot_venn_diagram('Notch Signaling Pathway', venn_diagram_dict['Notch Signaling Pathway'])_____no_output_____plot_venn_diagram('DNA Replication', venn_diagram_dict['DNA Replication'])_____no_output_____plot_venn_diagram('Prolactin Signaling Pathway', venn_diagram_dict['Prolactin Signaling Pathway'])_____no_output_____plot_venn_diagram('TGF-beta Signaling Pathway', venn_diagram_dict['TGF-beta Signaling Pathway'])_____no_output_____plot_venn_diagram( 'Thyroxine (Thyroid Hormone) Production', venn_diagram_dict['Thyroxine (Thyroid Hormone) Production'] )_____no_output_____plot_venn_diagram('Pentose Phosphate Pathway', venn_diagram_dict['Pentose Phosphate Pathway'])_____no_output_____plot_venn_diagram( 'Synthesis and Degradation of Ketone Bodies', venn_diagram_dict['Synthesis and Degradation of Ketone Bodies'] )_____no_output_____plot_venn_diagram('Mismatch repair', venn_diagram_dict['Mismatch repair'])_____no_output_____plot_venn_diagram('TCA Cycle', venn_diagram_dict['TCA Cycle'])_____no_output_____plot_venn_diagram('Sphingolipid 
Metabolism', venn_diagram_dict['Sphingolipid Metabolism'])_____no_output_____plot_venn_diagram('Non-homologous end joining', venn_diagram_dict['Non-homologous end joining'])_____no_output_____ </code>
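The scoring scheme can be sanity-checked on toy data: the snippet below applies Equation 1 to three invented node sets and reports both the summed index (the quantity printed above) and the same value divided by three, the [0, 1] scale used in Table 1. Note that the `calculate_jaccard` helper defined earlier implements this overlap coefficient (intersection over the smaller set) rather than the classical Jaccard index; the toy sets here are for illustration only.

<code>
import itertools as itt

def overlap_coefficient(set_1, set_2):
    """Szymkiewicz-Simpson/overlap coefficient from Equation 1."""
    return len(set_1 & set_2) / min(len(set_1), len(set_2))

# Invented toy node sets standing in for the KEGG, Reactome and WikiPathways
# versions of a single pathway.
kegg_nodes = {'TP53', 'CDK1', 'CDK2', 'ATM', 'CCNB1'}
reactome_nodes = {'TP53', 'CDK1', 'CDK2', 'PLK1'}
wikipathways_nodes = {'TP53', 'CDK1', 'RB1', 'CCNB1', 'PLK1'}

pairwise = [
    overlap_coefficient(set_one, set_two)
    for set_one, set_two in itt.combinations(
        [kegg_nodes, reactome_nodes, wikipathways_nodes], 2
    )
]

summed_index = sum(pairwise)      # what calculate_pathway_similarity returns
scaled_index = summed_index / 3   # the scale used in Table 1
print('pairwise overlaps:', [round(score, 2) for score in pairwise])
print('summed index: {:.2f}, scaled index: {:.2f}'.format(summed_index, scaled_index))
</code>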
{ "repository": "PathwayMerger/PathMe-Resources", "path": "notebooks/case_scenarios/evaluating_similarity_equivalent_pathways.ipynb", "matched_keywords": [ "RNA" ], "stars": 1, "size": 951860, "hexsha": "480fce4c1d230d95cd9e503ae5a1cb885c1a7994", "max_line_length": 54448, "avg_line_length": 981.2989690722, "alphanum_fraction": 0.9526232849 }
# Notebook from czbiohub/BingWu_DarmanisGroup_TracheaDevTmem16a Path: scrublet/scrublet_P4_Oct18_mut_green.ipynb This example shows how to: 1. Load a counts matrix (here, a 10X Chromium filtered gene-barcode matrix aligned to mm10) 2. Run the default Scrublet pipeline 3. Check that doublet predictions make sense_____no_output_____ <code> import sys sys.path_____no_output_____ _____no_output_____%matplotlib inline import scrublet as scr import scipy.io import matplotlib.pyplot as plt import numpy as np import os_____no_output_____plt.rcParams['font.family'] = 'sans-serif' plt.rcParams['font.sans-serif'] = 'Arial' plt.rc('font', size=14) plt.rcParams['pdf.fonttype'] = 42_____no_output_____ </code> #### Load counts matrix and gene list Load the raw counts matrix as a scipy sparse matrix with cells as rows and genes as columns._____no_output_____ <code> input_dir = '/home/ubuntu/velocyto/P4_mm10_1_2_0/86_1B/86_1B/86_1B/outs/filtered_gene_bc_matrices/mm10-1.2.0/' counts_matrix = scipy.io.mmread(input_dir + '/matrix.mtx').T.tocsc() genes = np.array(scr.load_genes(input_dir + 'genes.tsv', delimiter='\t', column=1)) print('Counts matrix shape: {} rows, {} columns'.format(counts_matrix.shape[0], counts_matrix.shape[1])) print('Number of genes in gene list: {}'.format(len(genes)))Counts matrix shape: 1144 rows, 27998 columns Number of genes in gene list: 27998 </code> #### Initialize Scrublet object The relevant parameters are: - *expected_doublet_rate*: the expected fraction of transcriptomes that are doublets, typically 0.05-0.1. Results are not particularly sensitive to this parameter. For this example, the expected doublet rate comes from the Chromium User Guide: https://support.10xgenomics.com/permalink/3vzDu3zQjY0o2AqkkkI4CC - *sim_doublet_ratio*: the number of doublets to simulate, relative to the number of observed transcriptomes. This should be high enough that all doublet states are well-represented by simulated doublets. Setting it too high is computationally expensive. The default value is 2, though values as low as 0.5 give very similar results for the datasets that have been tested. - *n_neighbors*: Number of neighbors used to construct the KNN classifier of observed transcriptomes and simulated doublets. The default value of `round(0.5*sqrt(n_cells))` generally works well. _____no_output_____ <code> scrub = scr.Scrublet(counts_matrix, expected_doublet_rate=0.01)_____no_output_____ </code> #### Run the default pipeline, which includes: 1. Doublet simulation 2. Normalization, gene filtering, rescaling, PCA 3. Doublet score calculation 4. Doublet score threshold detection and doublet calling _____no_output_____ <code> doublet_scores, predicted_doublets = scrub.scrub_doublets(min_counts=2, min_cells=3, min_gene_variability_pctl=85, n_prin_comps=30)Preprocessing... Simulating doublets... Embedding transcriptomes using PCA... Calculating doublet scores... Automatically set threshold at doublet score = 0.10 Detected doublet rate = 0.6% Estimated detectable doublet fraction = 5.9% Overall doublet rate: Expected = 1.0% Estimated = 10.3% Elapsed time: 2.1 seconds </code> #### Plot doublet score histograms for observed transcriptomes and simulated doublets The simulated doublet histogram is typically bimodal. The left mode corresponds to "embedded" doublets generated by two cells with similar gene expression.
The right mode corresponds to "neotypic" doublets, which are generated by cells with distinct gene expression (e.g., different cell types) and are expected to introduce more artifacts in downstream analyses. Scrublet can only detect neotypic doublets. To call doublets vs. singlets, we must set a threshold doublet score, ideally at the minimum between the two modes of the simulated doublet histogram. `scrub_doublets()` attempts to identify this point automatically and has done a good job in this example. However, if automatic threshold detection doesn't work well, you can adjust the threshold with the `call_doublets()` function. For example: ```python scrub.call_doublets(threshold=0.25) ```_____no_output_____ <code> scrub.plot_histogram();/home/ubuntu/.local/lib/python2.7/site-packages/matplotlib/font_manager.py:1331: UserWarning: findfont: Font family [u'sans-serif'] not found. Falling back to DejaVu Sans (prop.get_family(), self.defaultFamily[fontext])) </code> #### Get 2-D embedding to visualize the results_____no_output_____ <code> print('Running UMAP...') scrub.set_embedding('UMAP', scr.get_umap(scrub.manifold_obs_, 10, min_dist=0.3)) # # Uncomment to run tSNE - slow # print('Running tSNE...') # scrub.set_embedding('tSNE', scr.get_tsne(scrub.manifold_obs_, angle=0.9)) # # Uncomment to run force layout - slow # print('Running ForceAtlas2...') # scrub.set_embedding('FA', scr.get_force_layout(scrub.manifold_obs_, n_neighbors=5. n_iter=1000)) print('Done.')Running UMAP... </code> #### Plot doublet predictions on 2-D embedding Predicted doublets should co-localize in distinct states._____no_output_____ <code> scrub.plot_embedding('UMAP', order_points=True); # scrub.plot_embedding('tSNE', order_points=True); # scrub.plot_embedding('FA', order_points=True);_____no_output_____print(doublet_scores)[0.01530222 0.00562475 0.0028144 ... 0.01387407 0.00103199 0.01891892] print(predicted_doublets)[False False False ... False False False] sum(predicted_doublets)_____no_output_____len(predicted_doublets)_____no_output_____cwd = os.getcwd() print (cwd)/home/ubuntu/scrublet/examples doublet_scores.tofile('P4_Oct18_mut_green_doubletScore.csv',sep=',',format='%s')_____no_output_____min(doublet_scores[predicted_doublets])_____no_output_____ </code>
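For downstream filtering it can help to keep the doublet calls next to the cell barcodes. The sketch below assumes that the standard `barcodes.tsv` file sits in the same 10X `filtered_gene_bc_matrices` folder that `matrix.mtx` and `genes.tsv` were read from, and the output file name is an arbitrary choice.

<code>
import pandas as pd

# Assumption: barcodes.tsv accompanies matrix.mtx/genes.tsv in the Cell Ranger output folder
barcodes = pd.read_csv(input_dir + 'barcodes.tsv', sep='\t', header=None)[0].values

calls = pd.DataFrame({
    'barcode': barcodes,
    'doublet_score': doublet_scores,
    'predicted_doublet': predicted_doublets,
})

# Singlet barcodes can then be used to subset the counts matrix before clustering
calls.to_csv('P4_Oct18_mut_green_doublet_calls.csv', index=False)
calls['predicted_doublet'].sum()
</code>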
{ "repository": "czbiohub/BingWu_DarmanisGroup_TracheaDevTmem16a", "path": "scrublet/scrublet_P4_Oct18_mut_green.ipynb", "matched_keywords": [ "gene expression" ], "stars": 1, "size": 96523, "hexsha": "481023be1c35ff13da9bd368e683ba34d7ebe5fe", "max_line_length": 68940, "avg_line_length": 214.4955555556, "alphanum_fraction": 0.9181542223 }
# Notebook from guilhermealbm/TechSpaces Path: graphs/python_graph.ipynb <code> import pandas as pd import networkx as nx import community import operator_____no_output_____df = pd.read_csv('../tags_with_wiki_relationship.csv') df_____no_output_____df_wiki = pd.read_csv('../tags_with_wiki_and_category.csv', lineterminator='\n') df_wiki_____no_output_____node_attr = df_wiki.set_index('TagName').to_dict('index')_____no_output_____Graphtype = nx.Graph() G = nx.from_pandas_edgelist(df, edge_attr='weight', create_using=Graphtype) nx.set_node_attributes(G, node_attr)_____no_output_____G.nodes['python']_____no_output_____G['python']_____no_output_____# Find modularity part_1 = community.best_partition(G, random_state = 27) mod_1 = community.modularity(part_1,G)_____no_output_____part_1['python']_____no_output_____number_of_comm_1 = max(part_1.items(), key=operator.itemgetter(1))[1] + 1 number_of_comm_1_____no_output_____list_of_comm_1 = [] for i in range(number_of_comm_1): list_of_comm_1.append([k for k,v in part_1.items() if v == i]) list_of_comm_1[part_1['python']]_____no_output_____G_1 = G.subgraph(list_of_comm_1[part_1['python']])_____no_output_____G_1.edges_____no_output_____part_2 = community.best_partition(G_1) mod_2 = community.modularity(part_2,G_1)_____no_output_____sorted(G_1.degree, key=lambda x: x[1], reverse=True)_____no_output_____f = open("../filter/final_categories.txt", "r") categories = f.read().split(", ") categories_____no_output_____for a, b in sorted(G_1.degree, key=lambda x: x[1], reverse=True): if str(G.nodes[a]['root']) != "nan" and str(G.nodes[a]['root']) in categories: print(a + "," + str(b) + "," + G.nodes[a]['root'] + "," + node_attr[a]["Body"]) #else: # print(a + " " + str(b))python,1451,language,python is a multi-paradigm dynamically typed multi-purpose programming language. r,534,environment,r is a free open-source programming language & software environment for statistical computing bioinformatics visualization & general computing. django,503,framework,django is an open-source server-side web application framework written in python. matlab,418,language,matlab is a high-level language and interactive programming environment for numerical computation and visualization developed by mathworks. numpy,235,extension,numpy is an extension of the python language that adds support to large multidimensional arrays and matrixes along with a large library of high-level mathematical functions for operations with these arrays. csv,171,database,comma-separated values or character-separated values (csv) is a standard "flat file database" (or spreadsheet-style) format for storing tabular data in plain text with fields separated by a special character (comma tab etc). opencv,171,library,opencv (open source computer vision) is a library for real time computer vision. scipy,140,library,scipy is an open source library of algorithms and mathematical tools for the python programming language. wolfram-mathematica,138,system,wolfram mathematica is a computer algebra system and programming language from wolfram research. matplotlib,133,library,matplotlib is a plotting library for python which may be used interactively or embedded in stand-alone guis. prolog,104,language,prolog is the most commonly used logic programming language. sqlalchemy,100,toolkit,sqlalchemy is a python sql toolkit and object relational mapper that gives application developers the full power and flexibility of sql. lxml,76,library,lxml is a full-featured high performance python library for processing xml and html. 
nltk,74,library,the natural language toolkit is a python library for computational linguistics. virtualenv,73,tool,virtualenv is a tool that creates sandboxed python environments. py2exe,68,extension,py2exe is a python extension that converts python scripts to windows executables. octave,63,language,gnu octave is a free and open-source mathematical software package and scripting language. plone,53,system,plone is a content management system (cms) written in python. antlr,53,tool,antlr another tool for language recognition is a language tool that provides a framework for constructing recognizers interpreters compilers and translators from grammatical descriptions containing actions in a variety of target languages. google-cloud-datastore,49,database,google cloud datastore is a scalable fully-managed nosql document database for web and mobile applications. gnuplot,46,utility,gnuplot is a portable command-line driven graphing utility for linux os/2 ms windows osx vms and many other platforms. tex,42,system,tex is a typesetting system where the output is defined by command-sequences. weka,42,library,weka (waikato environment for knowledge analysis) is an open source machine learning library written in java. flask,42,framework,flask is a lightweight framework for developing web applications using python. cherrypy,40,framework,cherrypy is a pythonic object-oriented http framework. buildout,39,system,zc.buildout is a python-based build system for creating assembling and deploying applications from multiple parts some of which may be non-python-based. web2py,38,framework,web2py is a open-source full-stack web framework written in python 2.x. python-sphinx,37,tool,sphinx is a tool that makes it easy to create intelligent and beautiful documentation. distutils,37,system,distutils is the standard packaging system for python modules and applications. pyramid,36,framework,pyramid is a python-based web framework provided by the pylons project. web.py,36,framework,web.py is a minimalist web framework for python. pythonpath,36,environment,pythonpath is an environment variable that can be used to augment the default search path for module imports in python. scrapy,34,framework,scrapy is a fast open-source high-level screen scraping and web crawling framework written in python used to crawl websites and extract structured data from their pages. edge-detection,32,tool,edge detection is a tool in computer vision used to find discontinuities (edges) in images or graphs. decision-tree,32,tool,a decision tree is a decision support tool that uses a tree-like graph or model of decisions and their possible consequences including chance event outcomes resource costs and utility. celery,31,framework,celery is a distributed task queue framework for python used for asynchronous and parallel execution. gdal,30,library,gdal (geospatial data abstraction library) is a library for reading and writing raster geospatial data formats. pdflatex,29,utility,pdflatex is a command line utility used to create pdfs directly from latex source code. gevent,29,library,gevent is a coroutine-based python networking library that uses greenlet to provide a high-level synchronous api on top of libevent (libev after 1.0) event loop. arcgis,28,platform,arcgis is a platform consisting of a group of geographic information system (gis) software products produced by esri. django-testing,27,framework,django is a high-level python web framework that encourages rapid development and clean pragmatic design. 
pycharm,27,environment,pycharm is an integrated development environment (ide) for python. libsvm,27,library,libsvm is a library for support vector machines geodjango,27,framework,geodjango intends to be a world-class geographic web framework. parser-generator,26,tool,a parser-generator is a tool that accepts a grammar description of a language (usually as an extended backus-naur formalism (ebnf)) and generates computer code that will parse the language described by that grammar. wordnet,25,database,wordnet is a lexical database for the english language. mako,25,library,mako is a template library written in python. iconv,24,library,iconv is a library and api for converting between different character encodings. sweave,24,system,sweave is a system for combining s (or r) code with latex in a single document. suds,24,library,suds is a soap services library for python & javascript. tk,23,toolkit,the tk toolkit is a scripted gui toolkit that is designed to be used from dynamic languages (initially tcl but also perl and python). pinax,22,platform,pinax is an open-source platform built on the django web framework. elementtree,22,library,elementtree is a python library for creating and parsing xml. django-middleware,21,framework,middleware is a framework of hooks into django’s request/response processing. sympy,21,library,sympy is an open source python library for symbolic mathematics. pyinstaller,21,tool,pyinstaller is a multi-platform tool designed to convert python (.py) files into stand-alone executable files on windows linux macos solaris and aix. simulink,20,environment,simulink® is an environment developed by the mathworks for multidomain simulation and model-based design for dynamic and embedded systems. maya,20,tool,autodesk maya is a 3d modeling animation and rendering tool used in film tv and advertising. data.table,20,extension,the r data.table package is an extension of data. bigtable,20,system,bigtable is a distributed storage system (built by google) for managing structured data that is designed to scale to a very large size: petabytes of data across thousands of commodity servers. satchmo,18,framework,satchmo is an open source python framework for e-commerce web applications--it is built on top of the django project. twill,16,language,twill is an open source language written in python that allows browsing the web from a command line interface. tipfy,16,framework,tipfy is a small but powerful framework made specifically for google app engine. xlrd,16,library,xlrd is a python library to extract data from microsoft excel spreadsheet files pandas,15,library,pandas is a python library for data manipulation and analysis e.g. dataframes multidimensional time series and cross-sectional datasets commonly found in statistics experimental science results econometrics or finance. matlab-deployment,15,language,matlab is a high-level language and programming environment developed by mathworks. scapy,15,tool,scapy is a network packet manipulation tool for use with python. maxima,14,system,maxima is an opensource computer algebra system based on the legendary macsyma. werkzeug,14,library,werkzeug is a wsgi utility library for python. antlrworks,13,environment,antlrworks is a grammar development environment for antlr v3 grammars written by jean bovet. genshi,13,library,genshi is a python library that provides an integrated set of components for parsing generating and processing html xml or other textual content for output generation on the web. 
expert-system,13,system,in artificial intelligence an expert system is a computer system that emulates the decision-making ability of a human expert. miktex,12,system,miktex is a typesetting system for microsoft windows consisting of an implementation of tex and a set of related programs. beaker,12,library,beaker is a library for caching and sessions for use with web applications and stand-alone python scripts and applications. manage.py,12,utility,`manage.py` is a command-line utility for django's administrative tasks. zodb,11,database,zodb is an object database for python. dot,11,language,dot is both a mathematical graph description language and a particular graph rendering engine (dot). hindi,11,language,hindi is the most spoken language of india the world's second-most populated country. chaco,10,toolkit,chaco is a python plotting application toolkit that facilitates writing plotting applications at all levels of complexity. gql,10,language,gql is a sql-like language for retrieving entities or keys from google cloud datastore. peg,10,language,a “parsing expression grammar” (“peg”) is a formal language to describe formal languages. django-sites,10,framework,django-sites is a framework for associating objects and functionality to particular web sites. structured-data,10,system,structured data is a system of pairing a name with a value that helps search engines categorize and index your content. ogr,9,library,the ogr simple features library is a c++ open source library (and commandline tools) providing read (and sometimes write) access to a variety of vector file formats including esri shapefiles s-57 sdts postgis oracle spatial and mapinfo mid/mif and tab formats. html5lib,9,library,html5lib is a library for parsing and serializing html documents and fragments in python with ports to dart php and ruby. urlfetch,9,library,urlfetch is a built-in library that is used to fetch external urls. arcobjects,9,environment,arcobjects is a development environment for arcgis applications by esri. ampl,9,language,ampl is an algebraic modeling language for mathematical optimization. qt-designer,9,tool,qt designer is qt's tool for designing and building graphical user interfaces (guis) from qt components. pyscripter,9,environment,pyscripter is a free and open-source software python integrated development environment (ide) editor for windows. pyro,9,library,pyro is a library that enables you to build applications in which python remote objects can talk to each other over the network with minimal programming effort. zpt,8,tool,zope page templates are an html/xml generation tool. bicubic,8,extension,bicubic interpolation is an extension of cubic interpolation for interpolating data points on a two dimensional regular grid. plone-3.x,8,system,plone is a free and open source content management system built on top of the zope application server. rapidminer,8,environment,rapidminer is an environment for machine learning data mining text mining predictive analytics and business analytics. python-requests,8,library,requests is a full-featured python http library with an easy-to-use logical api. python-extensions,8,language,python is an interpreted general-purpose high-level programming language whose design philosophy emphasizes code readability. webapp2,8,framework,webapp2 is a lightweight python web framework compatible with google app engine's webapp. whoosh,8,library,whoosh is a fast featureful full-text indexing and searching library implemented in pure python. 
gnuradio,8,toolkit,gnu radio is a free software development toolkit that provides the signal processing runtime and processing blocks to implement software radios. template-metal,8,extension,metal (macro expansion template attribute language) is a an extension to tal xml-attribute based templating language standard to add macro functionality used by zope page templates chameleon phptal and other templating libraries. matlab-figure,7,language,matlab is a high-level language and programming environment developed by mathworks. logo-lang,7,language,logo is a computer programming language created mainly for the purposes of education. imdb,7,database,the internet movie database (imdb) is an online database of information related to films television programs and video games. pyml,7,framework,pyml is an interactive object oriented framework for machine learning written in python. supervisord,7,system,supervisor is a client/server system that allows its users to control a number of processes on unix-like operating systems. kombu,7,framework,kombu is an amqp messaging framework for python. xmpppy,7,library,xmpppy is a python library that provides support for the xmpp protocol (jabber). formalchemy,7,library,formalchemy is a python library that allows for easy html forms manipulation when using sqlalchemy models. netlogo,7,language,netlogo is an agent-based programming language and integrated modeling environment. epydoc,7,tool,epydoc is a tool for generating api documentation for python modules based on their docstrings. blast,7,tool,blast is a basic local alignment search tool for comparing biological sequence information. wing-ide,7,environment,wing ide is an integrated development environment for python made by wingware and available both in free and paid editions. django-evolution,7,extension,django evolution is an extension to django that allows you to track changes in your models over time and to update the database to reflect those changes. python-sip,6,library,sip is a python library used to port native c/c++ apis into python. django-reversion,6,extension,django-reversion is an extension to the django web framework that provides comprehensive version control facilities. gold-parser,6,system,gold is a parsing system targeting multiple programming language. cheetah,6,framework,cheetah is an open source python based templating framework. stochastic,6,system,a stochastic system is a system which state depends or some random elements making its behavior non-deterministic. flask-sqlalchemy,6,extension,flask-sqlalchemy is an extension for flask that provides sqlalchemy support. gensim,6,framework,gensim is a free python framework designed to automatically extract semantic topics from documents as efficiently (computer-wise) and painlessly (human-wise) as possible. geohashing,6,system,geohash is a latitude/longitude encoding system using base-32 characters. scikit-learn,6,library,scikit-learn is a machine-learning library for python that provides simple and efficient tools for data analysis and data mining with a focus on machine learning. toscawidgets,6,framework,toscawidgets is a wsgi framework for building reusable web ui components in python. soappy,6,library,soapy is a soap/xml schema library for python. dexterity,5,framework,dexterity is a content type framework for cmf applications with particular emphasis on plone. matlab-engine,5,library,matlab engine is a library to call matlab® software from standalone programs for example written in c/c++ or fortran. 
osqa,5,system,osqa (open source question and answer) is an open source question-answer system written in python with django. webharvest,5,tool,web-harvest is open source web data extraction tool written in java. pyquery,5,library,pyquery is a jquery-like library for python that allows you to make jquery queries on xml documents. mox,5,framework,mox is a mock object framework for python. windmill,5,tool,windmill is a web testing tool designed to let you painlessly automate and debug your web application. robotics-studio,5,environment,microsoft robotics developer studio is an environment for simulating and controlling robots. mercury,5,language,mercury is a purely declarative logical/functional language. snowball,5,language,snowball is a small language for writing stemming algorithms used primarily in information retrieval and natural language processing. clevercss,5,language,clevercss is a small markup language for css inspired by python that can be used to build a style sheet in a clean and structured way. pyusb,5,library,pyusb is a python library allowing easy usb access. scraperwiki,5,tool,scraperwiki was an online tool for screen scraping. quantlib,5,library,quantlib is a free and open-source library for quantitative finance. coverage.py,5,tool,coverage.py is a tool for measuring test code coverage of python programs cinema-4d,5,tool,cinema 4d is 3d modeling tool developed by maxon computer gmbh. pyexcelerator,5,library,pyexcelerator is a python library for reading files compatible with excel 95 or later and writing files compatible with excel 97 or later. cytoscape,5,platform,cytoscape is an open source software platform for visualizing complex networks and integrating these with any type of attribute data. sentry,5,platform,sentry is an event logging platform primarily focused on capturing and aggregating exceptions. twitter4j,5,library,twitter4j is a library enabling calls to the twitter api from java. python-coverage,4,tool,coverage.py is a tool for measuring code coverage of python programs. alchemyapi,4,platform,alchemyapi is a saas platform that enriches textual content through automated tagging categorization linguistic analysis and semantic mining. metatrader4,4,platform,metatrader 4 is an electronic client/server trading platform widely used by online retail foreign exchange speculative traders. graphite,4,system,graphite is a highly scalable real-time graphing system specifically built to visualize numeric time-series data. genericsetup,4,toolkit,genericsetup is a zope/cmf-based toolkit for managing site configuration. emotion,4,library,emotion is a performant and flexible css-in-js library. spyder,4,environment,spyder (previously known as pydee) is a powerful interactive development environment for the python language with advanced editing interactive testing debugging and introspection features. buildbot,4,framework,buildbot is an open-source framework for automating software build test and release processes. matlab-guide,4,environment,guide is a graphical user interfaces (gui) design environment in matlab. pybrain,4,library,pybrain is an open source machine-learning library for python. rstudio,4,language,rstudio is an ide for the r statistical programming language. mel,4,language,mel is the maya embedded language used to automate processes in the 3d computer animation and modeling software "maya". 
viola-jones,4,framework,the viola–jones object detection framework is the first object detection framework to provide competitive object detection rates in real-time proposed in 2001 by paul viola and michael jones. alice,4,environment,alice is an innovative 3d programming environment that makes it easy to create an animation for telling a story playing an interactive game or a video to share on the web. mql4,4,language,metaquotes language 4 (mql4) is a language used in the metatrader 4 online-trading terminal software for programming trading strategies. rgl,4,system,`rgl` is a 3d visualization device system for r using opengl as the rendering backend. roxygen2,4,system,roxygen2 is a doxygen-like in-source documentation system for rd collation and namespace. transmogrifier,4,framework,transmogrifier is a framework for creating repeatable/shareable processes in python with modules called blueprints combined into a pipeline. clpq,3,extension,clpq or clp(q) is a prolog language extension for constraint logic programming over the rationals. colander,3,framework,colander is a simple framework for validating serializing and deserializing of data obtained via xml json an html form post or any other equally simple data structure. simplecv,3,framework,simplecv is an open source framework for machine vision matplotlib-basemap,3,library,the matplotlib basemap toolkit is a library for plotting 2d data on maps in python. fuzzer,3,tool,a fuzzer is a tool used to provide invalid and unexpected data to the inputs of a program in order to obtain crashes memory leaks or invalid program states. quantstrat,3,framework,quantstrat is a quantitative strategy framework for r vispy,3,library,vispy is a high-performance interactive 2d/3d data visualization library. jes,3,environment,the (jes)jython environment for students is a full-featured media computation environment for programming in jython. turbo-prolog,3,system,turbo prolog is a very fast old prolog system that only works with ms-dos. logtalk,3,language,logtalk is an object-oriented logic programming language based on prolog twistd,3,utility,`twistd` is an utility that can be used to run twisted applications. keras,3,library,keras is a neural network library providing a high-level api in python and r. use this tag for questions relating to how to use this api. please also include the tag for the language/backend ([python] [r] [tensorflow] [theano] [cntk]) that you are using. mongrel2,3,language,mongrel2 is an application language and network architecture agnostic web server that focuses on web applications using modern browser technologies. openpyxl,3,library,openpyxl is a python library for reading and writing excel 2010 xlsx/xlsm/xltx/xltm files. jsonpickle,3,library,jsonpickle is a python library for serialization and deserialization of complex python objects to and from json. linguaplone,3,tool,linguaplone is a tool to manage and maintain multilingual content that integrates seamlessly with plone. sphinx-apidoc,3,tool,sphinx-apidoc is a command-line tool for automatic generation of sphinx restructuredtext sources that with the sphinx autodoc extension document a whole package in the style of other automatic api documentation tools. mpmath,3,library,mpmath is a python library for arbitrary-precision floating-point arithmetic. tensorflow,3,library,tensorflow is an open-source library and api designed for deep learning written and maintained by google. 
golfscript,3,language,golfscript is a stack oriented esoteric programming language aimed at solving problems in as few keystrokes as possible. yap,3,system,yap is a prolog system developed since 1985 at the universities of porto and rio de janeiro glumpy,3,library,glumpy is a python library for producing scientific visualizations. python-nose,3,tool,nose is an alternate python unittest collecting and running tool. gdbm,3,database,gnu dbm is a key-value database. seaborn,3,library,seaborn is a python data visualization library based on matplotlib. autokey,3,utility,autokey is a desktop automation utility for linux and x11 gasp,3,library,graphics api for students of python (gasp) is a procedural graphics library for beginning programmers. paver,3,tool,paver is a python-based software project scripting tool along the lines of make or rake. turing-lang,2,language,turing is a pascal-like programming language developed in 1982 by ric holt and james cordy then of university of toronto canada. statet,2,ide,statet is an eclipse based ide (integrated development environment) for r. iron,2,framework,iron is a extensible web framework for rust. hoops,2,toolkit,hoops visualize from tech soft 3d is a portable graphics development toolkit for creating or enhancing 3d applications. gap-system,2,system,gap (groups algorithms and programming) is an open source mathematical software system for discrete computational algebra (https://www.gap-system.org). mathgl,2,library,mathgl is a library for making high-quality scientific graphics under linux and windows minikanren,2,system,kanren is a declarative logic programming system with first-class relations embedded in a pure functional subset of scheme. ometa,2,language,ometa is an object oriented language for pattern matching which tries to provide convenient way for programmers to create parsers and compilers. urllib3,2,library,`urllib3` is a 3rd-party python http library with thread-safe connection pooling file post and more. f2py,2,tool,f2py is a tool that provides an interface between the python and fortran programming languages. mapguide,2,platform,mapguide open source is a web-based platform that enables users to develop and deploy web mapping applications and geospatial web services. metapost,2,language,metapost is a picture-drawing language used to generate figures for documents to be printed on postscript printers either directly or embedded in (la)tex documents. church-pl,2,language,the church programming language is a scheme-based probabilistic programming language and toolkit used in cognitive science research. vte,2,library,vte is a library that provides a terminal widget implementation in gtk functions for starting a new process on a new pseudo-terminal and for manipulating pseudo-terminals. gbk,2,extension,gbk is an extension of the gb2312 character set for simplified chinese characters used in the people's republic of china. colorbrewer,2,tool,colorbrewer is an online tool designed to help people select good color schemes for maps and other graphics. dabo,2,framework,dabo is a 3-tier cross-platform application development framework written in python atop the wxpython gui toolkit that enables you to easily create powerful desktop applications. xpce,2,library,xpce is swi-prolog's native gui library gegl,2,framework,gegl or the generic graphics library is a framework for image processing. 
rapache,2,language,rapache is a project supporting web application development using the r statistical language and environment and the apache web server gdcm,2,library,grassroots dicom (gdcm) is a cross-platform library written in c++ for dicom medical files. freemat,2,environment,freemat is a free environment for rapid engineering and scientific prototyping and data processing. grinder,2,framework,the grinder is a java load-testing framework that makes it easy to run a distributed test using many load injector machines. mathprog,2,language,gnu mathprog is a modeling language intended for describing linear mathematical programming models. pgu,2,library,pgu (phil's pygame utilities) is a python library that builds on top of pygame. plunit,2,framework,plunit is a unit testing framework for prolog anylogic,2,tool,anylogic is a flexible multi-method modeling tool capable of producing a variety of simulations. jmespath,2,language,jmespath (json matching expression paths) is a query language for json. glr,2,extension,glr parser ("generalized left-to-right rightmost derivation parser") is an extension of an lr parser algorithm to handle nondeterministic and ambiguous grammars. pymunk,2,library,pymunk is a 2d physics library for python built on top of chipmunk. multi-agent,2,system,a multi-agent system (mas) is a system composed of multiple interacting intelligent agents within a given environment based on the new paradigm for conceptualizing designing and implementing software systems. condor,2,system,condor is a freely available workload management system designed to enable high-throughput computing processes across local and distributed computer networks. aesthetics,2,library,aes short for "aesthetics" is an r library used to "generate aesthetic mappings that describe how variables in the data are mapped to visual properties (aesthetics) of geoms." the aesthetics tag should **not** be used to reference software's artistic merit. disco,2,framework,disco is a distributed computing framework based on the mapreduce paradigm. libtcod,2,library,libtcod (also: the doryen library) is a library for advanced console output and api for developing roguelikes. libtorrent,1,library,libtorrent is a cross-platform library implementing the bittorrent protocol. kivy,1,library,kivy is an open source python library for rapid development of cross-platform applications equipped with novel user interfaces such as multi-touch apps. websphinx,1,library,websphinx is a java class library for building web crawlers. kss,1,framework,kss (kinetic style sheets) is a development framework that enables to develop rich user interfaces with ajax without the need to use any javascript. taskjs,1,library,task.js is an experimental library for es6 that makes sequential blocking i/o simple and beautiful using the power of javascript’s new yield operator. visual-prolog,1,extension,visual prolog is a strongly-typed object-oriented extension of prolog. abbrevia,1,toolkit,abbrevia is a compression toolkit for delphi c++builder kylix and free pascal. django-shop,1,system,django shop is a modular django-based e-commerce system. evil-dicom,1,library,evil dicom is a lightweight library written in c# enabling the inspection and manipulation of dicom files. python-dragonfly,1,framework,dragonfly is a speech recognition framework for python. pop-11,1,language,pop-11 is a reflective incrementally compiled programming language with many of the features of an interpreted language. 
tatukgis,1,toolkit,tatukgis is a comprehensive gis development toolkit with many functions and properties gis internet server and aerial imagery rectification software. lolcode,1,language,lolcode is an esoteric programing language designed to mimic many internet memes. wxformbuilder,1,tool,wxformbuilder is a cross-platform tool that provides a visual interface for designing gui applications using the wxwidgets toolkit. jwi,1,library,the mit java wordnet interface (jwi) is a java library for interfacing with the wordnet electronic dictionary. steam,1,system,steam is an entertainment platform payment system and community for video games. gource,1,tool,gource is a software version control visualization tool. ruffus,1,library,ruffus is a computation pipeline library for python. forbiddenfruit,1,library,forbiddenfruit is a python library that allows patching built in objects tox,1,tool,tox is a generic virtualenv management and test running tool. jacket,1,platform,jacket is a numerical computing platform enabling gpu acceleration of matlab-based codes. ods,1,extension,ods opendocument spreadsheet is an extension of spreadsheet files in open document format for office applications (odf). gspread,1,library,gspread is a python library for interacting with google spreadsheets. testthat,1,tool,testthat is a testing tool for r. tuprolog,1,system,tuprolog is a light-weight prolog system for distributed applications and infrastructures intentionally designed around a minimal core plone-4.x,1,system,plone is a open source content management system developed in the python which is used to create the any kind of website including blogs internet sites webshops and internal websites. urwid,1,library,urwid is a console user interface library for python. appscale,1,platform,appscale is an open-source hybrid cloud platform. gecode,1,toolkit,gecode is a toolkit for developing constraint-based systems and applications. django-rest-framework,1,framework,django is a high-level python web framework that encourages rapid development and clean pragmatic design. xmgrace,1,tool,grace (originally named xmgrace) is a 2d plotting tool which allows you to interactively modify plots to set all kind of plot parameters change the appearance of your figure and to save the figure in the format of your choice. </code>
{ "repository": "guilhermealbm/TechSpaces", "path": "graphs/python_graph.ipynb", "matched_keywords": [ "bioinformatics", "evolution" ], "stars": null, "size": 772523, "hexsha": "48122341ac98095053c8865fa6321c04c1705c06", "max_line_length": 463322, "avg_line_length": 271.3463294696, "alphanum_fraction": 0.5961259406 }
# Notebook from Coalemus/Python-Projects Path: .incomplete/stockpredict/vantage/alphavantage.ipynb <code> from pandas_datareader import data import matplotlib.pyplot as plt import pandas as pd import datetime as dt import urllib.request, json import os import numpy as np import tensorflow as tf # This code has been tested with TensorFlow 1.6 from sklearn.preprocessing import MinMaxScaler_____no_output_____data_source = 'kaggle' # alphavantage or kaggle if data_source == 'alphavantage': # ====================== Loading Data from Alpha Vantage ================================== api_key = '<your API key>' # American Airlines stock market prices ticker = "AAL" # JSON file with all the stock market data for AAL from the last 20 years url_string = "https://www.alphavantage.co/query?function=TIME_SERIES_DAILY&symbol=%s&outputsize=full&apikey=%s"%(ticker,api_key) # Save data to this file file_to_save = 'stock_market_data-%s.csv'%ticker # If you haven't already saved data, # Go ahead and grab the data from the url # And store date, low, high, volume, close, open values to a Pandas DataFrame if not os.path.exists(file_to_save): with urllib.request.urlopen(url_string) as url: data = json.loads(url.read().decode()) # extract stock market data data = data['Time Series (Daily)'] df = pd.DataFrame(columns=['Date','Low','High','Close','Open']) for k,v in data.items(): date = dt.datetime.strptime(k, '%Y-%m-%d') data_row = [date.date(),float(v['3. low']),float(v['2. high']), float(v['4. close']),float(v['1. open'])] df.loc[-1,:] = data_row df.index = df.index + 1 print('Data saved to : %s'%file_to_save) df.to_csv(file_to_save) # If the data is already there, just load it from the CSV else: print('File already exists. Loading data from CSV') df = pd.read_csv(file_to_save) else: # ====================== Loading Data from Kaggle ================================== # You will be using HP's data. Feel free to experiment with other data. # But while doing so, be careful to have a large enough dataset and also pay attention to the data normalization df = pd.read_csv(os.path.join('Stocks','hpq.us.txt'),delimiter=',',usecols=['Date','Open','High','Low','Close']) print('Loaded data from the Kaggle repository')_____no_output_____# Sort DataFrame by date df = df.sort_values('Date') # Double check the result df.head()_____no_output_____plt.figure(figsize = (18,9)) plt.plot(range(df.shape[0]),(df['Low']+df['High'])/2.0) plt.xticks(range(0,df.shape[0],500),df['Date'].loc[::500],rotation=45) plt.xlabel('Date',fontsize=18) plt.ylabel('Mid Price',fontsize=18) plt.show()_____no_output_____# First calculate the mid prices from the highest and lowest high_prices = df.loc[:,'High'].as_matrix() low_prices = df.loc[:,'Low'].as_matrix() mid_prices = (high_prices+low_prices)/2.0_____no_output_____train_data = mid_prices[:11000] test_data = mid_prices[11000:]_____no_output_____# Scale the data to be between 0 and 1 # When scaling remember! 
You normalize both test and train data with respect to training data # Because you are not supposed to have access to test data scaler = MinMaxScaler() train_data = train_data.reshape(-1,1) test_data = test_data.reshape(-1,1)_____no_output_____# Train the Scaler with training data and smooth data smoothing_window_size = 2500 for di in range(0,10000,smoothing_window_size): scaler.fit(train_data[di:di+smoothing_window_size,:]) train_data[di:di+smoothing_window_size,:] = scaler.transform(train_data[di:di+smoothing_window_size,:]) # You normalize the last bit of remaining data scaler.fit(train_data[di+smoothing_window_size:,:]) train_data[di+smoothing_window_size:,:] = scaler.transform(train_data[di+smoothing_window_size:,:])_____no_output_____# Reshape both train and test data train_data = train_data.reshape(-1) # Normalize test data test_data = scaler.transform(test_data).reshape(-1)_____no_output_____# Now perform exponential moving average smoothing # So the data will have a smoother curve than the original ragged data EMA = 0.0 gamma = 0.1 for ti in range(11000): EMA = gamma*train_data[ti] + (1-gamma)*EMA train_data[ti] = EMA # Used for visualization and test purposes all_mid_data = np.concatenate([train_data,test_data],axis=0)_____no_output_____window_size = 100 N = train_data.size std_avg_predictions = [] std_avg_x = [] mse_errors = [] for pred_idx in range(window_size,N): if pred_idx >= N: date = dt.datetime.strptime(k, '%Y-%m-%d').date() + dt.timedelta(days=1) else: date = df.loc[pred_idx,'Date'] std_avg_predictions.append(np.mean(train_data[pred_idx-window_size:pred_idx])) mse_errors.append((std_avg_predictions[-1]-train_data[pred_idx])**2) std_avg_x.append(date) print('MSE error for standard averaging: %.5f'%(0.5*np.mean(mse_errors)))_____no_output_____plt.figure(figsize = (18,9)) plt.plot(range(df.shape[0]),all_mid_data,color='b',label='True') plt.plot(range(window_size,N),std_avg_predictions,color='orange',label='Prediction') #plt.xticks(range(0,df.shape[0],50),df['Date'].loc[::50],rotation=45) plt.xlabel('Date') plt.ylabel('Mid Price') plt.legend(fontsize=18) plt.show()_____no_output_____window_size = 100 N = train_data.size run_avg_predictions = [] run_avg_x = [] mse_errors = [] running_mean = 0.0 run_avg_predictions.append(running_mean) decay = 0.5 for pred_idx in range(1,N): running_mean = running_mean*decay + (1.0-decay)*train_data[pred_idx-1] run_avg_predictions.append(running_mean) mse_errors.append((run_avg_predictions[-1]-train_data[pred_idx])**2) run_avg_x.append(date) print('MSE error for EMA averaging: %.5f'%(0.5*np.mean(mse_errors)))_____no_output_____plt.figure(figsize = (18,9)) plt.plot(range(df.shape[0]),all_mid_data,color='b',label='True') plt.plot(range(0,N),run_avg_predictions,color='orange', label='Prediction') #plt.xticks(range(0,df.shape[0],50),df['Date'].loc[::50],rotation=45) plt.xlabel('Date') plt.ylabel('Mid Price') plt.legend(fontsize=18) plt.show()_____no_output_____class DataGeneratorSeq(object): def __init__(self,prices,batch_size,num_unroll): self._prices = prices self._prices_length = len(self._prices) - num_unroll self._batch_size = batch_size self._num_unroll = num_unroll self._segments = self._prices_length //self._batch_size self._cursor = [offset * self._segments for offset in range(self._batch_size)] def next_batch(self): batch_data = np.zeros((self._batch_size),dtype=np.float32) batch_labels = np.zeros((self._batch_size),dtype=np.float32) for b in range(self._batch_size): if self._cursor[b]+1>=self._prices_length: #self._cursor[b] = b * 
self._segments self._cursor[b] = np.random.randint(0,(b+1)*self._segments) batch_data[b] = self._prices[self._cursor[b]] batch_labels[b]= self._prices[self._cursor[b]+np.random.randint(0,5)] self._cursor[b] = (self._cursor[b]+1)%self._prices_length return batch_data,batch_labels def unroll_batches(self): unroll_data,unroll_labels = [],[] init_data, init_label = None,None for ui in range(self._num_unroll): data, labels = self.next_batch() unroll_data.append(data) unroll_labels.append(labels) return unroll_data, unroll_labels def reset_indices(self): for b in range(self._batch_size): self._cursor[b] = np.random.randint(0,min((b+1)*self._segments,self._prices_length-1)) dg = DataGeneratorSeq(train_data,5,5) u_data, u_labels = dg.unroll_batches() for ui,(dat,lbl) in enumerate(zip(u_data,u_labels)): print('\n\nUnrolled index %d'%ui) dat_ind = dat lbl_ind = lbl print('\tInputs: ',dat ) print('\n\tOutput:',lbl)_____no_output_____D = 1 # Dimensionality of the data. Since your data is 1-D this would be 1 num_unrollings = 50 # Number of time steps you look into the future. batch_size = 500 # Number of samples in a batch num_nodes = [200,200,150] # Number of hidden nodes in each layer of the deep LSTM stack we're using n_layers = len(num_nodes) # number of layers dropout = 0.2 # dropout amount tf.reset_default_graph() # This is important in case you run this multiple times_____no_output_____# Input data. train_inputs, train_outputs = [],[] # You unroll the input over time defining placeholders for each time step for ui in range(num_unrollings): train_inputs.append(tf.placeholder(tf.float32, shape=[batch_size,D],name='train_inputs_%d'%ui)) train_outputs.append(tf.placeholder(tf.float32, shape=[batch_size,1], name = 'train_outputs_%d'%ui))_____no_output_____lstm_cells = [ tf.contrib.rnn.LSTMCell(num_units=num_nodes[li], state_is_tuple=True, initializer= tf.contrib.layers.xavier_initializer() ) for li in range(n_layers)] drop_lstm_cells = [tf.contrib.rnn.DropoutWrapper( lstm, input_keep_prob=1.0,output_keep_prob=1.0-dropout, state_keep_prob=1.0-dropout ) for lstm in lstm_cells] drop_multi_cell = tf.contrib.rnn.MultiRNNCell(drop_lstm_cells) multi_cell = tf.contrib.rnn.MultiRNNCell(lstm_cells) w = tf.get_variable('w',shape=[num_nodes[-1], 1], initializer=tf.contrib.layers.xavier_initializer()) b = tf.get_variable('b',initializer=tf.random_uniform([1],-0.1,0.1))_____no_output_____# Create cell state and hidden state variables to maintain the state of the LSTM c, h = [],[] initial_state = [] for li in range(n_layers): c.append(tf.Variable(tf.zeros([batch_size, num_nodes[li]]), trainable=False)) h.append(tf.Variable(tf.zeros([batch_size, num_nodes[li]]), trainable=False)) initial_state.append(tf.contrib.rnn.LSTMStateTuple(c[li], h[li])) # Do several tensor transofmations, because the function dynamic_rnn requires the output to be of # a specific format. 
Read more at: https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn all_inputs = tf.concat([tf.expand_dims(t,0) for t in train_inputs],axis=0) # all_outputs is [seq_length, batch_size, num_nodes] all_lstm_outputs, state = tf.nn.dynamic_rnn( drop_multi_cell, all_inputs, initial_state=tuple(initial_state), time_major = True, dtype=tf.float32) all_lstm_outputs = tf.reshape(all_lstm_outputs, [batch_size*num_unrollings,num_nodes[-1]]) all_outputs = tf.nn.xw_plus_b(all_lstm_outputs,w,b) split_outputs = tf.split(all_outputs,num_unrollings,axis=0)_____no_output_____# When calculating the loss you need to be careful about the exact form, because you calculate # loss of all the unrolled steps at the same time # Therefore, take the mean error or each batch and get the sum of that over all the unrolled steps print('Defining training Loss') loss = 0.0 with tf.control_dependencies([tf.assign(c[li], state[li][0]) for li in range(n_layers)]+ [tf.assign(h[li], state[li][1]) for li in range(n_layers)]): for ui in range(num_unrollings): loss += tf.reduce_mean(0.5*(split_outputs[ui]-train_outputs[ui])**2) print('Learning rate decay operations') global_step = tf.Variable(0, trainable=False) inc_gstep = tf.assign(global_step,global_step + 1) tf_learning_rate = tf.placeholder(shape=None,dtype=tf.float32) tf_min_learning_rate = tf.placeholder(shape=None,dtype=tf.float32) learning_rate = tf.maximum( tf.train.exponential_decay(tf_learning_rate, global_step, decay_steps=1, decay_rate=0.5, staircase=True), tf_min_learning_rate) # Optimizer. print('TF Optimization operations') optimizer = tf.train.AdamOptimizer(learning_rate) gradients, v = zip(*optimizer.compute_gradients(loss)) gradients, _ = tf.clip_by_global_norm(gradients, 5.0) optimizer = optimizer.apply_gradients( zip(gradients, v)) print('\tAll done')_____no_output_____print('Defining prediction related TF functions') sample_inputs = tf.placeholder(tf.float32, shape=[1,D]) # Maintaining LSTM state for prediction stage sample_c, sample_h, initial_sample_state = [],[],[] for li in range(n_layers): sample_c.append(tf.Variable(tf.zeros([1, num_nodes[li]]), trainable=False)) sample_h.append(tf.Variable(tf.zeros([1, num_nodes[li]]), trainable=False)) initial_sample_state.append(tf.contrib.rnn.LSTMStateTuple(sample_c[li],sample_h[li])) reset_sample_states = tf.group(*[tf.assign(sample_c[li],tf.zeros([1, num_nodes[li]])) for li in range(n_layers)], *[tf.assign(sample_h[li],tf.zeros([1, num_nodes[li]])) for li in range(n_layers)]) sample_outputs, sample_state = tf.nn.dynamic_rnn(multi_cell, tf.expand_dims(sample_inputs,0), initial_state=tuple(initial_sample_state), time_major = True, dtype=tf.float32) with tf.control_dependencies([tf.assign(sample_c[li],sample_state[li][0]) for li in range(n_layers)]+ [tf.assign(sample_h[li],sample_state[li][1]) for li in range(n_layers)]): sample_prediction = tf.nn.xw_plus_b(tf.reshape(sample_outputs,[1,-1]), w, b) print('\tAll done')_____no_output_____ mse_test_loss = 0.0 our_predictions = [] if (ep+1)-valid_summary==0: # Only calculate x_axis values in the first validation epoch x_axis=[] # Feed in the recent past behavior of stock prices # to make predictions from that point onwards for tr_i in range(w_i-num_unrollings+1,w_i-1): current_price = all_mid_data[tr_i] feed_dict[sample_inputs] = np.array(current_price).reshape(1,1) _ = session.run(sample_prediction,feed_dict=feed_dict) feed_dict = {} current_price = all_mid_data[w_i-1] feed_dict[sample_inputs] = np.array(current_price).reshape(1,1) # Make predictions for this many 
steps # Each prediction uses previous prediciton as it's current input for pred_i in range(n_predict_once): pred = session.run(sample_prediction,feed_dict=feed_dict) our_predictions.append(np.asscalar(pred)) feed_dict[sample_inputs] = np.asarray(pred).reshape(-1,1) if (ep+1)-valid_summary==0: # Only calculate x_axis values in the first validation epoch x_axis.append(w_i+pred_i) mse_test_loss += 0.5*(pred-all_mid_data[w_i+pred_i])**2 session.run(reset_sample_states) predictions_seq.append(np.array(our_predictions)) mse_test_loss /= n_predict_once mse_test_loss_seq.append(mse_test_loss) if (ep+1)-valid_summary==0: x_axis_seq.append(x_axis) current_test_mse = np.mean(mse_test_loss_seq) # Learning rate decay logic if len(test_mse_ot)>0 and current_test_mse > min(test_mse_ot): loss_nondecrease_count += 1 else: loss_nondecrease_count = 0 if loss_nondecrease_count > loss_nondecrease_threshold : session.run(inc_gstep) loss_nondecrease_count = 0 print('\tDecreasing learning rate by 0.5') test_mse_ot.append(current_test_mse) print('\tTest MSE: %.5f'%np.mean(mse_test_loss_seq)) predictions_over_time.append(predictions_seq) print('\tFinished Predictions')_____no_output_____best_prediction_epoch = 28 # replace this with the epoch that you got the best results when running the plotting code plt.figure(figsize = (18,18)) plt.subplot(2,1,1) plt.plot(range(df.shape[0]),all_mid_data,color='b') # Plotting how the predictions change over time # Plot older predictions with low alpha and newer predictions with high alpha start_alpha = 0.25 alpha = np.arange(start_alpha,1.1,(1.0-start_alpha)/len(predictions_over_time[::3])) for p_i,p in enumerate(predictions_over_time[::3]): for xval,yval in zip(x_axis_seq,p): plt.plot(xval,yval,color='r',alpha=alpha[p_i]) plt.title('Evolution of Test Predictions Over Time',fontsize=18) plt.xlabel('Date',fontsize=18) plt.ylabel('Mid Price',fontsize=18) plt.xlim(11000,12500) plt.subplot(2,1,2) # Predicting the best test prediction you got plt.plot(range(df.shape[0]),all_mid_data,color='b') for xval,yval in zip(x_axis_seq,predictions_over_time[best_prediction_epoch]): plt.plot(xval,yval,color='r') plt.title('Best Test Predictions Over Time',fontsize=18) plt.xlabel('Date',fontsize=18) plt.ylabel('Mid Price',fontsize=18) plt.xlim(11000,12500) plt.show() _____no_output_____ </code>
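The comments in the scaling cells above stress that the scaler statistics must come from the training split only, because test data is not supposed to be available at fit time. The snippet below is a minimal standalone sketch of that principle on a toy series (it deliberately omits the windowed re-fitting and EMA smoothing performed in the notebook, and the series here is an illustrative assumption, not the HP/AAL data):

<code>
# Minimal sketch of train-only MinMaxScaler fitting (simplified; the notebook itself
# re-fits the scaler over successive windows of the training data).
import numpy as np
from sklearn.preprocessing import MinMaxScaler

series = np.linspace(10.0, 20.0, 200).reshape(-1, 1)   # toy 1-D price series (assumption)
train, test = series[:150], series[150:]

scaler = MinMaxScaler()
scaler.fit(train)                      # statistics come from the training split only
train_scaled = scaler.transform(train)
test_scaled = scaler.transform(test)   # test data reuses the training min/max
print(train_scaled.min(), train_scaled.max(), test_scaled.max())
</code>

Because the toy test values lie above the training maximum, the transformed test data exceeds 1.0 here; that is the expected behaviour when no test information leaks into the scaler.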
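For reference, the two non-learned baselines evaluated above (standard window averaging and exponential moving average prediction) can be reproduced in isolation. The sketch below runs them on a synthetic random walk, so the data, window size and decay value are illustrative assumptions rather than the notebook's stock series:

<code>
# Minimal, self-contained sketch of the two baseline predictors used above:
# window ("standard") averaging and exponential moving average (EMA) prediction.
import numpy as np

rng = np.random.default_rng(0)
prices = 100.0 + np.cumsum(rng.normal(0.0, 1.0, 2000))  # synthetic mid-price random walk

# Standard averaging: predict the next point as the mean of the previous window.
window_size = 100
std_sq_errors = []
for t in range(window_size, len(prices)):
    pred = prices[t - window_size:t].mean()
    std_sq_errors.append((pred - prices[t]) ** 2)
print('MSE for standard averaging: %.5f' % (0.5 * np.mean(std_sq_errors)))

# EMA averaging: running_mean_t = decay * running_mean_(t-1) + (1 - decay) * price_(t-1),
# starting from 0.0 as in the notebook's cell.
decay = 0.5
running_mean = 0.0
ema_sq_errors = []
for t in range(1, len(prices)):
    running_mean = decay * running_mean + (1.0 - decay) * prices[t - 1]
    ema_sq_errors.append((running_mean - prices[t]) ** 2)
print('MSE for EMA averaging: %.5f' % (0.5 * np.mean(ema_sq_errors)))
</code>

Both baselines only use values up to t-1 when predicting the value at t, mirroring the one-step-ahead setup of the notebook's averaging cells.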
{ "repository": "Coalemus/Python-Projects", "path": ".incomplete/stockpredict/vantage/alphavantage.ipynb", "matched_keywords": [ "evolution" ], "stars": null, "size": 23041, "hexsha": "4812fef4790483a11e1fe6a40c674368b294593a", "max_line_length": 143, "avg_line_length": 36.8656, "alphanum_fraction": 0.5677704961 }
# Notebook from junyaogz/topic.recognition.td3- Path: src/5_train_and_test_model1_in_tf/train_model1_and_test.ipynb <code> # [Author]: Jun Yao # [Date]: 2021-12-10 # [Description] # this file has the following functionalities # (1) train model 1 in the paper and evaluate it against test data with golden labels. # (2) calculate random guess accuracy # (3) evaluate the decoded texts from model 2 (tri3 model trained in Kaldi). # input of this script: # stm_transcript_labels.csv # test_text_from_model2.csv # output of this script: # prediction accuracy in the conclusion # [Conclusion] # (1) random guess accuracy is merely 0.11, # (2) test accuracy of model 1 using the transcripts provided by TEDLIUM-3 is 0.40. # (3) test accuracy of model 1 using the decoded text provided by model 2 is 0.28. # as a reference, human prediction accuracy by the author is 0.53 (tried 3 times and pick the highest), # [References] # 1. https://keras.io/examples/nlp/multi_label_classification/ # 2. https://en.wikipedia.org/wiki/Multi-label_classification_____no_output_____from tensorflow.keras import layers from tensorflow import keras import tensorflow as tf from sklearn.model_selection import train_test_split from ast import literal_eval import matplotlib.pyplot as plt import pandas as pd import numpy as np_____no_output_____orig_data = pd.read_csv("stm_transcript_labels.csv",sep=",", error_bad_lines=False) print(f"There are {len(orig_data)} rows in the dataset.") orig_data.head()There are 2351 rows in the dataset. # ================ Remove duplicate items total_duplicate_titles = sum(orig_data["titles"].duplicated()) print(f"There are {total_duplicate_titles} duplicate titles.") orig_data = orig_data[~orig_data["titles"].duplicated()] print(f"There are {len(orig_data)} rows in the deduplicated dataset.") # There are some terms with occurrence as low as 1. print(sum(orig_data["terms"].value_counts() == 1)) # How many unique terms? print(orig_data["terms"].nunique()) # Filtering the rare terms. orig_data_filtered = orig_data.groupby("terms").filter(lambda x: len(x) > 1) orig_data_filtered.shape # ================ Convert the string labels to lists of strings orig_data_filtered["terms"] = orig_data_filtered["terms"].apply(lambda x: literal_eval(x)) orig_data_filtered["terms"].values[:5] # ================ Use stratified splits because of class imbalance test_split = 0.4 # Initial train and test split. train_df, test_df = train_test_split( orig_data_filtered, test_size=test_split, stratify=orig_data_filtered["terms"].values, ) # Splitting the test set further into validation # and new test sets. val_df = test_df.sample(frac=0.5) test_df.drop(val_df.index, inplace=True) print(f"Number of rows in training set: {len(train_df)}") print(f"Number of rows in validation set: {len(val_df)}") print(f"Number of rows in test set: {len(test_df)}")There are 0 duplicate titles. There are 2351 rows in the deduplicated dataset. 
1668 1940 Number of rows in training set: 409 Number of rows in validation set: 137 Number of rows in test set: 137 # ================ Multi-label binarization terms = tf.ragged.constant(train_df["terms"].values) #terms = tf.ragged.constant(orig_data_filtered["terms"].values) lookup = tf.keras.layers.StringLookup(output_mode="multi_hot") lookup.adapt(terms) vocab = lookup.get_vocabulary() def invert_multi_hot(encoded_labels): """Reverse a single multi-hot encoded label to a tuple of vocab terms.""" hot_indices = np.argwhere(encoded_labels == 1.0)[..., 0] return np.take(vocab, hot_indices) print("Vocabulary:\n") print(vocab) sample_label = train_df["terms"].iloc[0] print(f"Original label: {sample_label}") label_binarized = lookup([sample_label]) print(f"Label-binarized representation: {label_binarized}")Vocabulary: ['[UNK]', 'culture', 'business', 'design', 'TEDx', 'global issues', 'entertainment', 'art', 'cities', 'brain', 'technology', 'creativity', 'activism', 'communication', 'collaboration', 'architecture', 'computers', 'Africa', 'economics', 'health care', 'health', 'education', 'biology', 'science', 'music', 'climate change', 'biotech', 'TED Fellows', 'evolution', 'history', 'community', 'humor', 'future', 'disease', 'demo', 'animals', 'TED Prize', 'DNA', 'performance', 'medicine', 'environment', 'cognitive science', 'cancer', 'astronomy', 'aging', 'religion', 'neuroscience', 'innovation', 'exploration', 'engineering', 'data', 'consumerism', 'consciousness', 'AIDS', 'social change', 'politics', 'film', 'energy', 'Internet', 'personal growth', 'media', 'language', 'government', 'entrepreneur', 'crime', 'compassion', 'biodiversity', 'AI', 'storytelling', 'photography', 'medical research', 'industrial design', 'humanity', 'finance', 'feminism', 'depression', 'television', 'sustainability', 'spoken word', 'space', 'race', 'prison', 'poetry', 'physics', 'nan', 'military', 'memory', 'love', 'illness', 'happiness', 'genetics', 'family', 'disability', 'corruption', 'capitalism', 'bacteria', 'atheism', 'anthropology', 'algorithm', 'agriculture', 'Planets', 'Autism spectrum disorder', 'war', 'typography', 'transportation', 'sports', 'solar energy', 'social media', 'self', 'relationships', 'productivity', 'poverty', 'philosophy', 'philanthropy', 'parenting', 'paleontology', 'ocean', 'nature', 'motivation', 'math', 'marketing', 'magic', 'literature', 'leadership', 'law', 'invention', 'interview', 'insects', 'illusion', 'identity', 'flight', 'empathy', 'drones', 'driverless cars', 'dinosaurs', 'democracy', 'decision-making', 'death', 'dance', 'curiosity', 'comedy', 'china', 'botany', 'bees', 'Vaccines', 'String theory', 'Moon', 'Mission Blue', 'Middle East', 'Egypt', 'Buddhism', 'Big Bang', 'Asia'] Original label: ['activism', 'culture', 'global issues'] Label-binarized representation: [[0. 1. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 
0.]] # ================ Data preprocessing and tf.data.Dataset objects train_df["summaries"].apply(lambda x: len(x.split(" "))).describe() _____no_output_____max_seqlen = 2200 batch_size = 128 padding_token = "<pad>" auto = tf.data.AUTOTUNE def unify_text_length(text, label): # Split the given abstract and calculate its length. word_splits = tf.strings.split(text, sep=" ") sequence_length = tf.shape(word_splits)[0] # Calculate the padding amount. padding_amount = max_seqlen - sequence_length # Check if we need to pad or truncate. if padding_amount > 0: unified_text = tf.pad([text], [[0, padding_amount]], constant_values="<pad>") unified_text = tf.strings.reduce_join(unified_text, separator="") else: unified_text = tf.strings.reduce_join(word_splits[:max_seqlen], separator=" ") # The expansion is needed for subsequent vectorization. return tf.expand_dims(unified_text, -1), label def make_dataset(dataframe, is_train=True): labels = tf.ragged.constant(dataframe["terms"].values) label_binarized = lookup(labels).numpy() dataset = tf.data.Dataset.from_tensor_slices( (dataframe["summaries"].values, label_binarized) ) dataset = dataset.shuffle(batch_size * 10) if is_train else dataset dataset = dataset.map(unify_text_length, num_parallel_calls=auto).cache() return dataset.batch(batch_size) # prepare the tf.data.Dataset objects train_dataset = make_dataset(train_df, is_train=True) validation_dataset = make_dataset(val_df, is_train=False) test_dataset = make_dataset(test_df, is_train=False)_____no_output_____# ================ Dataset preview text_batch, label_batch = next(iter(train_dataset)) for i, text in enumerate(text_batch[:5]): label = label_batch[i].numpy()[None, ...] print(f"Abstract: {text[0]}") print(f"Label(s): {invert_multi_hot(label[0])}") print(" ")Abstract: b'make a legal and spiritual decision to spend the rest of their lives together. and not to have sex with anyone else. else. ever he buys a ring she buys a dress. they go shopping for all sorts of things she takes him to arthur murray for ballroom dancing lessons. and the big day comes and they ll stand before god and family and some guy her dad once did business with and they ll vow that nothing. not abject poverty not life threatening illness not complete. and utter misery will ever put the tiniest damper on their eternal love and devotion. these optimistic young bastards promise to honor and cherish each other through. hot flashes and midlife crises and a cumulative. 50 pound weight gain until that. when one of them. is finally able to rest in peace you. know because they can t hear the snoring anymore. and then they ll get stupid drunk and smash cake in each other s faces and do the macarena and we ll be there showering them with towels and toasters and drinking their free booze and throwing. birdseed at them every single time. even though we know statistically half of them will be divorced within a decade. of course the other half won t right they ll keep forgetting anniversaries and arguing about where to spend holidays and debating. which way the toilet paper. should come off of the roll and. some of them will even still be enjoying each other s company when neither of them can chew solid food anymore. and researchers want to know why i mean look it doesn t take a. study to figure out what makes a marriage not. too much time on facebook having sex with other people but you can have the exact opposite of all of those things respect. excitement. a broken internet connection. 
monogamy and the thing still can go to hell in a handbasket so what s. what do the folks who make it all the way to side by side burial plots have in common what are they doing right what can we learn from them. and if you re still happily sleeping solo why should you stop what you re doing and make it your life s work to find that one special person that you can annoy for the rest. researchers spend billions of your tax dollars trying to figure that out. they stalk blissful couples and study their every move and mannerism and they try to pinpoint what it is that sets them apart from their miserable neighbors and friends. and it turns out the success stories share a few similarities beyond that they don t have sex with other people. for instance in the happiest marriages. wife is thinner and better looking than the husband obvious right it s obvious that this leads to marital bliss because women. we care a great deal about being thin and good looking whereas men mostly care about sex. ideally with women who are thinner and better looking than they are. the beauty of this research though is that no one is suggesting that women have to be thin to be happy we just have to be thinner than our partners. so instead of all that laborious dieting and exercising we just need to wait for them to get fat. maybe bake a few pies this is good information to have and it s not that complicated. research also suggests that the happiest couples are the ones that focus on the positives. for example the happy wife instead of pointing out her husband s growing gut or suggesting he go for a run she might say. wow honey thank you for going out of your way to make me relatively thinner. these are couples who can find good in any situation yeah it was devastating when we lost everything in that fire. but it s kind of nice sleeping out here under the stars. and it s a good thing you ve got all that body fat to keep us warm. one of my favorite studies found that the more willing a husband is to do housework the more attractive his wife will find him. because we needed a study to tell us this. laughter but here s what s going on here the more attractive she finds him the more sex they have the more sex they have the nicer he is to her the. nicer he is to her the less she nags him about leaving wet towels on the bed and ultimately they live happily ever after. in other words men you might want to pick it up a notch in the domestic department. here s an interesting one one study found that people who smile in childhood photographs. are less likely to get a divorce this is an actual study. and let me clarify the researchers were not looking at documented self reports of childhood happiness or even studying old journals the data were based entirely on whether people looked happy in these early pictures. now i don t know how old all of you are but when i was a kid your parents took. they didn t take three hundred shots of you in that rapid fire digital video mode and then pick out the nicest smiliest one for the christmas card. no they dressed you up they lined you up and you smiled for the fucking camera like they told you to or you could kiss your birthday. i have a huge pile of fake happy childhood pictures and. i m glad they make me less likely than some people to get a divorce. so what else can you do to safeguard your marriage do not win an oscar for best actress. i m serious bettie davis joan crawford halle berry hilary swank sandra bullock. 
reese witherspoon all of them single soon after taking home that statue they actually call it the oscar curse it is the marriage kiss of death and something that should be avoided. and it s not just successfully starring in films that s dangerous it turns out merely watching a romantic comedy causes relationship satisfaction to plummet. laughter apparently. bitter realization that maybe it could happen to us but it obviously hasn t and it probably never will makes our lives seem unbearably grim in comparison. and theoretically i suppose if we opt for a film where someone gets brutally murdered or dies in a fiery car crash. we are more likely to walk out of that theater feeling like we ve got it pretty good. i can t tell you anymore about that one because i stopped reading it at the headline but here s a scary one divorce is contagious that. s right when you have a close couple friend split up it increases your chances of getting a divorce by. percent now i have to say i don t get this one at all my husband and i have watched quite a few friends. divide their assets. and then struggle with being our age and single in an age of sexting and viagra and eharmony and. i m thinking they ve done more for my marriage than a lifetime of therapy ever could so now you may be wondering why does anyone. the us federal government counts more than a thousand legal benefits to being someone s spouse. a list that includes visitation rights in jail but hopefully you ll never need that one. but beyond the profound federal perks married people make more money we. re healthier physically and emotionally we produce happier more stable and more successful kids. we have more sex than our supposedly swinging single friends believe it or not. we even live longer which is a pretty compelling argument for marrying someone you like a lot in the first place. laughter now if you re not currently experiencing the joy of the joint tax return i can t tell you how to find. of the approximately ideal size and attractiveness who prefers horror movies and doesn t have a lot of friends hovering on the brink of divorce. but i can only encourage you to try because the benefits as i ve pointed out. are significant the. bottom line is whether you re in it or you re searching for it i believe marriage is an institution worth pursuing. so i hope you ll use the information i ve given you today to weigh your personal strengths against your own risk factors. for instance in my marriage i d say i m doing ok. one the one hand i have a husband who s annoyingly lean and incredibly handsome so i m obviously going to need fatten him up. and like i said we have those divorced friends who may secretly or subconsciously be trying to break us up so we have to keep an eye. on that and we do like a cocktail or two on the other hand. i have the fake happy picture thing. and also my husband does a lot around the house and would happily never see another romantic comedy as long as he lives so i ve got all those things going for me. i plan to work extra hard to not win an oscar anytime soon and for the good of your relationships i would encourage you to do the same i ll. 
see you at the bar<pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad>' Label(s): ['culture' 'TEDx' 'humor'] Abstract: b'reality today. that you can download products from the web product data i should say from the web. perhaps tweak it and personalize it to your own preference or your own taste and have that information. sent to a desktop machine. that will fabricate it for you on the spot we can actually build for you very rapidly a physical object. and the reason we can do this is through an emerging technology called additive manufacturing or 3d printing. this is a 3d printer. they have been around for almost thirty years now which is quite amazing to think of but. 
they re only just starting to filter into the public arena. and typically you would take data like the data of a pen here which would be a geometric representation of that product in 3d. and we would pass that data with material into a machine and a process that would happen in the machine would mean layer by layer that. product would be built and we can take out the physical product and ready to use or to perhaps assemble into something else. but if these machines have been around for almost thirty years why don t we know about them. because typically they ve been. too inefficient inaccessible they ve not been fast enough they ve been quite expensive but today. becoming a reality that they are now becoming successful many barriers are breaking down that means that you guys will soon be able to access one of these machines if not this minute. and it will change and disrupt the landscape of manufacturing and most certainly our lives our businesses and the lives of our children. so how does it work. it typically reads cad data which is a product design data created on professional product design programs and here you can see an engineer it could be an architect or it could be a professional product designer. create a product in 3d and this data gets sent to a machine that slices the data into. of that product. all the way through almost like slicing it like salami. and that data layer by layer gets passed through the machine starting at the base of the product and depositing material layer upon layer infusing the new layer of materials to the old layer. in an additive process and this material that s deposited either starts as a liquid. material powder form. and the bonding process can happen by either melting and depositing or depositing then melting in this case we can see a laser sintering machine developed by eos it s actually using a laser to fuse the new layer of material to the old layer. and over time quite rapidly actually in a number of hours we can build a physical product ready to take out of the machine and use. and this is quite an extraordinary idea but it is reality today. so all these products that you can see on the screen were made in the same way they were all 3d printed and. you can see they re ranging from shoes rings that were made out of stainless steal phone covers out of plastic. all the way through to spinal implants for example. ll notice about all of these products is they re very very intricate the design is. quite extraordinary. because we re taking this data in 3d form slicing it up before it gets past the machine we can actually create structures that are more. intricate than any other manufacturing technology or in. are impossible to build in any other way. and you can create parts with moving components hinges parts within parts so in some cases we can abolish totally the need for manual labor. it sounds great. it is great. we can have 3d printers today that build structures like these this is almost three meters high. and this was built by depositing artificial sandstone layer upon layer in layers of. about five millimeters to ten mm in thickness slowly growing this structure this was created by an architectural firm called shiro. and you can actually walk into it. and on the other end of the spectrum this is a microstructure it s. created depositing layers of about four microns so really the resolution is quite incredible the detail that you can get today. is quite amazing so who s using it typically because we can create products very rapidly. 
it s been used by product designers or anyone who wanted to prototype a product. and very quickly create or reiterate a design. and actually what s quite amazing about this technology as well is that you can create bespoke products en masse there s very little economies. of scale so you can now create one offs very easily. architects for example they want to create prototypes of buildings again you can see this is a. building of the free university in berlin and it was designed by foster and partners. again not buildable in any other way and very hard to even create this by hand. now this is an. engine. component it was developed by a company called within technologies and 3t rpd. it s very very very detailed inside with the design now 3d printing can break away barriers in design which challenge the constraints of mass production. actually sitting here you can see that it has a number of cooling channels pass through it. which means it s a more efficient product you can t create this with standard manufacturing techniques even if you tried to do it manually. it s more efficient. because we can now create all these cavities within the object that cool fluid and it s used by aerospace. and automotive it s a lighter part. and. uses less material waste so it s overall performance and efficiency. just exceeds standard mass produced products. and then taking this idea of creating a very detailed structure we can apply it to honeycomb structures and use them within. implants typically an implant is. more effective within the body if it s more porous because our body tissue will grow into it. there s a lower chance of rejection. but it s very hard to create that in standard ways with 3d printing. we re seeing today that we can create much better implants and in fact because we can create bespoke products en masse one offs we can create implants that are specific to individuals. so as you can see this technology and the quality of what comes out of the machines is fantastic and we re starting to see it being used for final end products. and in fact as the detail is improving the quality is improving the price of the machines. are falling and they re becoming quicker they re also now small enough to sit on a desktop you can buy a machine today for. that you can create yourself. which is quite incredible but then it begs the question why don t we all have one in our home. because simply most of us here today don t know how to create the data that a 3d printer reads. if i gave you a 3d printer you wouldn t know how to direct it to make what you want it to but there are more and more. technologies software and processes today that are breaking down those barriers i believe we re at a tipping point where. this is now. something that we can t. avoid this technology is really going to disrupt the landscape of manufacturing and i believe cause a revolution in manufacturing. so today you can download products from the web. anything you would have on your desktop. you can use software like google sketchup to create products from scratch very easily. 3d printing can be also used to download spare parts from the web. so imagine you have say a hoover in your home and it has broken down you need a spare part but you realize that that hoover s been discontinued. can you imagine going online this is a reality. and finding that spare part from a database of geometries of that discontinued product and downloading that information that data and having the product. 
made for you at home ready to use on your demand and in fact because we can create spare parts with things the machines are quite literally making. you re having machines fabricate themselves these are parts of a reprap machine which is a kind of desktop printer. but what interests my company the most is the fact that you can create individual unique products en masse. there s no need to do a run of thousands of millions or send that product to be injection molded in china you can just make it physically on the spot. which means that we can now present to the public. the next generation of customization this is something that is now possible today that you can direct personally how you want your products to look we. re all familiar with the idea of customization or personalization brands like nike are doing it it s all over the web in fact every major household name is allowing you to interact with their products. on a daily basis all the way from smart cars to prada. to ray ban for example but this is not really mass customization it s known as variant production variations of the same product. what you could do is really influence your product now. i m not sure about you guys but i ve had experiences when i ve walked into a store and i ve know exactly what i ve wanted and i ve searched everywhere for that perfect lamp that i know where i want to sit in my house. and i just can t find the right thing or that perfect piece of jewelry. as a gift or for myself imagine that you can now engage with a brand. and. so that you can pass your personal attributes to the products that you re about to buy. you can today download a product with software like this. view the product in 3d this is the sort of 3d data that a machine will read this is a lamp and. you can start iterating the design you can direct. what color that product will. perhaps what material. and also you can engage in shape manipulation of that product but within boundaries that are safe because obviously the public are not professional product designers the piece of software will keep an individual within the bounds of the possible. and when somebody is ready to purchase the product in their personalized design they click enter and this data gets converted into the data that a 3d printer reads. and gets passed to. perhaps on someone s desktop. but i don t think that that s immediate i don t think that will happen soon what s more likely and we re seeing it today is that that data gets sent. to a local manufacturing center this means lower carbon footprint we re now instead of shipping a. data across the internet here s the product being built. you can see this came out of the machine in one piece and the electronics were inserted later it s this lamp as you can see here. so as long as you have the data you can create the part on demand. and you don t necessarily need to use this for just aesthetic customization you can use it for functional customization scanning parts of the body and creating. things that are made to fit so we can run this through to something like prosthetics which is highly specialized to an individual s handicap. or we can create very specific. prosthetics for that individual. scanning teeth today you can have your teeth scanned and dental coatings made in this way to fit you. while you wait at the dentist a machine will quietly be creating this for you ready to insert in the teeth. and the idea of now creating implants. 
scanning data an mri scan of somebody can now be converted into 3d data and we can create very specific implants. for them. and applying this to the idea of building up what s in our bodies you. know this is pair of lungs and the bronchial tree it s very intricate you couldn t really create this or simulate it in any. but with mri data we can just build the product as you can see very intricately. using this process pioneers in the industry are layering up cells today so one of the pioneers for example is dr anthony atala. and he has been working on layering cells to create body parts. bladders valves kidneys now this is not. something that s ready for' Label(s): ['business' 'design' 'technology'] Abstract: b'i ve got a great idea that s going to change the world it s fantastic it s going to blow your mind. it s my beautiful baby here s the thing everybody loves a beautiful baby i mean i was a beautiful baby. here s me and my dad a couple days after i was born so in the world of product design the beautiful baby s like the concept car. it s the knockout you see it and you go oh my god i d buy that in a second. so why is it that this year s new cars look pretty much exactly like last year s new cars. what went wrong between the design studio and the factory today i don t want to talk about beautiful babies i want to talk about the awkward adolescence of design. those sort of. dorky teenage years where you re trying to figure out how the world works i m. going to start with an example from some work that we did on newborn health so here s a problem four million babies. around the world mostly in developing countries. die every year before their first birthday even before their first month of life it turns out half of those kids or about one point eight million newborns around the world would make it. if you could just keep them warm for the first three days. maybe the first week so this is a newborn intensive care unit in kathmandu nepal all of these kids in blankets belong in incubators something like this. this is a donated japanese atom incubator that we found in a nicu in kathmandu. this is what we want probably what happened is a hospital in japan upgraded their equipment and donated their old stuff to. to nepal the problem is. technicians without spare parts donations like this very quickly turn into junk. so this seemed like a problem that we could do something about keeping a baby warm for a week that. s not rocket science so. we got started we partnered with a leading medical research institution here in boston. we conducted months of user research overseas trying to think like designers human centered design let s figure. out what people want. we killed thousands of post it notes we made dozens of prototypes to get to this. so this is the neonurture infant incubator and this has a lot of smarts built into it and we felt great so the idea here is unlike the concept car we want to marry something beautiful with something that. and our idea is that this design would inspire manufacturers and other people of influence to take this model and run with it. here s the bad news. the only baby ever actually put inside the neonurture incubator. was this kid during a time magazine photo shoot. so recognition is fantastic we want design to get out for people to see it it won lots of awards. but it felt like a booby prize. we wanted to make beautiful things that are going to make the world a better place. 
and i don t think this kid was even in it long enough to get warm so it turns out that design for inspiration. doesn doesn t really i guess what i would say is for us for what i want to do it s either too slow. or it just doesn t work it s ineffective so really. i want to design for outcomes i don t want to make beautiful stuff i want to make the world a better place. so when we were designing neonurture we paid a lot of attention to the people who are going to use this thing for example poor families rural doctors overloaded nurses. even repair technicians we thought we had all our bases covered we d done everything right well it turns out there s this whole constellation of people who have to be involved in a product for it to be successful manufacturing. distribution regulation. michael free at path says you have to figure out who will choose use and pay the dues for a product like this and i have to ask the question that. vcs always ask sir what is your business and who is your customer who is our customer. well here s an example this is a bangladeshi hospital director. outside his facility it turns out he doesn t buy any of his equipment those decisions are made by the ministry of health. or by foreign donors and it just kind of shows up similarly here s a multinational. turns out they ve got to fish where the fish are. so it turns out that in emerging markets where the fish are are the emerging middle class of these countries diseases of affluence heart disease infertility. turns out that design for outcomes. in one aspect really means thinking about design for manufacture and distribution ok that was an important lesson second. we took that lesson and tried to push it into our next project so we started by finding a manufacturer an organization called mtts in vietnam. that manufactures newborn care technologies for southeast asia our other partner is east meets west an american foundation that distributes that technology to poor hospitals. around that region. we started with them saying well what do you want what s a problem you want to solve and they said let s work on newborn jaundice. so this is another one of these mind boggling global problems jaundice affects two thirds of newborns. around the world. of those newborns one in ten roughly if it s not treated the jaundice gets so severe that it leads to either. even die there. s one way to treat jaundice and that s what s called an exchange transfusion so as you can imagine that s expensive and a little bit dangerous. there is another cure it s very. technological it s very complex a little daunting you ve got to shine blue light on the kid. bright blue light on as much of the skin as you can cover. how is this a hard problem. i went to mit ok we ll figure that out. laughter so here s an example this is an overhead phototherapy device that s designed for american hospitals and here s how it s supposed to be used it s. over the baby illuminating a single patient take it out of an american hospital send it overseas to a crowded facility in asia here s how. actually used the effectiveness of phototherapy is a function of light intensity these dark blue squares show you where it s effective phototherapy. here s what it looks like under actual use so those kids on the edges aren t actually receiving effective phototherapy but without training. without some kind of light meter how would you know. we see other examples of problems like this here s a neonatal intensive care unit where moms come in to visit their babies. 
keep in mind that mom maybe just had a c section so that s already kind of a bummer. mom s visiting her kid she sees her baby naked lying under some blue lights looking kind of vulnerable. it s not uncommon for mom to put a blanket over the baby. from a phototherapy standpoint maybe not the best behavior in fact that sounds. what we ve learned is that there s no such thing as a dumb user. there are only dumb products we have to think like existentialists it s not the painting we would have painted it s the painting that we actually painted it s the use. designed for actual use how are people actually going to use this. so similarly when we think about our partner mtts they ve made some amazing technologies for treating newborn illnesses so here s an overhead warmer and a cpap. really rugged they ve treated fifty thousand kids in vietnam with this technology but here s the problem. every doctor in the world every hospital administrator has seen tv curse those er reruns. turns out they all know what a medical device is supposed to look like they want buck rogers they don t want. it sounds crazy it sounds dumb but there are actually hospitals who would rather have no equipment than something that looks cheap and crummy. so again if we want people to trust a device it has to look trustworthy so thinking about outcomes it turns out appearances matter. we took all that information together we tried this time to get it right and here s what we developed this is the firefly phototherapy device. except this time we didn t stop at the concept car. from the very beginning we started by talking to manufacturers our goal is to make a state of the art product that our partner mtts can actually manufacture our goal is. to study how they work the resources they have access to so that they can make this product so. single bassinet it only fits a single baby and the idea here is it s obvious how you ought to use this device if you try to put more than one kid in you re stacking them on top of each other. so. the idea here is you want to make it hard to use wrong in other words you want to make the right way to use it the easiest way to use it another example. again silly mom silly mom thinks her baby looks cold wants to put a blanket over the baby that s why we have lights above and below the baby in firefly so if mom. does put a blanket over the baby it s still receiving effective phototherapy from below. last story here. i ve got a friend in india who told me that you haven t really tested a piece of electronic technology for distribution in in asia until you ve trained a cockroach to climb in and pee on every single little component on the inside you think. it s funny i had a laptop in the peace corps and. had all these dead pixels on it and one day i looked in they were all dead ants that had gotten into my laptop and perished those poor ants. so with firefly what we did is the problem is electronics get hot. and you have to put in vents or fans to keep them cool in most products. we decided we can t put a do not enter sign next to the vent we actually got rid of all that stuff so firefly s totally sealed. these are the kinds of lessons as awkward as it was to be a pretty goofy teenager. much worse to be a frustrated designer so i was thinking what i really want to do is change the world i have to pay attention to manufacturing and distribution i have to pay attention to how people are actually going to use a device. 
i actually have to pay attention there s no excuse for failure i have to think like an existentialist i have to accept that there are no dumb users only dumb products. to ask ourselves hard questions are we designing for the world that we want. are we designing for the world that we have are we designing for the world that s coming whether we re ready or not i got into this business. designing products i ve since learned that if you really want to make a difference in the world you have to design outcomes and that s design that matters thank you<pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad>' Label(s): ['design' 'TEDx' 'health care'] Abstract: b'two years ago i was invited as an artist to participate in an exhibition commemorating one hundred years of islamic art in europe. the curator had only one condition i had to use the arabic script for my artwork. now as an artist a woman an arab. or a human being living in the world in two thousand and ten i only had one thing to say i wanted to say no. and in arabic to say no we say no and a thousand times no. so i decided to look for a thousand different noes. on everything ever produced under islamic or arab patronage in the past one thousand four hundred years from spain. spain to the borders of china. i collected my findings in a book placed them chronologically stating the name the patron the medium and the date. now the book sat on a small shelf next to the installation which stood three by seven meters in munich germany in september of two thousand and ten. two thousand and eleven the revolution started and life stopped for eighteen days. and on the 12th of february we naively celebrated on the streets of cairo believing. that the revolution had succeeded nine months later i found myself spraying messages in tahrir square. the reason for this act was this image. that i saw in my. newsfeed i did not feel that i could live in a city. where people were being killed and thrown like garbage on the street. so i took one no off a tombstone from the islamic museum in cairo and i added a message to it. no to military rule and i started spraying that on the streets in cairo but that led to a series of no coming out of the book like ammunition. and adding. messages to them and i started spraying them on the walls. so i ll be sharing some of these noes with you no to a new pharaoh because whoever comes next should understand that we will never be ruled by another dictator. no to violence ramy essam came. to tahrir on the second day of the revolution and he sat there with this. one month after mubarak stepped down this was his reward no to blinding heroes. 
ahmed harara lost his right eye on the 28th of january. and he lost his left eye on the 19th of november by two different snipers. no to killing in this case no to killing men of religion because sheikh ahmed adina refaat. was shot on december. behind. three orphans and a widow. no to burning books the institute of egypt was burned on december 17th a huge cultural loss no. to stripping the people and the blue bra is to remind us of our shame. as a nation when we allow a veiled woman to be stripped and beaten on the street and the footprint reads. long live a peaceful revolution because we will never retaliate with violence no to barrier walls on february 5th. concrete. roadblocks were set up in cairo to. protect the ministry of defense from protesters now speaking of walls. i want to share with you the story of one wall in cairo a group of artists decided. to paint a life size tank on a wall it s one to one. in front of this tank there s a man on a bicycle with a breadbasket on his head. to any passerby there s no problem with this visual. after acts of violence another artist came painted blood protesters being run over by the tank. message that read starting tomorrow i wear the new face the face of every martyr i exist. authority comes. paints the wall white leaves the tank and adds a message army and people one hand egypt for egyptians. another. the head of the military as a monster eating a maiden in a river of blood in front of the tank. authority comes paints the wall white leaves the tank leaves the suit and throws a bucket of black paint just to hide the face of the monster. so i come with my stencils and i spray them on the suit on the tank and on the whole wall and this is how it stands. with a final no i found neruda scribbled on a piece of paper in a field hospital in tahrir. and i decided to take a no of mamluk mausoleum in cairo the message reads. 
you can crush the flowers but you can t delay spring<pad><pad><pad><pa
d>' Label(s): ['TED Fellows' 'Middle East' 'Egypt'] Abstract: b'i m born in the western congo in an area. around here. and then went to university in kisangani and after i finished i went to this area the ituri forest. but what i ve been doing. when i was about. fourteen. i grew in my uncle s house and my father was. a soldier and my uncle was a fisherman and also a. poacher what i ve been doing from fourteen to seventeen was i was assisting them collecting. ivory tusk. meat and whatever they were killing poaching hunting in the forest bring it in the main city to get access to the market. but finally i got myself involved. around seventeen to twenty years i became myself poacher. and i wanted to do it because i believed to continue my studies. i wanted to go to university but my father was poor my uncle even. so i did it and for three to four years i went to university for three times i applied to. to biomedical science to be a doctor i didn t succeed i was having my inscriptions my admission to biology. and i said no way i m not doing it my family s poor my area don t have better health care i want to be a doctor to serve them. three times that means three years and i start getting old i say oh no i continue. so i did tropical ecology and plant botany. when i finished i went to ituri forest for my internship. it s where i really getting passion with what i m doing right up to now i m standing in front of you. doing botany and wildlife conservation. that time the ituri forest was created as. a forest reserve. with some animals and also plants and the training center there was built around the. scientific congolese staff and some american scientists also. so the okapi faunal reserve protects. number i. that is the largest number of elephant we have right now in protected area in congo it has also chimpanzees and it has been named. okapi faunal reserve because of this beautiful creature that is a forest giraffe i think you guys know it quite well here we have savanna. giraffe but through evolutions we have this forest giraffe that lives only in congo. it has also. some beautiful primates thirteen species highest diversity we can find in one single area in africa. and it has the ituri forest itself about one thousand three hundred species of plants so far known. i join the wildlife conservation society working there in one thousand nine hundred and ninety five but i started working with them as a student in. ninety one i was appointed as. teaching assistant at my university because i accomplished with honor. but i didn t like the way the instruction i got was very poor. and i wanted to be from to a training center and a research center with the end of. dictatorship regime of mobutu sese seko that most of you know. life became very very difficult and the work we have been doing was completely difficult to do and to achieve it. when kabila started. his movement to liberate congo so mobutu soldiers. started moving and retreated so they started fleeing from the east to the west and. okapi faunal reserve is there. so there was a road from. goma somewhere here and coming like this so. they might go through pass through okapi faunal reserve congo has five of world s richest sites. of protected area and okapi faunal reserve is one of them. so soldier was fleeing in the okapi faunal reserve. on their way they looted everything torture wars oh my god you can t believe. every person was looking his way where to go we don t know and it was for us young. 
the first time really we hear the language of war of guns. and even people who faced the rebellion of nineteen sixty. three after our independence they didn t believe what was happening. they were killing people they were doing whatever they want because they have power who have been doing that. young children. child soldiers you can t ask him how old he is because he has guns. but i was from the west working in the east i even that time. not speaking swahili. and when they came. they looted everything you can t speak lingala because lingala was from mobutu and everyone speaking lingala is soldier and i was from the same area to him. all my friends said we are leaving because we are a target. but i m not going to the east because there i don t know swahili. i stay. if i go i will be killed i can t go back to my area it s more. than one thousand kilometers i stayed after they looted everything. we have been doing research on botany and we have a small herbarium of four thousand five hundred. sheets of plants we cut we dry and we packed them we mounted them on a folder. purpose so that we start them for agriculture for medicine for whatever and for science. for. the study of the flora and the change of the forest. that is people moving around that s even pygmies and this is. a bright guy hard working person and pygmies i ve been working with him about ten years. and with soldiers they went to the forest for poaching elephants because he s pygmies he know how to track. elephant in the forest he have been attacked by. leopard and they abandon him in the forest. they came to told me i have to save him and what i did i gave him just antibiotics that we care for that. tuberculosis and fortunately i saved his life. and that was the language of the war everywhere there have. been constant extraction of mineral killing animals. the logging timbers and so on and what of important things i think all of you here have a cell phone. that mineral have killed a lot five five millions of congolese have gone because of this colombo tantalite they call it coltan. that they use it to. make cell phone and it have been in that area all over in congo extraction and good big business of the war. and what i did for the first war. after we have lost everything i have to save something even myself my life and life of the staff. i buried some of our new vehicle engine i buried it to save it and some. equipment went with them on the top of the canopy to save it he s not collecting plants he s going to save our equipment on the canopy and. with the material that s left because they wanted to destroy it to burn it they didn t understand it they didn t go to school. i packed it and that is me going to. hurrying to uganda try to save that four thousand material with people. carrying them on the bikes bicycles and after that we succeeded i housed that four thousand material at the. herbarium of makerere university and after the war i have been able to bring it back home so that we continue our studies. the second war came while we didn t expect it with friends we have been sitting and watching match. football and having some good music with. worldspace radio when it started i think. so it was so bad we heard that now from the east again the war started. and it s going fast this time i think kabila will go in place of as he did with mobutu and the reserve was target to the rebels. movements and two militia acting in the same area and competing for natural resources and there was no way to to work. they destroy everything. 
poaching oh no way and that s the. powerful men we have to meet and to talk to them what s the regulation of the reserve and what is the regulation of the parks and they can t do what. they are doing so we went to meet them that is. coltan extraction gold mining mining. so we started talking with them. convincing them that we are in a protected area there is regulations. that it s prohibited to do logging mining and poaching specifically. but they said you guys you think that soldier who are dying are. not important and your animals you are protecting are most important. we don t think so we have to do it because to let our movement advance i say no way you are not going to do it here. we started talking with them and i was negotiating. tried to protect our equipment tried to protect our staff and the villages of about one thousand. five hundred people. and we continued but i was doing that. negotiating with them sometimes we are having meeting and they are talking with jean pierre bemba with mbusa nyamwisi with kabila. and. i m there sometimes they talk to my own language that is lingala i hear it and what strategy they are doing what they are planning sometimes they are having helicopter to supply them with ammunition. and so on they used me to carry that. and i was doing counting what comes from where and where and where i had only this equipment. my satellite phone my computer and a plastic. solar panel that i hide it in the forest and every time daily after we have meeting what. compromise we have whatever i go i write a short email send it. i don t know how many people i had on my. my address i sent the message what is going about the progress of the war and what they are planning to do they started suspecting that what we do on the morning and. the afternoon it s on the news bbc rfi. something might be going. on and one day we went for a meeting. sorry. one day we went to meet the chief commander he had the same. iridium cell phone like me and he asked me do you know how to use this i said i have never seen it. i don t know and i had mine on my pocket so it was a chance that they trusted me a lot they didn. was not. looking on me so i was scared and when we finished the meeting i went to return it in the forest and i was. sending news. doing whatever reporting daily to u n to unesco to our institution in new york what have been going and for that they have been having big pressure to leave to free the area because there was no way. whatever they do it s known the same times. during the first two rebellions they killed all animals in the zoo we have a zoo of fourteen okapis and one of them was pregnant. and during the war after a week of heavy war. fighting in the area we succeeded we had the first okapi this is the only trouser and shirt i remind me of this this is not local population this is rebels. they are now happy sending the news that they have protected the okapi with the war because we sent the news that they are killing and poaching everywhere. after a week we celebrated the. birthday of that okapi they killed elephant just fifty meters to the. area where the zoo where okapi was born and i was mad i oppose it that they are now going to dissect it until i do my. my report and then i see. the chief commander and i succeeded the elephant just decayed and they just got the the tusks. what we are doing after that that was the. situation of the war we have to rebuild i had some money i was paid one hundred and fifty dollars i devoted. half of it. 
to rebuild the herbarium because we didn t have good infrastructure to start plants wildlife conservation society more dealing with plants. i started this with seventy dollars and start fundraising money to where i ve been going i had opportunity to go all over where herbarium for my african material is and. they supported me a bit. and i built this now it s doing work to train young congolese and also what one of the speciality we are doing. my design is tracking the global warming effect on biodiversity and what the impacts of the ituri forest is playing to uptake carbon. this is one of the study we are doing on forty hectares plot where we have tagged. trees and lianas from one centimeters and we are tracking them we have now data of about fifteen years. to see how that forest is contributing to the carbon reductions. and that is i think it. s difficult for me this is a very embarrassing talk i know i don t' Label(s): ['activism' 'Africa' 'animals'] # ================ Vectorization train_df["total_words"] = train_df["summaries"].str.split().str.len() vocabulary_size = train_df["total_words"].max() print(f"Vocabulary size: {vocabulary_size}") text_vectorizer = layers.TextVectorization( max_tokens=vocabulary_size, ngrams=2, output_mode="tf_idf" ) # `TextVectorization` layer needs to be adapted as per the vocabulary from our # training set. with tf.device("/CPU:0"): text_vectorizer.adapt(train_dataset.map(lambda text, label: text)) train_dataset = train_dataset.map( lambda text, label: (text_vectorizer(text), label), num_parallel_calls=auto ).prefetch(auto) validation_dataset = validation_dataset.map( lambda text, label: (text_vectorizer(text), label), num_parallel_calls=auto ).prefetch(auto) test_dataset = test_dataset.map( lambda text, label: (text_vectorizer(text), label), num_parallel_calls=auto ).prefetch(auto)Vocabulary size: 5761 # ================ Create a text classification model def make_model(): shallow_mlp_model = keras.Sequential( [ layers.Dense(512, activation="relu"), layers.Dense(256, activation="relu"), layers.Dense(lookup.vocabulary_size(), activation="sigmoid"), ] # More on why "sigmoid" has been used here in a moment. 
) return shallow_mlp_model_____no_output_____# ================ Train the model epochs = 20 shallow_mlp_model = make_model() shallow_mlp_model.compile( loss="binary_crossentropy", optimizer="adam", metrics=["categorical_accuracy"] ) history = shallow_mlp_model.fit( train_dataset, validation_data=validation_dataset, epochs=epochs ) def plot_result(item): plt.plot(history.history[item], label=item) plt.plot(history.history["val_" + item], label="val_" + item) plt.xlabel("Epochs") plt.ylabel(item) plt.title("Train and Validation {} Over Epochs".format(item), fontsize=14) plt.legend() plt.grid() plt.show() plot_result("loss") plot_result("categorical_accuracy")Epoch 1/20 4/4 [==============================] - 1s 107ms/step - loss: 7.5088 - categorical_accuracy: 0.0293 - val_loss: 1.5670 - val_categorical_accuracy: 0.2409 Epoch 2/20 4/4 [==============================] - 0s 68ms/step - loss: 1.0566 - categorical_accuracy: 0.2347 - val_loss: 0.4267 - val_categorical_accuracy: 0.0000e+00 Epoch 3/20 4/4 [==============================] - 0s 69ms/step - loss: 0.4026 - categorical_accuracy: 0.0098 - val_loss: 0.3820 - val_categorical_accuracy: 0.0146 Epoch 4/20 4/4 [==============================] - 0s 70ms/step - loss: 0.3552 - categorical_accuracy: 0.1296 - val_loss: 0.3489 - val_categorical_accuracy: 0.0000e+00 Epoch 5/20 4/4 [==============================] - 0s 67ms/step - loss: 0.2976 - categorical_accuracy: 0.1247 - val_loss: 0.3069 - val_categorical_accuracy: 0.2336 Epoch 6/20 4/4 [==============================] - 0s 67ms/step - loss: 0.2547 - categorical_accuracy: 0.3227 - val_loss: 0.2583 - val_categorical_accuracy: 0.0511 Epoch 7/20 4/4 [==============================] - 0s 67ms/step - loss: 0.2141 - categorical_accuracy: 0.1320 - val_loss: 0.2277 - val_categorical_accuracy: 0.1533 Epoch 8/20 4/4 [==============================] - 0s 68ms/step - loss: 0.1812 - categorical_accuracy: 0.3325 - val_loss: 0.2026 - val_categorical_accuracy: 0.1168 Epoch 9/20 4/4 [==============================] - 0s 68ms/step - loss: 0.1517 - categorical_accuracy: 0.2592 - val_loss: 0.1799 - val_categorical_accuracy: 0.1898 Epoch 10/20 4/4 [==============================] - 0s 67ms/step - loss: 0.1272 - categorical_accuracy: 0.4621 - val_loss: 0.1582 - val_categorical_accuracy: 0.2482 Epoch 11/20 4/4 [==============================] - 0s 67ms/step - loss: 0.1066 - categorical_accuracy: 0.4328 - val_loss: 0.1449 - val_categorical_accuracy: 0.2336 Epoch 12/20 4/4 [==============================] - 0s 70ms/step - loss: 0.0933 - categorical_accuracy: 0.4792 - val_loss: 0.1374 - val_categorical_accuracy: 0.2190 Epoch 13/20 4/4 [==============================] - 0s 67ms/step - loss: 0.0830 - categorical_accuracy: 0.4939 - val_loss: 0.1308 - val_categorical_accuracy: 0.2774 Epoch 14/20 4/4 [==============================] - 0s 68ms/step - loss: 0.0724 - categorical_accuracy: 0.5770 - val_loss: 0.1250 - val_categorical_accuracy: 0.2555 Epoch 15/20 4/4 [==============================] - 0s 68ms/step - loss: 0.0635 - categorical_accuracy: 0.5379 - val_loss: 0.1217 - val_categorical_accuracy: 0.2920 Epoch 16/20 4/4 [==============================] - 0s 67ms/step - loss: 0.0571 - categorical_accuracy: 0.5892 - val_loss: 0.1227 - val_categorical_accuracy: 0.2993 Epoch 17/20 4/4 [==============================] - 0s 69ms/step - loss: 0.0527 - categorical_accuracy: 0.5990 - val_loss: 0.1188 - val_categorical_accuracy: 0.2701 Epoch 18/20 4/4 [==============================] - 0s 67ms/step - loss: 0.0477 - 
categorical_accuracy: 0.5330 - val_loss: 0.1181 - val_categorical_accuracy: 0.3139 Epoch 19/20 4/4 [==============================] - 0s 71ms/step - loss: 0.0435 - categorical_accuracy: 0.5941 - val_loss: 0.1160 - val_categorical_accuracy: 0.2847 Epoch 20/20 4/4 [==============================] - 0s 69ms/step - loss: 0.0396 - categorical_accuracy: 0.5306 - val_loss: 0.1156 - val_categorical_accuracy: 0.3066 # ================ Evaluate the model _, categorical_acc = shallow_mlp_model.evaluate(test_dataset) print(f"Categorical accuracy on the test set: {round(categorical_acc * 100, 2)}%.")2/2 [==============================] - 0s 4ms/step - loss: 0.1290 - categorical_accuracy: 0.4015 Categorical accuracy on the test set: 40.15%. # Create a model for inference. model_for_inference = keras.Sequential([text_vectorizer, shallow_mlp_model]) print(test_df.shape) print(test_df.iloc[0:5,:]) # Create a small dataset just for demoing inference. inference_dataset = make_dataset(test_df.sample(5), is_train=False) text_batch, label_batch = next(iter(inference_dataset)) predicted_probabilities = model_for_inference.predict(text_batch) predicted_acc = 0 # Perform inference. for i, text in enumerate(text_batch[:5]): label = label_batch[i].numpy()[None, ...] #print(f"Abstract: {text[0]}") print(f"Label(s): {invert_multi_hot(label[0])}") predicted_proba = [proba for proba in predicted_probabilities[i]] top_3_labels = [ x for _, x in sorted( zip(predicted_probabilities[i], lookup.get_vocabulary()), key=lambda pair: pair[0], reverse=True, ) ][:3] print(f"Predicted Label(s): ({', '.join([label for label in top_3_labels])})") print(" ") predicted_acc = predicted_acc + \ len(set(invert_multi_hot(label[0])).intersection([label for label in top_3_labels])) print(f"number of correct labels is {predicted_acc}, prediction accuracy is {predicted_acc/15:.2f}") (137, 3) titles \ 1776 RamonaPierson_2011X.stm 1466 MeklitHadero_2015F.stm 318 CameronHerold_2009X.stm 278 BobMankoff_2013S.stm 1695 PaulPholeros_2013X.stm summaries \ 1776 i m actually going to share something with you... 1466 people often ask me about my influences. or as... 318 be willing to bet that i m the dumbest guy in ... 278 talking about designing humor which is sort of... 1695 the idea of eliminating poverty. is a great go... 
terms 1776 [TEDx, aging, culture] 1466 [TED Fellows, art, creativity] 318 [TEDx, business, education] 278 [design, humor, art] 1695 [TEDx, architecture, design] Label(s): ['design' 'art' 'technology'] Predicted Label(s): (culture, design, art) Label(s): ['culture' 'global issues' 'Buddhism'] Predicted Label(s): (culture, global issues, Buddhism) Label(s): ['business' 'Africa' 'entrepreneur'] Predicted Label(s): (global issues, design, culture) Label(s): ['design' 'activism' 'computers'] Predicted Label(s): (design, culture, entertainment) Label(s): ['TEDx' 'community' 'leadership'] Predicted Label(s): (community, TEDx, leadership) number of correct labels is 9, prediction accuracy is 0.60 # accuracy of random guess import scipy.special num_labels = 270 num_selected = 15 random_guess_accuracy = 0 for i in range(num_selected): a = i b = scipy.special.binom(num_selected,i) c = scipy.special.binom(num_labels - i, num_selected-i) d = scipy.special.binom(num_labels, num_selected) random_guess_accuracy = random_guess_accuracy + (a*b*c)/d print(f"expected correct labels of random guess is merely \ {random_guess_accuracy:.2f}, accuracy is {random_guess_accuracy/num_selected:.2f}") expected correct labels of random guess is merely 1.67, accuracy is 0.11 # Create the test dataset from decoded text of the kaldi model (model 2) decode_test_df = pd.read_csv("test_text_from_model2.csv",sep=",", error_bad_lines=False) decode_test_df_len = len(decode_test_df) #print(decode_test_df.shape) #print(decode_test_df) decode_test_df["terms"] = decode_test_df["terms"].apply(lambda x: literal_eval(x)) def make_testdataset(dataframe): labels = tf.ragged.constant(dataframe["terms"].values) print(labels) label_binarized = lookup(labels).numpy() #print(dataframe.shape) #print(label_binarized) #print(label_binarized.shape) dataset = tf.data.Dataset.from_tensor_slices( (dataframe["summaries"].values, label_binarized) ) dataset = dataset.map(unify_text_length, num_parallel_calls=auto).cache() return dataset.batch(batch_size) inference_dataset = make_testdataset(decode_test_df) text_batch, label_batch = next(iter(inference_dataset)) predicted_probabilities = model_for_inference.predict(text_batch) predicted_acc = 0 # Perform inference. for i, text in enumerate(text_batch[:]): label = label_batch[i].numpy()[None, ...] 
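# label is the multi-hot vector of this sample's true terms; [None, ...] adds a leading batch axis so that invert_multi_hot below can map it back to term names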
#print(f"Abstract: {text[0]}") print(f"Label(s): {invert_multi_hot(label[0])}") predicted_proba = [proba for proba in predicted_probabilities[i]] top_3_labels = [ x for _, x in sorted( zip(predicted_probabilities[i], lookup.get_vocabulary()), key=lambda pair: pair[0], reverse=True, ) ][:3] print(f"Predicted Label(s): ({', '.join([label for label in top_3_labels])})") print(" ") predicted_acc = predicted_acc + \ len(set(invert_multi_hot(label[0])).intersection([label for label in top_3_labels])) print(f"number of correct labels is {predicted_acc}, \ prediction accuracy is {predicted_acc/(decode_test_df_len*3):.2f}") <tf.RaggedTensor [[b'beauty', b'body language', b'design'], [b'business', b'education', b'health'], [b'entertainment', b'food', b'global issues'], [b'body language', b'entertainment', b'gaming'], [b'TED Fellows', b'activism', b'medicine'], [b'astronomy', b'history', b'innovation']]> Label(s): ['[UNK]' 'design'] Predicted Label(s): (culture, global issues, design) Label(s): ['business' 'health' 'education'] Predicted Label(s): (business, culture, economics) Label(s): ['[UNK]' 'global issues' 'entertainment'] Predicted Label(s): (entertainment, global issues, culture) Label(s): ['[UNK]' 'entertainment'] Predicted Label(s): (culture, entertainment, TEDx) Label(s): ['activism' 'TED Fellows' 'medicine'] Predicted Label(s): (culture, entertainment, health care) Label(s): ['history' 'astronomy' 'innovation'] Predicted Label(s): (business, architecture, collaboration) number of correct labels is 5, prediction accuracy is 0.28 </code>
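A quick cross-check on the random-guess baseline computed earlier in this notebook: if 15 labels are guessed uniformly at random (without replacement) from 270 candidates and compared against 15 pooled true labels, the overlap follows a hypergeometric distribution whose mean has the closed form n*K/N = 15*15/270 ≈ 0.83. In the standard hypergeometric pmf the second binomial coefficient is C(N - K, n - i), which is worth checking against the loop in the random-guess cell above. The sketch below is not part of the original notebook; the seed and the number of simulated draws are arbitrary, and it only compares the closed form with a direct simulation._____no_output_____
<code>
import numpy as np

# Closed-form mean of the hypergeometric overlap: E[X] = n * K / N
num_labels = 270   # candidate labels (N), as in the random-guess cell above
num_true = 15      # pooled true labels across the demo talks (K)
num_guess = 15     # pooled random guesses (n)
closed_form = num_guess * num_true / num_labels

# Direct simulation: guess num_guess distinct labels and count hits against the
# true set, taken as labels 0..num_true-1 without loss of generality
rng = np.random.default_rng(0)
true_set = set(range(num_true))
overlaps = [
    len(true_set & set(rng.choice(num_labels, num_guess, replace=False).tolist()))
    for _ in range(100_000)
]
print(f"closed form: {closed_form:.2f}, simulated: {np.mean(overlaps):.2f}")_____no_output_____
</code>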
{ "repository": "junyaogz/topic.recognition.td3-", "path": "src/5_train_and_test_model1_in_tf/train_model1_and_test.ipynb", "matched_keywords": [ "neuroscience", "evolution", "biology", "ecology" ], "stars": null, "size": 142559, "hexsha": "4813710397d6e89cbb083df1a93ff020e23bc54a", "max_line_length": 36472, "avg_line_length": 175.565270936, "alphanum_fraction": 0.7859272301 }
# Notebook from bryansho/PCOS_WGS_16S_metabolome Path: Revision/ANCOM/Metabolites/Metabolites_no_BAs.ipynb # Metabolites w/o bile acids Compare placebo v. letrozole and letrozole v. let-co-housed at time points 2 and 5. Description of data files: 1. mapping file = metadata 2. metabolites counts 3. metabolite index file_____no_output_____ <code> library(tidyverse) library(magrittr) source("/Users/cayla/ANCOM/scripts/ancom_v2.1.R")── Attaching packages ───────────────────────────────── tidyverse 1.3.1 ── ✔ ggplot2 3.3.3 ✔ purrr  0.3.4 ✔ tibble  3.1.2 ✔ dplyr  1.0.6 ✔ tidyr  1.1.3 ✔ stringr 1.4.0 ✔ readr  1.4.0 ✔ forcats 0.5.1 ── Conflicts ──────────────────────────────────── tidyverse_conflicts() ── ✖ dplyr::filter() masks stats::filter() ✖ dplyr::lag() masks stats::lag() Attaching package: ‘magrittr’ The following object is masked from ‘package:purrr’: set_names The following object is masked from ‘package:tidyr’: extract Attaching package: ‘nlme’ The following object is masked from ‘package:dplyr’: collapse Welcome to compositions, a package for compositional data analysis. Find an intro with "? compositions" Attaching package: ‘compositions’ The following objects are masked from ‘package:stats’: cor, cov, dist, var The following objects are masked from ‘package:base’: %*%, norm, scale, scale.default counts <- read_csv('https://github.com/bryansho/PCOS_WGS_16S_metabolome/raw/master/DESEQ2/Metabolites_No_BA/Known_Cutoff_no_BA.csv') head(counts, n=1) ── Column specification ────────────────────────────────────────────────── cols( .default = col_double() ) ℹ Use `spec()` for the full column specifications. counts$OTUs <- as.factor(counts$OTUs)_____no_output_____metadata <- read_csv('https://github.com/bryansho/PCOS_WGS_16S_metabolome/raw/master/DESEQ2/Metabolites_No_BA/Mapping_file_w_og.csv') head(metadata, n=1) ── Column specification ────────────────────────────────────────────────── cols( .default = col_character(), Mouse = col_double(), weight = col_double(), Lh = col_double(), Testosterone = col_double(), Weight_g = col_double(), observed_SVs1250 = col_double(), pielou_e1250 = col_double(), faith_pd1250 = col_double(), shannon1250 = col_double(), FBG = col_double() ) ℹ Use `spec()` for the full column specifications. 
metadata %<>% select(SampleID, Week, Category)_____no_output_____indices <- read_csv('https://github.com/bryansho/PCOS_WGS_16S_metabolome/raw/master/DESEQ2/Metabolites_No_BA/taxonomy_cutoff_no_BA.csv') head(indices, n=1) ── Column specification ────────────────────────────────────────────────── cols( OTUs = col_double(), Domain = col_character(), Phylum = col_character(), Class = col_character(), Order = col_character(), Family = col_character(), Genus = col_character(), Species = col_character() ) indices$OTUs <- as.factor(indices$OTUs) indices %<>% rename(Metabolite = Domain) indices[,3:8] <- NULL_____no_output_____# subset data and metadata meta.t2.PvL <- metadata %>% filter(Week == '2', Category == 'Placebo' | Category == 'Letrozole') t2.PvL <- counts %>% select(OTUs, any_of(meta.t2.PvL$SampleID)) %>% column_to_rownames('OTUs') meta.t2.LvLCH <- metadata %>% filter(Week == '2', Category == 'Co-L' | Category == 'Letrozole') t2.LvLCH <- counts %>% select(OTUs, any_of(meta.t2.LvLCH$SampleID)) %>% column_to_rownames('OTUs') meta.t5.PvL <- metadata %>% filter(Week == '5', Category == 'Placebo' | Category == 'Letrozole') t5.PvL <- counts %>% select(OTUs, any_of(meta.t5.PvL$SampleID)) %>% column_to_rownames('OTUs') meta.t5.LvLCH <- metadata %>% filter(Week == '5', Category == 'Co-L' | Category == 'Letrozole') t5.LvLCH <- counts %>% select(OTUs, any_of(meta.t5.LvLCH$SampleID)) %>% column_to_rownames('OTUs')_____no_output_____ </code> ## Time Point 2 ### Placebo v. Letrozole_____no_output_____ <code> # Data Preprocessing # feature_table is a df/matrix with features as rownames and samples in columns feature_table <- t2.PvL sample_var <- "SampleID" group_var <- "Category" out_cut <- 0.05 zero_cut <- 0.90 lib_cut <- 0 neg_lb <- TRUE prepro <- feature_table_pre_process(feature_table, meta.t2.PvL, sample_var, group_var, out_cut, zero_cut, lib_cut, neg_lb) # Preprocessed feature table feature_table1 <- prepro$feature_table # Preprocessed metadata meta_data1 <- prepro$meta_data # Structural zero info struc_zero1 <- prepro$structure_zeros _____no_output_____# Run ANCOM main_var <- "Category" p_adj_method <- "BH" # number of taxa > 10, therefore use Benjamini-Hochberg correction alpha <- 0.05 adj_formula <- NULL rand_formula <- NULL t_start <- Sys.time() res <- ANCOM(feature_table1, meta_data1, struc_zero1, main_var, p_adj_method, alpha, adj_formula, rand_formula) t_end <- Sys.time() t_end - t_start # write output to file # output contains the "W" statistic for each taxa - a count of the number of times # the null hypothesis is rejected for each taxa # detected_x are logicals indicating detection at specified FDR cut-off write_csv(res$out, "2021-07-26_Metabolites_T2_PvL_ANCOM_data.csv")_____no_output_____n_taxa <- ifelse(is.null(struc_zero1), nrow(feature_table1), sum(apply(struc_zero1, 1, sum) == 0)) res$fig + scale_y_continuous(sec.axis = sec_axis(~ . 
* 100 / (n_taxa-1), name = 'W proportion')) ggsave(filename = paste(lubridate::today(),'volcano_metabolites_T2_PvL.pdf',sep='_'), bg = 'transparent', device = 'pdf', dpi = 'retina')Saving 7 x 7 in image # save features with W > 0 non.zero <- res$fig$data %>% arrange(desc(y), desc(abs(x))) %>% left_join(indices, by = c('taxa_id' = 'OTUs')) %>% mutate(W.proportion = y/(n_taxa-1)) %>% # add W filter(W.proportion > 0) %>% rowid_to_column() write.csv(non.zero, paste(lubridate::today(),'NonZeroW_Features_metabolites_T2_PvL.csv',sep='_'))_____no_output_____# to find most significant taxa, I will sort the data # 1) y (W statistic) # 2) according to the absolute value of CLR mean difference sig <- res$fig$data %>% arrange(desc(y), desc(abs(x))) %>% left_join(indices, by = c('taxa_id' = 'OTUs')) %>% filter(y >= (0.7*n_taxa)) # keep significant taxa write.csv(sig, paste(lubridate::today(),'SigFeatures_metabolites_T2_PvL.csv',sep='_'))_____no_output_____# plot top 20 taxa sig %>% slice_head(n=20) %>% mutate(taxa_id = fct_reorder(taxa_id, (abs(x)))) %>% ggplot(aes(x, taxa_id)) + geom_point(aes(size = 1)) + theme_bw(base_size = 16) + guides(size = "none") + labs(x = 'CLR Mean Difference', y = NULL) ggsave(filename = paste(lubridate::today(),'Top20_metabolites_T2_PvL.pdf',sep='_'), bg = 'transparent', device = 'pdf', dpi = 'retina')Saving 7 x 7 in image </code> ### Letrozole v. Let-co-housed_____no_output_____ <code> # Data Preprocessing # feature_table is a df/matrix with features as rownames and samples in columns feature_table <- t2.LvLCH sample_var <- "SampleID" group_var <- "Category" out_cut <- 0.05 zero_cut <- 0.90 lib_cut <- 0 neg_lb <- TRUE prepro <- feature_table_pre_process(feature_table, meta.t2.LvLCH, sample_var, group_var, out_cut, zero_cut, lib_cut, neg_lb) # Preprocessed feature table feature_table2 <- prepro$feature_table # Preprocessed metadata meta_data2 <- prepro$meta_data # Structural zero info struc_zero2 <- prepro$structure_zeros _____no_output_____# Run ANCOM main_var <- "Category" p_adj_method <- "BH" # number of taxa > 10, therefore use Benjamini-Hochberg correction alpha <- 0.05 adj_formula <- NULL rand_formula <- NULL t_start <- Sys.time() res2 <- ANCOM(feature_table2, meta_data2, struc_zero2, main_var, p_adj_method, alpha, adj_formula, rand_formula) t_end <- Sys.time() t_end - t_start # write output to file # output contains the "W" statistic for each taxa - a count of the number of times # the null hypothesis is rejected for each taxa # detected_x are logicals indicating detection at specified FDR cut-off write_csv(res2$out, "2021-07-26_Metabolites_T2_LvLCH_ANCOM_data.csv")_____no_output_____n_taxa <- ifelse(is.null(struc_zero2), nrow(feature_table2), sum(apply(struc_zero2, 1, sum) == 0)) res2$fig + scale_y_continuous(sec.axis = sec_axis(~ . 
* 100 / (n_taxa-1), name = 'W proportion')) ggsave(filename = paste(lubridate::today(),'volcano_metabolites_T2_LvLCH.pdf',sep='_'), bg = 'transparent', device = 'pdf', dpi = 'retina')Saving 7 x 7 in image # save features with W > 0 non.zero <- res2$fig$data %>% arrange(desc(y), desc(abs(x))) %>% left_join(indices, by = c('taxa_id' = 'OTUs')) %>% mutate(W.proportion = y/(n_taxa-1)) %>% # add W filter(W.proportion > 0) %>% rowid_to_column() write.csv(non.zero, paste(lubridate::today(),'NonZeroW_Features_metabolites_T2_LvLCH.csv',sep='_'))_____no_output_____# to find most significant taxa, I will sort the data # 1) y (W statistic) # 2) according to the absolute value of CLR mean difference sig <- res2$fig$data %>% arrange(desc(y), desc(abs(x))) %>% left_join(indices, by = c('taxa_id' = 'OTUs')) %>% filter(y >= (0.7*n_taxa)) # keep significant taxa write.csv(sig, paste(lubridate::today(),'SigFeatures_metabolites_T2_LvLCH.csv',sep='_'))_____no_output_____# plot top 20 taxa sig %>% slice_head(n=20) %>% mutate(taxa_id = fct_reorder(taxa_id, (abs(x)))) %>% ggplot(aes(x, taxa_id)) + geom_point(aes(size = 1)) + theme_bw(base_size = 16) + guides(size = "none") + labs(x = 'CLR Mean Difference', y = NULL) ggsave(filename = paste(lubridate::today(),'Top20_metabolites_T2_LvLCH.pdf',sep='_'), bg = 'transparent', device = 'pdf', dpi = 'retina')Saving 7 x 7 in image </code> ## Time Point 5 ### Placebo v. Letrozole_____no_output_____ <code> # Data Preprocessing # feature_table is a df/matrix with features as rownames and samples in columns feature_table <- t5.PvL sample_var <- "SampleID" group_var <- "Category" out_cut <- 0.05 zero_cut <- 0.90 lib_cut <- 0 neg_lb <- TRUE prepro <- feature_table_pre_process(feature_table, meta.t5.PvL, sample_var, group_var, out_cut, zero_cut, lib_cut, neg_lb) # Preprocessed feature table feature_table3 <- prepro$feature_table # Preprocessed metadata meta_data3 <- prepro$meta_data # Structural zero info struc_zero3 <- prepro$structure_zeros _____no_output_____# Run ANCOM main_var <- "Category" p_adj_method <- "BH" # number of taxa > 10, therefore use Benjamini-Hochberg correction alpha <- 0.05 adj_formula <- NULL rand_formula <- NULL t_start <- Sys.time() res3 <- ANCOM(feature_table3, meta_data3, struc_zero3, main_var, p_adj_method, alpha, adj_formula, rand_formula) t_end <- Sys.time() t_end - t_start # write output to file # output contains the "W" statistic for each taxa - a count of the number of times # the null hypothesis is rejected for each taxa # detected_x are logicals indicating detection at specified FDR cut-off write_csv(res3$out, "2021-07-26_Metabolites_T5_PvL_ANCOM_data.csv")_____no_output_____n_taxa <- ifelse(is.null(struc_zero3), nrow(feature_table3), sum(apply(struc_zero3, 1, sum) == 0)) res3$fig + scale_y_continuous(sec.axis = sec_axis(~ . 
* 100 / (n_taxa-1), name = 'W proportion')) ggsave(filename = paste(lubridate::today(),'volcano_metabolites_T5_PvL.pdf',sep='_'), bg = 'transparent', device = 'pdf', dpi = 'retina')Saving 7 x 7 in image # save features with W > 0 non.zero <- res3$fig$data %>% arrange(desc(y), desc(abs(x))) %>% left_join(indices, by = c('taxa_id' = 'OTUs')) %>% mutate(W.proportion = y/(n_taxa-1)) %>% # add W filter(W.proportion > 0) %>% rowid_to_column() write.csv(non.zero, paste(lubridate::today(),'NonZeroW_Features_metabolites_T5_PvL.csv',sep='_'))_____no_output_____# to find most significant taxa, I will sort the data # 1) y (W statistic) # 2) according to the absolute value of CLR mean difference sig <- res3$fig$data %>% arrange(desc(y), desc(abs(x))) %>% left_join(indices, by = c('taxa_id' = 'OTUs')) %>% filter(y >= (0.7*n_taxa)) # keep significant taxa write.csv(sig, paste(lubridate::today(),'SigFeatures_metabolites_T5_PvL.csv',sep='_'))_____no_output_____# plot top 20 taxa sig %>% slice_head(n=20) %>% mutate(taxa_id = fct_reorder(taxa_id, (abs(x)))) %>% ggplot(aes(x, taxa_id)) + geom_point(aes(size = 1)) + theme_bw(base_size = 16) + guides(size = "none") + labs(x = 'CLR Mean Difference', y = NULL) ggsave(filename = paste(lubridate::today(),'Top20_metabolites_T5_PvL.pdf',sep='_'), bg = 'transparent', device = 'pdf', dpi = 'retina')Saving 7 x 7 in image </code> ### Letrozole v. Let-co-housed_____no_output_____ <code> # Data Preprocessing # feature_table is a df/matrix with features as rownames and samples in columns feature_table <- t5.LvLCH sample_var <- "SampleID" group_var <- "Category" out_cut <- 0.05 zero_cut <- 0.90 lib_cut <- 0 neg_lb <- TRUE prepro <- feature_table_pre_process(feature_table, meta.t5.LvLCH, sample_var, group_var, out_cut, zero_cut, lib_cut, neg_lb) # Preprocessed feature table feature_table4 <- prepro$feature_table # Preprocessed metadata meta_data4 <- prepro$meta_data # Structural zero info struc_zero4 <- prepro$structure_zeros _____no_output_____# Run ANCOM main_var <- "Category" p_adj_method <- "BH" # number of taxa > 10, therefore use Benjamini-Hochberg correction alpha <- 0.05 adj_formula <- NULL rand_formula <- NULL t_start <- Sys.time() res4 <- ANCOM(feature_table4, meta_data4, struc_zero4, main_var, p_adj_method, alpha, adj_formula, rand_formula) t_end <- Sys.time() t_end - t_start # write output to file # output contains the "W" statistic for each taxa - a count of the number of times # the null hypothesis is rejected for each taxa # detected_x are logicals indicating detection at specified FDR cut-off write_csv(res4$out, "2021-07-26_Metabolites_T5_LvLCH_ANCOM_data.csv")_____no_output_____n_taxa <- ifelse(is.null(struc_zero4), nrow(feature_table4), sum(apply(struc_zero4, 1, sum) == 0)) res4$fig + scale_y_continuous(sec.axis = sec_axis(~ . 
* 100 / (n_taxa-1), name = 'W proportion')) ggsave(filename = paste(lubridate::today(),'volcano_metabolites_T5_LvLCH.pdf',sep='_'), bg = 'transparent', device = 'pdf', dpi = 'retina')Saving 7 x 7 in image # save features with W > 0 non.zero <- res4$fig$data %>% arrange(desc(y), desc(abs(x))) %>% left_join(indices, by = c('taxa_id' = 'OTUs')) %>% mutate(W.proportion = y/(n_taxa-1)) %>% # add W filter(W.proportion > 0) %>% rowid_to_column() write.csv(non.zero, paste(lubridate::today(),'NonZeroW_Features_metabolites_T5_LvLCH.csv',sep='_'))_____no_output_____# to find most significant taxa, I will sort the data # 1) y (W statistic) # 2) according to the absolute value of CLR mean difference sig <- res4$fig$data %>% arrange(desc(y), desc(abs(x))) %>% left_join(indices, by = c('taxa_id' = 'OTUs')) %>% filter(y >= (0.7*n_taxa)) # keep significant taxa write.csv(sig, paste(lubridate::today(),'SigFeatures_metabolites_T5_LvLCH.csv',sep='_'))_____no_output_____# plot top 20 taxa sig %>% slice_head(n=20) %>% mutate(taxa_id = fct_reorder(taxa_id, (abs(x)))) %>% ggplot(aes(x, taxa_id)) + geom_point(aes(size = 1)) + theme_bw(base_size = 16) + guides(size = "none") + labs(x = 'CLR Mean Difference', y = NULL) ggsave(filename = paste(lubridate::today(),'Top20_metabolites_T5_LvLCH.pdf',sep='_'), bg = 'transparent', device = 'pdf', dpi = 'retina')Saving 7 x 7 in image </code>
{ "repository": "bryansho/PCOS_WGS_16S_metabolome", "path": "Revision/ANCOM/Metabolites/Metabolites_no_BAs.ipynb", "matched_keywords": [ "DESeq2" ], "stars": 3, "size": 614870, "hexsha": "4814932841a38b2701f44396dbabd6696def41e5", "max_line_length": 82892, "avg_line_length": 516.6974789916, "alphanum_fraction": 0.9289427033 }
# Notebook from ofou/course-content Path: tutorials/W3D2_HiddenDynamics/student/W3D2_Tutorial4.ipynb <a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W3D2_HiddenDynamics/student/W3D2_Tutorial4.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>[![Kaggle](https://kaggle.com/static/images/open-in-kaggle.svg)](https://kaggle.com/kernels/welcome?src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W3D2_HiddenDynamics/student/W3D2_Tutorial4.ipynb)_____no_output_____# Tutorial 4: 2D Kalman Filter **Week 3, Day 2: Hidden Dynamics** **By Neuromatch Academy** __Content creators:__ Caroline Haimerl and Byron Galbraith __Content reviewers:__ Jesse Livezey, Matt Krause, Michael Waskom, and Xaq Pitkow **Useful reference:** - Roweis, Ghahramani (1998): A unifying review of linear Gaussian Models - Bishop (2006): Pattern Recognition and Machine Learning **Acknowledgement** This tutorial is in part based on code originally created by Caroline Haimerl for Dr. Cristina Savin's *Probabilistic Time Series* class at the Center for Data Science, New York University_____no_output_____**Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** <p align='center'><img src='https://github.com/NeuromatchAcademy/widgets/blob/master/sponsors.png?raw=True'/></p>_____no_output_____We will edit the above video to end at 2:20_____no_output_____ <code> # @title Video 1: Introduction from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="BV1LK411J737", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="6f_51L3i5aQ", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out)_____no_output_____ </code> --- # Tutorial Objectives In the previous tutorials we got an intuition for the Kalman filter in one dimension. In this tutorial, we will examine the 2D Kalman filter and more of its mathematical foundations. 
In this tutorial, you will: * Review linear dynamical systems * Implement the Kalman filter * Explore how the Kalman filter can be used to smooth data from an eye-tracking experiment _____no_output_____ <code> import sys_____no_output_____!conda install -c conda-forge ipywidgets --yes_____no_output_____!conda install numpy matplotlib scipy requests --yes_____no_output_____# Install PyKalman (https://pykalman.github.io/) !pip install pykalman --quiet # Imports import numpy as np import matplotlib.pyplot as plt import pykalman from scipy import stats_____no_output_____#@title Figure settings import ipywidgets as widgets # interactive display %config InlineBackend.figure_format = 'retina' plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")_____no_output_____#@title Data retrieval and loading import io import os import hashlib import requests fname = "W2D3_mit_eyetracking_2009.npz" url = "https://osf.io/jfk8w/download" expected_md5 = "20c7bc4a6f61f49450997e381cf5e0dd" if not os.path.isfile(fname): try: r = requests.get(url) except requests.ConnectionError: print("!!! Failed to download data !!!") else: if r.status_code != requests.codes.ok: print("!!! Failed to download data !!!") elif hashlib.md5(r.content).hexdigest() != expected_md5: print("!!! Data download appears corrupted !!!") else: with open(fname, "wb") as fid: fid.write(r.content) def load_eyetracking_data(data_fname=fname): with np.load(data_fname, allow_pickle=True) as dobj: data = dict(**dobj) images = [plt.imread(io.BytesIO(stim), format='JPG') for stim in data['stimuli']] subjects = data['subjects'] return subjects, images_____no_output_____#@title Helper functions np.set_printoptions(precision=3) def plot_kalman(state, observation, estimate=None, label='filter', color='r-', title='LDS', axes=None): if axes is None: fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(16, 6)) ax1.plot(state[:, 0], state[:, 1], 'g-', label='true latent') ax1.plot(observation[:, 0], observation[:, 1], 'k.', label='data') else: ax1, ax2 = axes if estimate is not None: ax1.plot(estimate[:, 0], estimate[:, 1], color=color, label=label) ax1.set(title=title, xlabel='X position', ylabel='Y position') ax1.legend() if estimate is None: ax2.plot(state[:, 0], observation[:, 0], '.k', label='dim 1') ax2.plot(state[:, 1], observation[:, 1], '.', color='grey', label='dim 2') ax2.set(title='correlation', xlabel='latent', ylabel='measured') else: ax2.plot(state[:, 0], estimate[:, 0], '.', color=color, label='latent dim 1') ax2.plot(state[:, 1], estimate[:, 1], 'x', color=color, label='latent dim 2') ax2.set(title='correlation', xlabel='real latent', ylabel='estimated latent') ax2.legend() return ax1, ax2 def plot_gaze_data(data, img=None, ax=None): # overlay gaze on stimulus if ax is None: fig, ax = plt.subplots(figsize=(8, 6)) xlim = None ylim = None if img is not None: ax.imshow(img, aspect='auto') ylim = (img.shape[0], 0) xlim = (0, img.shape[1]) ax.scatter(data[:, 0], data[:, 1], c='m', s=100, alpha=0.7) ax.set(xlim=xlim, ylim=ylim) return ax def plot_kf_state(kf, data, ax): mu_0 = np.ones(kf.n_dim_state) mu_0[:data.shape[1]] = data[0] kf.initial_state_mean = mu_0 mu, sigma = kf.smooth(data) ax.plot(mu[:, 0], mu[:, 1], 'limegreen', linewidth=3, zorder=1) ax.scatter(mu[0, 0], mu[0, 1], c='orange', marker='>', s=200, zorder=2) ax.scatter(mu[-1, 0], mu[-1, 1], c='orange', marker='s', s=200, zorder=2)_____no_output_____ </code> --- # Section 1: Linear Dynamical System (LDS)_____no_output_____The below video will 
be edited to 0:33 - 1:09, and then 2:01 - 3:32 _____no_output_____ <code> # @title Video 2: Linear Dynamical Systems from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="BV1qa4y1a7B9", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="2SWh639YgEg", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out)_____no_output_____ </code> ## Kalman filter definitions: The latent state $s_t$ evolves as a stochastic linear dynamical system in discrete time, with a dynamics matrix $D$: $$s_t = Ds_{t-1}+w_t$$ Just as in the HMM, the structure is a Markov chain where the state at time point $t$ is conditionally independent of previous states given the state at time point $t-1$. Sensory measurements $m_t$ (observations) are noisy linear projections of the latent state: $$m_t = Hs_{t}+\eta_t$$ Both states and measurements have Gaussian variability, often called noise: 'process noise' $w_t$ for the states, and 'measurement' or 'observation noise' $\eta_t$ for the measurements. The initial state is also Gaussian distributed. These quantites have means and covariances: \begin{eqnarray} w_t & \sim & \mathcal{N}(0, Q) \\ \eta_t & \sim & \mathcal{N}(0, R) \\ s_0 & \sim & \mathcal{N}(\mu_0, \Sigma_0) \end{eqnarray} As a consequence, $s_t$, $m_t$ and their joint distributions are Gaussian. This makes all of the math analytically tractable using linear algebra, so we can easily compute the marginal and conditional distributions we will use for inferring the current state given the entire history of measurements. _**Please note**: we are trying to create uniform notation across tutorials. In some videos created in 2020, measurements $m_t$ were denoted $y_t$, and the Dynamics matrix $D$ was denoted $F$. We apologize for any confusion!_ _____no_output_____## Section 1.1: Sampling from a latent linear dynamical system The first thing we will investigate is how to generate timecourse samples from a linear dynamical system given its parameters. We will start by defining the following system:_____no_output_____ <code> # task dimensions n_dim_state = 2 n_dim_obs = 2 # initialize model parameters params = { 'D': 0.9 * np.eye(n_dim_state), # state transition matrix 'Q': np.eye(n_dim_obs), # state noise covariance 'H': np.eye(n_dim_state), # observation matrix 'R': 1.0 * np.eye(n_dim_obs), # observation noise covariance 'mu_0': np.zeros(n_dim_state), # initial state mean 'sigma_0': 0.1 * np.eye(n_dim_state), # initial state noise covariance }_____no_output_____ </code> **Coding note**: We used a parameter dictionary `params` above. As the number of parameters we need to provide to our functions increases, it can be beneficial to condense them into a data structure like this to clean up the number of inputs we pass in. 
The trade-off is that we have to know what is in our data structure to use those values, rather than looking at the function signature directly._____no_output_____### Exercise 1: Sampling from a linear dynamical system In this exercise you will implement the dynamics functions of a linear dynamical system to sample both a latent space trajectory (given parameters set above) and noisy measurements. _____no_output_____ <code> def sample_lds(n_timesteps, params, seed=0): """ Generate samples from a Linear Dynamical System specified by the provided parameters. Args: n_timesteps (int): the number of time steps to simulate params (dict): a dictionary of model paramters: (D, Q, H, R, mu_0, sigma_0) seed (int): a random seed to use for reproducibility checks Returns: ndarray, ndarray: the generated state and observation data """ n_dim_state = params['D'].shape[0] n_dim_obs = params['H'].shape[0] # set seed np.random.seed(seed) # precompute random samples from the provided covariance matrices # mean defaults to 0 mi = stats.multivariate_normal(cov=params['Q']).rvs(n_timesteps) eta = stats.multivariate_normal(cov=params['R']).rvs(n_timesteps) # initialize state and observation arrays state = np.zeros((n_timesteps, n_dim_state)) obs = np.zeros((n_timesteps, n_dim_obs)) ################################################################### ## TODO for students: compute the next state and observation values # Fill out function and remove raise NotImplementedError("Student excercise: compute the next state and observation values") ################################################################### # simulate the system for t in range(n_timesteps): # write the expressions for computing state values given the time step if t == 0: state[t] = ... else: state[t] = ... # write the expression for computing the observation obs[t] = ... return state, obs # Uncomment below to test your function # state, obs = sample_lds(100, params) # print('sample at t=3 ', state[3]) # plot_kalman(state, obs, title='sample')_____no_output_____ </code> [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D2_HiddenDynamics/solutions/W3D2_Tutorial4_Solution_d3736033.py) *Example output:* <img alt='Solution hint' align='left' width=1133 height=414 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W3D2_HiddenDynamics/static/W3D2_Tutorial4_Solution_d3736033_1.png> _____no_output_____### Interactive Demo: Adjusting System Dynamics To test your understanding of the parameters of a linear dynamical system, think about what you would expect if you made the following changes: 1. Reduce observation noise $R$ 2. Increase respective temporal dynamics $D$ Use the interactive widget below to vary the values of $R$ and $D$._____no_output_____ <code> #@title #@markdown Make sure you execute this cell to enable the widget! 
@widgets.interact(R=widgets.FloatLogSlider(1., min=-2, max=2), D=widgets.FloatSlider(0.9, min=0.0, max=1.0, step=.01)) def explore_dynamics(R=0.1, D=0.5): params = { 'D': D * np.eye(n_dim_state), # state transition matrix 'Q': np.eye(n_dim_obs), # state noise covariance 'H': np.eye(n_dim_state), # observation matrix 'R': R * np.eye(n_dim_obs), # observation noise covariance 'mu_0': np.zeros(n_dim_state), # initial state mean, 'sigma_0': 0.1 * np.eye(n_dim_state), # initial state noise covariance } state, obs = sample_lds(100, params) plot_kalman(state, obs, title='sample')_____no_output_____ </code> --- # Section 2: Kalman Filtering _____no_output_____ <code> # @title Video 3: Kalman Filtering from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="BV1UD4y1m7yV", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="VboZOV9QMOI", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out)_____no_output_____ </code> **VIDEO TO BE RE-RECORDED** to improve notation and explanation, using the Information form of the Gaussians for combining likelihood with prior_____no_output_____We want to infer the latent state variable $s_t$ given the measured (observed) variable $m_t$. $$P(s_t|m_1, ..., m_t, m_{t+1}, ..., m_T)\sim \mathcal{N}(\hat{\mu}_t, \hat{\Sigma_t})$$_____no_output_____First we obtain estimates of the latent state by running the filtering from $t=0,....T$._____no_output_____$$s_t^{\rm pred}\sim \mathcal{N}(\hat{\mu}_t^{\rm pred},\hat{\Sigma}_t^{\rm pred})$$ Where $\hat{\mu}_t^{\rm pred}$ and $\hat{\Sigma}_t^{\rm pred}$ are derived as follows: \begin{eqnarray} \hat{\mu}_1^{\rm pred} & = & D\hat{\mu}_{0} \\ \hat{\mu}_t^{\rm pred} & = & D\hat{\mu}_{t-1} \end{eqnarray} This is the prediction for $s_t$ obtained simply by taking the expected value of $s_{t-1}$ and projecting it forward one step using the transition matrix $D$. 
We do the same for the covariance, taking into account the noise covariance $Q$ and the fact that scaling a variable by $D$ scales its covariance $\Sigma$ as $D\Sigma D^T$: \begin{eqnarray} \hat{\Sigma}_0^{\rm pred} & = & D\hat{\Sigma}_{0}D^T+Q \\ \hat{\Sigma}_t^{\rm pred} & = & D\hat{\Sigma}_{t-1}D^T+Q \end{eqnarray} We then use a Bayesian update from the newest measurements to obtain $\hat{\mu}_t^{\rm filter}$ and $\hat{\Sigma}_t^{\rm filter}$ Project our prediction to observational space: $$m_t^{\rm pred}\sim \mathcal{N}(H\hat{\mu}_t^{\rm pred}, H\hat{\Sigma}_t^{\rm pred}H^T+R)$$ update prediction by actual data: \begin{eqnarray} s_t^{\rm filter} & \sim & \mathcal{N}(\hat{\mu}_t^{\rm filter}, \hat{\Sigma}_t^{\rm filter}) \\ \hat{\mu}_t^{\rm filter} & = & \hat{\mu}_t^{\rm pred}+K_t(m_t-H\hat{\mu}_t^{\rm pred}) \\ \hat{\Sigma}_t^{\rm filter} & = & (I-K_tH)\hat{\Sigma}_t^{\rm pred} \end{eqnarray} Kalman gain matrix: $$K_t=\hat{\Sigma}_t^{\rm pred}H^T(H\hat{\Sigma}_t^{\rm pred}H^T+R)^{-1}$$ We use the latent-only prediction to project it to the observational space and compute a correction proportional to the error $m_t-HDz_{t-1}$ between prediction and data. The coefficient of this correction is the Kalman gain matrix._____no_output_____**Interpretations** If measurement noise is small and dynamics are fast, then estimation will depend mostly on currently observed data. If the measurement noise is large, then the Kalman filter uses past observations as well, combining them as long as the underlying state is at least somewhat predictable._____no_output_____In order to explore the impact of filtering, we will use the following noisy oscillatory system:_____no_output_____ <code> # task dimensions n_dim_state = 2 n_dim_obs = 2 T=100 # initialize model parameters params = { 'D': np.array([[1., 1.], [-(2*np.pi/20.)**2., .9]]), # state transition matrix 'Q': np.eye(n_dim_obs), # state noise covariance 'H': np.eye(n_dim_state), # observation matrix 'R': 100.0 * np.eye(n_dim_obs), # observation noise covariance 'mu_0': np.zeros(n_dim_state), # initial state mean 'sigma_0': 0.1 * np.eye(n_dim_state), # initial state noise covariance } state, obs = sample_lds(T, params) plot_kalman(state, obs, title='sample')_____no_output_____ </code> ## Exercise 2: Implement Kalman filtering In this exercise you will implement the Kalman filter (forward) process. Your focus will be on writing the expressions for the Kalman gain, filter mean, and filter covariance at each time step (refer to the equations above)._____no_output_____ <code> def kalman_filter(data, params): """ Perform Kalman filtering (forward pass) on the data given the provided system parameters. 
Args: data (ndarray): a sequence of osbervations of shape(n_timesteps, n_dim_obs) params (dict): a dictionary of model paramters: (D, Q, H, R, mu_0, sigma_0) Returns: ndarray, ndarray: the filtered system means and noise covariance values """ # pulled out of the params dict for convenience D = params['D'] Q = params['Q'] H = params['H'] R = params['R'] n_dim_state = D.shape[0] n_dim_obs = H.shape[0] I = np.eye(n_dim_state) # identity matrix # state tracking arrays mu = np.zeros((len(data), n_dim_state)) sigma = np.zeros((len(data), n_dim_state, n_dim_state)) # filter the data for t, y in enumerate(data): if t == 0: mu_pred = params['mu_0'] sigma_pred = params['sigma_0'] else: mu_pred = D @ mu[t-1] sigma_pred = D @ sigma[t-1] @ D.T + Q ########################################################################### ## TODO for students: compute the filtered state mean and covariance values # Fill out function and remove raise NotImplementedError("Student excercise: compute the filtered state mean and covariance values") ########################################################################### # write the expression for computing the Kalman gain K = ... # write the expression for computing the filtered state mean mu[t] = ... # write the expression for computing the filtered state noise covariance sigma[t] = ... return mu, sigma # Uncomment below to test your function # filtered_state_means, filtered_state_covariances = kalman_filter(obs, params) # plot_kalman(state, obs, filtered_state_means, title="my kf-filter", # color='r', label='my kf-filter')_____no_output_____ </code> [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D2_HiddenDynamics/solutions/W3D2_Tutorial4_Solution_ceb931e6.py) *Example output:* <img alt='Solution hint' align='left' width=1134 height=414 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W3D2_HiddenDynamics/static/W3D2_Tutorial4_Solution_ceb931e6_0.png> _____no_output_____--- # Section 3: Fitting Eye Gaze Data_____no_output_____ <code> # @title Video 4: Fitting Eye Gaze Data from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="BV14t4y1X7eb", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="M7OuXmVWHGI", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out)_____no_output_____ </code> Tracking eye gaze is used in both experimental and user interface applications. Getting an accurate estimation of where someone is looking on a screen in pixel coordinates can be challenging, however, due to the various sources of noise inherent in obtaining these measurements. A main source of noise is the general accuracy of the eye tracker device itself and how well it maintains calibration over time. Changes in ambient light or subject position can further reduce accuracy of the sensor. 
Eye blinks introduce a different form of noise as interruptions in the data stream which also need to be addressed. Fortunately we have a candidate solution for handling noisy eye gaze data in the Kalman filter we just learned about. Let's look at how we can apply these methods to a small subset of data taken from the [MIT Eyetracking Database](http://people.csail.mit.edu/tjudd/WherePeopleLook/index.html) [[Judd et al. 2009](http://people.csail.mit.edu/tjudd/WherePeopleLook/Docs/wherepeoplelook.pdf)]. This data was collected as part of an effort to model [visual saliency](http://www.scholarpedia.org/article/Visual_salience) -- given an image, can we predict where a person is most likely going to look._____no_output_____ <code> # load eyetracking data subjects, images = load_eyetracking_data()_____no_output_____ </code> ## Interactive Demo: Tracking Eye Gaze We have three stimulus images and five different subjects' gaze data. Each subject fixated in the center of the screen before the image appeared, then had a few seconds to freely look around. You can use the widget below to see how different subjects visually scanned the presented image. A subject ID of -1 will show the stimulus images without any overlayed gaze trace. Note that the images are rescaled below for display purposes, they were in their original aspect ratio during the task itself._____no_output_____ <code> #@title #@markdown Make sure you execute this cell to enable the widget! @widgets.interact(subject_id=widgets.IntSlider(-1, min=-1, max=4), image_id=widgets.IntSlider(0, min=0, max=2)) def plot_subject_trace(subject_id=-1, image_id=0): if subject_id == -1: subject = np.zeros((3, 0, 2)) else: subject = subjects[subject_id] data = subject[image_id] img = images[image_id] fig, ax = plt.subplots() ax.imshow(img, aspect='auto') ax.scatter(data[:, 0], data[:, 1], c='m', s=100, alpha=0.7) ax.set(xlim=(0, img.shape[1]), ylim=(img.shape[0], 0))_____no_output_____ </code> ## Section 3.1: Fitting data with `pykalman` Now that we have data, we'd like to use Kalman filtering to give us a better estimate of the true gaze. Up until this point we've known the parameters of our LDS, but here we need to estimate them from data directly. We will use the `pykalman` package to handle this estimation using the EM algorithm, a useful and influential learning algorithm described briefly in the bonus material. Before exploring fitting models with `pykalman` it's worth pointing out some naming conventions used by the library: $$ \begin{align} D&: \texttt{transition_matrices} & Q &: \texttt{transition_covariance}\\ H &:\texttt{observation_matrices} & R &:\texttt{observation_covariance}\\ \mu_0 &: \texttt{initial_state_mean} & \Sigma_0 &: \texttt{initial_state_covariance} \end{align} $$_____no_output_____The first thing we need to do is provide a guess at the dimensionality of the latent state. Let's start by assuming the dynamics line-up directly with the observation data (pixel x,y-coordinates), and so we have a state dimension of 2. We also need to decide which parameters we want the EM algorithm to fit. In this case, we will let the EM algorithm discover the dynamics parameters i.e. the $D$, $Q$, $H$, and $R$ matrices. 
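As a quick aside (an illustrative sketch added here, not part of the original tutorial flow), the naming conventions above can be made concrete by constructing a fully specified `KalmanFilter` from known parameters instead of fitting them; this assumes the `params` dictionary from Section 1 is still in scope:
<code>
# Illustrative sketch only: map our params dict onto pykalman's argument names.
# No EM fitting here; every matrix is passed in explicitly.
kf_known = pykalman.KalmanFilter(
    transition_matrices=params['D'],             # D
    transition_covariance=params['Q'],           # Q
    observation_matrices=params['H'],            # H
    observation_covariance=params['R'],          # R
    initial_state_mean=params['mu_0'],           # mu_0
    initial_state_covariance=params['sigma_0'],  # sigma_0
)
# kf_known.filter(obs) would then mirror the forward pass we wrote in Exercise 2.
</code>
For the rest of this section, however, we let the EM algorithm estimate these matrices from the gaze data.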
We set up our `pykalman` `KalmanFilter` object with these settings using the code below._____no_output_____ <code> # set up our KalmanFilter object and tell it which parameters we want to # estimate np.random.seed(1) n_dim_obs = 2 n_dim_state = 2 kf = pykalman.KalmanFilter( n_dim_state=n_dim_state, n_dim_obs=n_dim_obs, em_vars=['transition_matrices', 'transition_covariance', 'observation_matrices', 'observation_covariance'] )_____no_output_____ </code> Because we know from the reported experimental design that subjects fixated in the center of the screen right before the image appears, we can set the initial starting state estimate $\mu_0$ as being the center pixel of the stimulus image (the first data point in this sample dataset) with a correspondingly low initial noise covariance $\Sigma_0$. Once we have everything set, it's time to fit some data._____no_output_____ <code> # Choose a subject and stimulus image subject_id = 1 image_id = 2 data = subjects[subject_id][image_id] # Provide the initial states kf.initial_state_mean = data[0] kf.initial_state_covariance = 0.1*np.eye(n_dim_state) # Estimate the parameters from data using the EM algorithm kf.em(data) print(f'D=\n{kf.transition_matrices}') print(f'Q =\n{kf.transition_covariance}') print(f'H =\n{kf.observation_matrices}') print(f'R =\n{kf.observation_covariance}')_____no_output_____ </code> We see that the EM algorithm has found fits for the various dynamics parameters. One thing you will note is that both the state and observation matrices are close to the identity matrix, which means the x- and y-coordinate dynamics are independent of each other and primarily impacted by the noise covariances. We can now use this model to smooth the observed data from the subject. In addition to the source image, we can also see how this model will work with the gaze recorded by the same subject on the other images as well, or even with different subjects. Below are the three stimulus images overlayed with recorded gaze in magenta and smoothed state from the filter in green, with gaze begin (orange triangle) and gaze end (orange square) markers. _____no_output_____ <code> #@title #@markdown Make sure you execute this cell to enable the widget! @widgets.interact(subject_id=widgets.IntSlider(1, min=0, max=4)) def plot_smoothed_traces(subject_id=0): subject = subjects[subject_id] fig, axes = plt.subplots(ncols=3, figsize=(18, 4)) for data, img, ax in zip(subject, images, axes): ax = plot_gaze_data(data, img=img, ax=ax) plot_kf_state(kf, data, ax)_____no_output_____ </code> ## Discussion questions: Why do you think one trace from one subject was sufficient to provide a decent fit across all subjects? If you were to go back and change the subject_id and/or image_id for when we fit the data using EM, do you think the fits would be different? We don't think the eye is exactly following a linear dynamical system. Nonetheless that is what we assumed for this exercise when we applied a Kalman filter. Despite the mismatch, these algorithms do perform well. Discuss what differences we might find between the true and assumed processes. What mistakes might be likely consequences of these differences? Finally, recall that the original task was to use this data to help devlop models of visual salience. While our Kalman filter is able to provide smooth estimates of observed gaze data, it's not telling us anything about *why* the gaze is going in a certain direction. 
In fact, if we sample data from our parameters and plot them, we get what amounts to a random walk._____no_output_____ <code> kf_state, kf_data = kf.sample(len(data)) ax = plot_gaze_data(kf_data, img=images[2]) plot_kf_state(kf, kf_data, ax)_____no_output_____ </code> This should not be surprising, as we have given the model no other observed data beyond the pixels at which gaze was detected. We expect there is some other aspect driving the latent state of where to look next other than just the previous fixation location. In summary, while the Kalman filter is a good option for smoothing the gaze trajectory itself, especially if using a lower-quality eye tracker or in noisy environmental conditions, a linear dynamical system may not be the right way to approach the much more challenging task of modeling visual saliency. _____no_output_____## Handling Eye Blinks_____no_output_____In the MIT Eyetracking Database, raw tracking data includes times when the subject blinked. The way this is represented in the data stream is via negative pixel coordinate values. We could try to mitigate these samples by simply deleting them from the stream, though this introduces other issues. For instance, if each sample corresponds to a fixed time step, and you arbitrarily remove some samples, the integrity of that consistent timestep between samples is lost. It's sometimes better to flag data as missing rather than to pretend it was never there at all, especially with time series data. Another solution is to used masked arrays. In `numpy`, a [masked array](https://numpy.org/doc/stable/reference/maskedarray.generic.html#what-is-a-masked-array) is an `ndarray` with an additional embedded boolean masking array that indicates which elements should be masked. When computation is performed on the array, the masked elements are ignored. Both `matplotlib` and `pykalman` work with masked arrays, and, in fact, this is the approach taken with the data we explore in this notebook. In preparing the dataset for this noteook, the original dataset was preprocessed to set all gaze data as masked arrays, with the mask enabled for any pixel with a negative x or y coordinate. 
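To make that preprocessing step concrete, here is a small illustrative sketch (with made-up sample values; it is not the actual preprocessing code used for this dataset) of how blink samples can be masked rather than deleted:
<code>
# Illustrative sketch: negative pixel coordinates mark blinks, so we mask them
# instead of deleting the samples and breaking the fixed time step.
raw_gaze = np.array([[512., 384.],
                     [515., 390.],
                     [ -1.,  -1.],   # blink sample
                     [520., 401.]])
masked_gaze = np.ma.masked_where(raw_gaze < 0, raw_gaze)
print(masked_gaze.mean(axis=0))  # masked entries are ignored in the computation
</code>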
_____no_output_____# Bonus_____no_output_____## Review on Gaussian joint, marginal and conditional distributions_____no_output_____Assume \begin{eqnarray} z & = & \begin{bmatrix}x \\y\end{bmatrix}\sim N\left(\begin{bmatrix}a \\b\end{bmatrix}, \begin{bmatrix}A & C \\C^T & B\end{bmatrix}\right) \end{eqnarray} then the marginal distributions are \begin{eqnarray} x & \sim & \mathcal{N}(a, A) \\ y & \sim & \mathcal{N}(b,B) \end{eqnarray} and the conditional distributions are \begin{eqnarray} x|y & \sim & \mathcal{N}(a+CB^{-1}(y-b), A-CB^{-1}C^T) \\ y|x & \sim & \mathcal{N}(b+C^TA^{-1}(x-a), B-C^TA^{-1}C) \end{eqnarray} *important take away: given the joint Gaussian distribution we can derive the conditionals*_____no_output_____## Kalman Smoothing_____no_output_____ <code> # @title Video 5: Kalman Smoothing and the EM Algorithm from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="BV14k4y1B79Q", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="4Ar2mYz1Nms", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out)_____no_output_____ </code> Obtain estimates by propagating from $y_T$ back to $y_0$ using results of forward pass ($\hat{\mu}_t^{\rm filter}, \hat{\Sigma}_t^{\rm filter}, P_t=\hat{\Sigma}_{t+1}^{\rm pred}$) \begin{eqnarray} s_t & \sim & \mathcal{N}(\hat{\mu}_t^{\rm smooth}, \hat{\Sigma}_t^{\rm smooth}) \\ \hat{\mu}_t^{\rm smooth} & = & \hat{\mu}_t^{\rm filter}+J_t(\hat{\mu}_{t+1}^{\rm smooth}-D\hat{\mu}_t^{\rm filter}) \\ \hat{\Sigma}_t^{\rm smooth} & = & \hat{\Sigma}_t^{\rm filter}+J_t(\hat{\Sigma}_{t+1}^{\rm smooth}-P_t)J_t^T \\ J_t & = & \hat{\Sigma}_t^{\rm filter}D^T P_t^{-1} \end{eqnarray} This gives us the final estimate for $z_t$. \begin{eqnarray} \hat{\mu}_t & = & \hat{\mu}_t^{\rm smooth} \\ \hat{\Sigma}_t & = & \hat{\Sigma}_t^{\rm smooth} \end{eqnarray}_____no_output_____### Exercise 3: Implement Kalman smoothing In this exercise you will implement the Kalman smoothing (backward) process. Again you will focus on writing the expressions for computing the smoothed mean, smoothed covariance, and $J_t$ values._____no_output_____ <code> def kalman_smooth(data, params): """ Perform Kalman smoothing (backward pass) on the data given the provided system parameters. 
Args: data (ndarray): a sequence of osbervations of shape(n_timesteps, n_dim_obs) params (dict): a dictionary of model paramters: (D, Q, H, R, mu_0, sigma_0) Returns: ndarray, ndarray: the smoothed system means and noise covariance values """ # pulled out of the params dict for convenience D= params['D'] Q = params['Q'] H = params['H'] R = params['R'] n_dim_state = D.shape[0] n_dim_obs = H.shape[0] # first run the forward pass to get the filtered means and covariances mu, sigma = kalman_filter(data, params) # initialize state mean and covariance estimates mu_hat = np.zeros_like(mu) sigma_hat = np.zeros_like(sigma) mu_hat[-1] = mu[-1] sigma_hat[-1] = sigma[-1] # smooth the data for t in reversed(range(len(data)-1)): sigma_pred = D@ sigma[t] @ D.T + Q # sigma_pred at t+1 ########################################################################### ## TODO for students: compute the smoothed state mean and covariance values # Fill out function and remove raise NotImplementedError("Student excercise: compute the smoothed state mean and covariance values") ########################################################################### # write the expression to compute the Kalman gain for the backward process J = ... # write the expression to compute the smoothed state mean estimate mu_hat[t] = ... # write the expression to compute the smoothed state noise covariance estimate sigma_hat[t] = ... return mu_hat, sigma_hat # Uncomment once the kalman_smooth function is complete # smoothed_state_means, smoothed_state_covariances = kalman_smooth(obs, params) # axes = plot_kalman(state, obs, filtered_state_means, color="r", # label="my kf-filter") # plot_kalman(state, obs, smoothed_state_means, color="b", # label="my kf-smoothed", axes=axes)_____no_output_____ </code> [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D2_HiddenDynamics/solutions/W3D2_Tutorial4_Solution_bed5fceb.py) *Example output:* <img alt='Solution hint' align='left' width=1134 height=414 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W3D2_HiddenDynamics/static/W3D2_Tutorial4_Solution_bed5fceb_0.png> _____no_output_____**Forward vs Backward** Now that we have implementations for both, let's compare their peformance by computing the MSE between the filtered (forward) and smoothed (backward) estimated states and the true latent state._____no_output_____ <code> print(f"Filtered MSE: {np.mean((state - filtered_state_means)**2):.3f}") print(f"Smoothed MSE: {np.mean((state - smoothed_state_means)**2):.3f}")_____no_output_____ </code> In this example, the smoothed estimate is clearly superior to the filtered one. This makes sense as the forward pass uses only the past measurements, whereas the backward pass can use future measurement too, correcting the forward pass estimates given all the data we've collected. So why would you ever use Kalman filtering alone, without smoothing? As Kalman filtering only depends on already observed data (i.e. the past) it can be run in a streaming, or on-line, setting. Kalman smoothing relies on future data as it were, and as such can only be applied in a batch, or off-line, setting. So use Kalman filtering if you need real-time corrections and Kalman smoothing if you are considering already-collected data. 
This online case is typically what the brain faces._____no_output_____## The Expectation-Maximization (EM) Algorithm_____no_output_____- want to maximize $\log p(m|\theta)$ - need to marginalize out latent state *(which is not tractable)* $$p(m|\theta)=\int p(m,s|\theta)dz$$ - add a probability distribution $q(s)$ which will approximate the latent state distribution $$\log p(m|\theta)\int_s q(s)dz$$ - can be rewritten as $$\mathcal{L}(q,\theta)+KL\left(q(s)||p(s|m),\theta\right)$$ - $\mathcal{L}(q,\theta)$ contains the joint distribution of $m$ and $s$ - $KL(q||p)$ contains the conditional distribution of $s|m$ #### Expectation step - parameters are kept fixed - find a good approximation $q(s)$: maximize lower bound $\mathcal{L}(q,\theta)$ with respect to $q(s)$ - (already implemented Kalman filter+smoother) #### Maximization step - keep distribution $q(s)$ fixed - change parameters to maximize the lower bound $\mathcal{L}(q,\theta)$ As mentioned, we have already effectively solved for the E-Step with our Kalman filter and smoother. The M-step requires further derivation, which is covered in the Appendix. Rather than having you implement the M-Step yourselves, let's instead turn to using a library that has already implemented EM for exploring some experimental data from cognitive neuroscience. _____no_output_____### The M-step for a LDS *(see Bishop, chapter 13.3.2 Learning in LDS)* Update parameters of the probability distribution *For the updates in the M-step we will need the following posterior marginals obtained from the Kalman smoothing results* $\hat{\mu}_t^{\rm smooth}, \hat{\Sigma}_t^{\rm smooth}$ $$ \begin{eqnarray} E(s_t) &=& \hat{\mu}_t \\ E(s_ts_{t-1}^T) &=& J_{t-1}\hat{\Sigma}_t+\hat{\mu}_t\hat{\mu}_{t-1}^T\\ E(s_ts_{t}^T) &=& \hat{\Sigma}_t+\hat{\mu}_t\hat{\mu}_{t}^T \end{eqnarray} $$ **Update parameters** Initial parameters $$ \begin{eqnarray} \mu_0^{\rm new}&=& E(s_0)\\ Q_0^{\rm new} &=& E(s_0s_0^T)-E(s_0)E(s_0^T) \\ \end{eqnarray} $$ Hidden (latent) state parameters $$ \begin{eqnarray} D^{\rm new} &=& \left(\sum_{t=2}^N E(s_ts_{t-1}^T)\right)\left(\sum_{t=2}^N E(s_{t-1}s_{t-1}^T)\right)^{-1} \\ Q^{\rm new} &=& \frac{1}{T-1} \sum_{t=2}^N E\big(s_ts_t^T\big) - D^{\rm new}E\big(s_{t-1}s_{t}^T\big) - E\big(s_ts_{t-1}^T\big)D^{\rm new}+D^{\rm new}E\big(s_{t-1}s_{t-1}^T\big)\big(D^{\rm new}\big)^{T}\\ \end{eqnarray} $$ Observable (measured) space parameters $$H^{\rm new}=\left(\sum_{t=1}^N y_t E(s_t^T)\right)\left(\sum_{t=1}^N E(s_t s_t^T)\right)^{-1}$$ $$R^{\rm new}=\frac{1}{T}\sum_{t=1}^Ny_ty_t^T-H^{\rm new}E(s_t)y_t^T-y_tE(s_t^T)H^{\rm new}+H^{\rm new}E(s_ts_t^T)H_{\rm new}$$_____no_output_____
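To connect these update formulas back to the code, below is a minimal sketch (added for illustration; it is not part of the original tutorial and only covers one of the updates) of the M-step for the observation matrix $H^{\rm new}$, computed from the E-step quantities returned by `kalman_smooth`. The remaining updates follow the same pattern, and this is roughly what `pykalman`'s EM routine did for us in Section 3.
<code>
# Minimal sketch of the M-step update for H, using E[s_t] = mu_hat[t] and
# E[s_t s_t^T] = sigma_hat[t] + outer(mu_hat[t], mu_hat[t]) from the smoother.
def m_step_H(data, mu_hat, sigma_hat):
    sum_ms = sum(np.outer(m, mu) for m, mu in zip(data, mu_hat))
    sum_ss = sum(sig + np.outer(mu, mu) for sig, mu in zip(sigma_hat, mu_hat))
    return sum_ms @ np.linalg.inv(sum_ss)

# Example usage (assuming `obs` and `params` from Section 2 are in scope):
# mu_hat, sigma_hat = kalman_smooth(obs, params)
# H_new = m_step_H(obs, mu_hat, sigma_hat)
</code>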
{ "repository": "ofou/course-content", "path": "tutorials/W3D2_HiddenDynamics/student/W3D2_Tutorial4.ipynb", "matched_keywords": [ "neuroscience" ], "stars": null, "size": 61076, "hexsha": "4814ca9e877bbb7d4b8e378870fe6e260642aa7b", "max_line_length": 620, "avg_line_length": 39.3277527366, "alphanum_fraction": 0.592376711 }
# Notebook from MatthiKrauss/qusco_school_2019_03_krotov_exercise Path: exercise_03_three_level_system.ipynb <img src="QuSCo_Logo_CMYK.jpg" alt="Here should be the qusco logo!" width="500"> ---_____no_output_____ <code> import numpy as np import scipy import matplotlib import matplotlib.pylab as plt import krotov import qutip from exercise_03_utils import *_____no_output_____ </code> ---_____no_output_____# Exercises [Model](#Model) [Exercise 3.1: Implementing the System ](#Exercise-3.1:-Implementing-the-System) [Exercise 3.2: Objective](#Exercise-3.2:-Objective) [Exercise 3.3: Shaping our guess pulses](#Exercise-3.3:-Shaping-our-guess-pulses) [Exercise 3.4: Specifying the pulse options](#Exercise-3.4:-Specifying-the-pulse-options) [Exercise 3.5: The optimization](#Exercise-3.5:-The-optimization) [Exercise 3.6: Analysing the results](#Exercise-3.6:-Analysing-the-results) [Bonus Exercise: Adding dissipation](#Bonus-exercise)_____no_output_____ --- _____no_output_____# Model _____no_output_____$\newcommand{tr}[0]{\operatorname{tr}} \newcommand{diag}[0]{\operatorname{diag}} \newcommand{abs}[0]{\operatorname{abs}} \newcommand{pop}[0]{\operatorname{pop}} \newcommand{aux}[0]{\text{aux}} \newcommand{opt}[0]{\text{opt}} \newcommand{tgt}[0]{\text{tgt}} \newcommand{init}[0]{\text{init}} \newcommand{lab}[0]{\text{lab}} \newcommand{rwa}[0]{\text{rwa}} \newcommand{bra}[1]{\langle#1\vert} \newcommand{ket}[1]{\vert#1\rangle} \newcommand{Bra}[1]{\left\langle#1\right\vert} \newcommand{Ket}[1]{\left\vert#1\right\rangle} \newcommand{Braket}[2]{\left\langle #1\vphantom{#2} \mid #2\vphantom{#1}\right\rangle} \newcommand{Ketbra}[2]{\left\vert#1\vphantom{#2} \right\rangle \hspace{-0.2em} \left\langle #2\vphantom{#1}\right\vert} \newcommand{e}[1]{\mathrm{e}^{#1}} \newcommand{op}[1]{\hat{#1}} \newcommand{Op}[1]{\hat{#1}} \newcommand{dd}[0]{\,\text{d}} \newcommand{Liouville}[0]{\mathcal{L}} \newcommand{DynMap}[0]{\mathcal{E}} \newcommand{identity}[0]{\mathbf{1}} \newcommand{Norm}[1]{\lVert#1\rVert} \newcommand{Abs}[1]{\left\vert#1\right\vert} \newcommand{avg}[1]{\langle#1\rangle} \newcommand{Avg}[1]{\left\langle#1\right\rangle} \newcommand{AbsSq}[1]{\left\vert#1\right\vert^2} \newcommand{Re}[0]{\operatorname{Re}} \newcommand{Im}[0]{\operatorname{Im}} \newcommand{toP}[0]{\omega_{12}} \newcommand{toS}[0]{\omega_{23}} \newcommand{oft}[0]{\left(t\right)}$ Our model consists of a "Lambda system" as shown below. These levels interact with two pulses with the base frequency $\omega_{\mathrm{P}}$ ("Pump"-pulse) and $\omega_{\mathrm{S}}$ ("Stokes"-pulse), respectively. These pulses have time-dependent envelopes \begin{align*} \epsilon_{\mathrm{P}}(t) &= \frac{\Omega_{\mathrm{P}}^{(1)}(t)}{\mu_{12}}\cos({\omega_{\mathrm{P}}}t) +\frac{\Omega_{\mathrm{P}}^{(2)}(t)}{\mu_{12}}\sin({\omega_{\mathrm{P}}}t) \\ \epsilon_{\mathrm{S}}(t) &= \frac{\Omega_{\mathrm{S}}^{(1)}(t)}{\mu_{23}}\cos({\omega_{\mathrm{S}}}t) +\frac{\Omega_{\mathrm{S}}^{(2)}(t)}{\mu_{23}}\sin({\omega_{\mathrm{S}}}t), \end{align*} With the coupling strength $\mu_{ij}$ between the levels $i$ and $j$. The frequencies are chosen, such that they are close to the transition frequencies $\ket{1}\rightarrow\ket{2}$ ($\omega_{12}$) and $\ket{3} \rightarrow\ket{2}$ ($\omega_{32}$). 
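To make the role of the two quadratures concrete, here is a short illustrative sketch (the envelope and coupling values are made up for this illustration; only the relation between the quadratures and the complex control is taken from the equations above) that reconstructs the lab-frame pump field:
<code>
# Illustrative sketch only: build the lab-frame pump field from its two
# quadrature envelopes. All numerical values here are assumptions.
import numpy as np
t = np.linspace(0., 5., 500)
mu_12, omega_P = 1.0, 9.5         # assumed coupling strength and carrier frequency
Omega_P1 = 0.5 * np.ones_like(t)  # in-phase envelope Omega_P^(1)(t)
Omega_P2 = 0.2 * np.ones_like(t)  # quadrature envelope Omega_P^(2)(t)
eps_P = (Omega_P1 / mu_12) * np.cos(omega_P * t) \
      + (Omega_P2 / mu_12) * np.sin(omega_P * t)
# In the rotating frame these two envelopes combine into the complex control
# Omega_P(t) = Omega_P^(1)(t) + 1j * Omega_P^(2)(t), whose real and imaginary
# parts are optimized independently below.
</code>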
To represent the evolution in the rotating frame, we use the free evolution operator \begin{equation*} U_{0} = \begin{pmatrix} \e{-i(\omega_2-\omega_{\mathrm{P}})t} & 0 & 0 \\ 0 & \e{-i \omega_2 t} & 0 \\ 0 & 0 & \e{-i(\omega_2-\omega_{\mathrm{S}})t} \end{pmatrix}, \end{equation*} with $\omega_2 = E_2/\hbar$ the frequency fo the energy level $\ket{2}$. With this we can transform the Hamiltonian of the system \begin{equation*} \hat{H} = \hat{H}_{0} + \hat{H}_{1} = \begin{pmatrix} E_1 & 0 & 0 \\ 0 & E_2 & 0 \\ 0 & 0 & E_3 \end{pmatrix} - \begin{pmatrix} 0 & \mu_{12}\epsilon_{\mathrm{P}}(t) & 0 \\ \mu_{12}\epsilon_{\mathrm{P}}(t) & 0 & \mu_{23}\epsilon_{\mathrm{P}}(t) \\ 0 & \mu_{23}\epsilon_{\mathrm{P}}(t) & 0 \end{pmatrix}, \end{equation*} to \begin{equation*} \hat{H}' = \hbar \begin{pmatrix} -\Delta_{\mathrm{P}} & \Omega^{\ast}_{\mathrm{P}}(t) & 0 \\ \Omega_{\mathrm{P}}(t) & 0 & \Omega^{\ast}_{\mathrm{S}}(t) \\ 0 & \Omega_{\mathrm{S}}(t) & -\Delta_{\mathrm{S}} \end{pmatrix}, \end{equation*} with $\Delta_{\mathrm{P}} = E_1 + \omega_{\mathrm{P}} - E_2$ and $\Delta_{\mathrm{S}} = E_3 + \omega_{\mathrm{S}} - E_2$. The envelopes become complex with $\Omega_{\mathrm{P}} = \Omega^{(1)}_{\mathrm{P}} + i\Omega^{(2)}_{\mathrm{P}}$ and $\Omega_{\mathrm{S}} = \Omega^{(1)}_{\mathrm{S}} + i\Omega^{(2)}_{\mathrm{S}}$. In the following, we will optimize the real and imaginary part of $\Omega_{\mathrm{S}}$ and $\Omega_{\mathrm{P}}$ independently. _____no_output_____<img src="tikzpics/energylevels.png" alt="Lambda system considered in this notebook" width="500">_____no_output_____---_____no_output_____# Exercise 3.1: Implementing the System_____no_output_____&nbsp;&nbsp;&nbsp; **a)** Set up H0 as described above._____no_output_____ <code> #Parameters E1 = 0. E2 = 10. E3 = 5. ω_P = 9.5 ω_S = 4.5 Ω_init = 5. 
tlist = np.linspace(0.,5,500)_____no_output_____H0 = '---'_____no_output_____ </code> &nbsp;&nbsp;&nbsp; **b)** Set up the real and imaginary part of $\mathrm{H}_{1P}$ and $\mathrm{H}_{1S}$ according to the definition above._____no_output_____ <code> # Qutip objects holding the real and imaginary part of the Hamiltonian H_1P H1P_re = '---' H1P_im = '---' # initial functions, which will later be controls ΩP_re = lambda t, args: Ω_init ΩP_im = lambda t, args: Ω_init_____no_output_____# Qutip objects holding the real and imaginary part of the Hamiltonian H_1S H1S_re = '---' H1S_im = '---' # initial functions, which will later be controls ΩS_re = lambda t, args: Ω_init ΩS_im = lambda t, args: Ω_init_____no_output_____ </code> &nbsp;&nbsp;&nbsp; **c)** Specify the initial $\big(\Psi_0 = \ket{1}\big)$ and the target state $\big(\Psi_1 = \ket{3}\big)$ for the optimization._____no_output_____ <code> """Initial and target states""" psi0 = '---' psi1 = '---' _____no_output_____#q1c.hint()_____no_output_____ </code> &nbsp;&nbsp;&nbsp; **d)** Define the overall Hamiltonian by combining the partial Hamiltonians defined in *b)* with the corresponding control._____no_output_____ <code> """Final Hamiltonian""" Ham = [H0, "..."]_____no_output_____#q1d.hint()_____no_output_____ </code> &nbsp;&nbsp;&nbsp; **e)** Finally, specify the projectors $\hat{P}_i = \ket{i}\bra{i}$._____no_output_____ <code> proj1 = '---' proj2 = '---' proj3 = '---'_____no_output_____ </code> ------_____no_output_____# Exercise 3.2: Objective_____no_output_____As already mentioned in the first notebook, krotov's [`optimize_pulses`](https://krotov.readthedocs.io/en/stable/API/krotov.optimize.html#krotov.optimize.optimize_pulses) method takes so-called objectives. These hold all the information about the goal of the optimization. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Define the objective corresponding to our optimization goal._____no_output_____ <code> objective = '---'_____no_output_____#q2.hint()_____no_output_____ </code> ---_____no_output_____# Exercise 3.3: Shaping our guess pulses_____no_output_____In order to demonstrate Krotov’s optimization method, we choose an initial guess consisting of two low-intensity, real Blackman pulses which are temporally disjoint. These should look like the following pulses: <img src="plots/ex2_guess_pulse.png" alt="Here should actually be the Guess Pulses...Ask your advisor if you see this text..." width="700">_____no_output_____&nbsp;&nbsp;&nbsp; **a)** Write two functions, which will be used as guess pulses for the real and imaginary controls. Try to reproduce the pulses in the plots above. &nbsp;&nbsp;&nbsp; *Hint1: Krotov's [blackman function](https://krotov.readthedocs.io/en/stable/API/krotov.shapes.html#krotov.shapes.blackman) in the krotov.shapes module might be useful here.* &nbsp;&nbsp;&nbsp; *Hint2: If you have everything set up, go on to b) and finally plot your results in c). &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; If it does not look as intended, come back to a).* &nbsp;&nbsp;&nbsp; *Note: The functions return another function. That returned function is the one used for calculating the control!*_____no_output_____ <code> def shape_field_real('--arguments-you-need--'): #Note that the function needs to take 2 arguments.
#You can omitt the 'args' one def field_shaped(t, args): ### how should the field look like pass ### insert the function here that calculates it return field_shaped _____no_output_____def shape_field_imag('--arguments-you-need--'): #Note, that the function needs to take 2 arguments. #You can omitt the 'args' one def field_shaped(t, args): ### how should the field look like pass ### insert the function here that calculates it return field_shaped_____no_output_____#q3a.hint()_____no_output_____ </code> &nbsp;&nbsp;&nbsp; **b)** When done, assign the functions to the individual parts of the Hamiltonian_____no_output_____ <code> Ham[1][1] = '...' Ham[2][1] = '...' Ham[3][1] = '...' Ham[4][1] = '...'_____no_output_____ </code> &nbsp;&nbsp;&nbsp; **c)** Verify that everything works as expected by executing the cell below. &nbsp;&nbsp;&nbsp; *Note: You might want to choose more expressive titles in the `plot_pulse` routine*_____no_output_____ <code> def plot_pulse(pulse, tlist, ax, title): if callable(pulse): pulse = np.array([pulse(t, args=None) for t in tlist]) ax.plot(tlist, pulse) ax.set_xlabel('time') ax.set_ylabel('pulse amplitude') ax.set_title(title) fig, ax = plt.subplots(2,2) plot_pulse(Ham[1][1], tlist, ax[0,0], title='Ham[1][1]') plot_pulse(Ham[2][1], tlist, ax[0,1], title='Ham[2][1]') plot_pulse(Ham[3][1], tlist, ax[1,0], title='Ham[3][1]') plot_pulse(Ham[4][1], tlist, ax[1,1], title='Ham[4][1]') plt.tight_layout() plt.show(fig)_____no_output_____ </code> After having set up everything, let's see how good our guess is! Therefore we need to simulate the dynamics of the pulse. &nbsp;&nbsp;&nbsp; **d)** Use the [`mesolve` function](https://krotov.readthedocs.io/en/stable/API/krotov.objectives.html#krotov.objectives.Objective.mesolve) of your objective to calculate the resulting populations of the individual states. &nbsp;&nbsp;&nbsp; Make sure you give the right expectation operators to the function!_____no_output_____ <code> guess_dynamics = '---'_____no_output_____#q3d.hint()_____no_output_____ </code> Let's see what comes out:_____no_output_____ <code> fig, ax = plt.subplots() ax.plot(guess_dynamics.times, guess_dynamics.expect[0], label='Projector 1') ax.plot(guess_dynamics.times, guess_dynamics.expect[1], label='Projector 2') ax.plot(guess_dynamics.times, guess_dynamics.expect[2], label='Projector 3') ax.legend() ax.set_xlabel('time') ax.set_ylabel('population') plt.show(fig)_____no_output_____ </code> &nbsp;&nbsp;&nbsp; **e)** Does this make sense?_____no_output_____---_____no_output_____# Exercise 3.4: Specifying the pulse options_____no_output_____Now that our Hamiltonian is completely set up and the objective for our optimization is clear, we need to specify the parameters for the krotov algorithm. Therefore, we need to set the pulse options for the optimization._____no_output_____First of all, we define the pulse shape. This needs to be between 0 and 1 and should go to 0 at the beginning and at the end of the time interval, which we take into account. As a first guess, we choose:_____no_output_____ <code> def update_shape(t): """Scales the Krotov methods update of the pulse value at the time t""" return krotov.shapes.flattop(t,0.,5.,t_rise=.0001,func='sinsq')_____no_output_____ </code> &nbsp;&nbsp;&nbsp; **a)** Play around with the `t_rise` parameter and plot the update shape with the following cell. Choose a resonable value. 
&nbsp;&nbsp;&nbsp; You can later also play around with that and see how this changes your optimization._____no_output_____ <code> fig, ax = plt.subplots() ax.plot(tlist, np.vectorize(update_shape)(tlist)) ax.set_xlabel('time') ax.set_ylabel('Update shape') plt.show(fig)_____no_output_____ </code> Now let us continue and define the pulse options. Unfortunately, $\lambda_a$ was estimated very cautiously and might lead to very slow convergence. &nbsp;&nbsp;&nbsp; **b)** Are you aware of how to change $\lambda_a$? Carefully do that and play around with it after the first optimization in the next exercise._____no_output_____ <code> opt_lambda = 100 pulse_options = { Ham[1][1]: dict(lambda_a=opt_lambda, update_shape=update_shape), Ham[2][1]: dict(lambda_a=opt_lambda, update_shape=update_shape), Ham[3][1]: dict(lambda_a=opt_lambda, update_shape=update_shape), Ham[4][1]: dict(lambda_a=opt_lambda, update_shape=update_shape) }_____no_output_____#q4b.hint()_____no_output_____ </code> ---_____no_output_____# Exercise 3.5: The optimization_____no_output_____Finally, we can use krotov's optimize_pulses with all the information we built up in the previous examples. Fill in the missing values, which are indicated by `'###############'`. Proceed as follows: &nbsp;&nbsp;&nbsp; **a)** Recall the structure of the function by using the [docs](https://krotov.readthedocs.io/en/stable/API/krotov.optimize.html#krotov.optimize.optimize_pulses). &nbsp;&nbsp;&nbsp; **b)** Which functional (and therefore which `chi_constructor`) do we need here? &nbsp;&nbsp;&nbsp; Check the corresponding section in [Krotov's method](https://krotov.readthedocs.io/en/stable/06_krotovs_method.html#functionals) and choose from the [functionals module](https://krotov.readthedocs.io/en/stable/API/krotov.functionals.html). &nbsp;&nbsp;&nbsp; **c)** What do the values for the `check_convergence` and `iter_stop` arguments mean? &nbsp;&nbsp;&nbsp; Make a reasonable choice here. &nbsp;&nbsp;&nbsp; **d)** Maybe your optimization takes quite some time! Adjust the relevant parameters to obtain better convergence (and thus better results in less time). &nbsp;&nbsp;&nbsp; However, take care that the changes you make are reasonable (maybe we want to optimize for an experiment).
_____no_output_____ <code> oct_result = krotov.optimize_pulses( '#######a#######', '#######a#######', '#######a#######', propagator=krotov.propagators.expm, # chi_constructor='#######b#######', # info_hook=krotov.info_hooks.print_table( J_T='#######b#######', unicode=True, ), check_convergence=krotov.convergence.Or( krotov.convergence.value_below('#######c#######', name='J_T'), krotov.convergence.delta_below('#######c#######'), krotov.convergence.check_monotonic_error, ), iter_stop='#######c#######', ) _____no_output_____#q5b.hint()_____no_output_____#q5c.hint()_____no_output_____#q5d.hint()_____no_output_____oct_result_____no_output_____ </code> ---_____no_output_____# Exercise 3.6: Analysing the results_____no_output_____So now let's see how our solution looks._____no_output_____&nbsp;&nbsp;&nbsp; **a)** Get the resulting objectives from the [oct_result](https://krotov.readthedocs.io/en/stable/API/krotov.result.html) and use mesolve to simulate the dynamics under the optimized pulse (as in 3d)._____no_output_____ <code> opt_dynamics = '-------'_____no_output_____#q6a.hint()_____no_output_____ </code> After simulating the optimized dynamics, we can plot them via_____no_output_____ <code> fig, ax = plt.subplots() ax.plot(opt_dynamics.times, opt_dynamics.expect[0], label='Projector 1') ax.plot(opt_dynamics.times, opt_dynamics.expect[1], label='Projector 2') ax.plot(opt_dynamics.times, opt_dynamics.expect[2], label='Projector 3') ax.legend() ax.set_xlabel('time') ax.set_ylabel('population') plt.show(fig)_____no_output_____ </code> --- Now we can also extract the optimized pulses and plot their amplitudes and phases. To do this, you can use the following function, which takes the real and the imaginary part of the pulse and plots the amplitude and the phase:_____no_output_____ <code> def plot_pulse_amplitude_and_phase(pulse_real, pulse_imaginary,tlist): ax1 = plt.subplot(211) ax2 = plt.subplot(212) amplitudes = [np.sqrt(x*x + y*y) for x,y in zip(pulse_real,pulse_imaginary)] phases = [np.arctan2(y,x)/np.pi for x,y in zip(pulse_real,pulse_imaginary)] ax1.plot(tlist,amplitudes) ax1.set_xlabel('time') ax1.set_ylabel('pulse amplitude') ax2.plot(tlist,phases) ax2.set_xlabel('time') ax2.set_ylabel('pulse phase (π)') plt.show() _____no_output_____ </code> --- &nbsp;&nbsp;&nbsp; **b)** Plot the optimized controls, which are contained in the [oct_result](https://krotov.readthedocs.io/en/stable/API/krotov.result.html)._____no_output_____ <code> print("pump pulse amplitude and phase:") plot_pulse_amplitude_and_phase( "--real-pump-controls--", "--imag-pump-controls--", tlist ) _____no_output_____ print("Stokes pulse amplitude and phase:") plot_pulse_amplitude_and_phase( "--real-stokes-controls--", "--imag-stokes-controls--", tlist ) _____no_output_____ </code> ---_____no_output_____---_____no_output_____# Bonus exercise In a more realistic physical system, we often have to deal with dissipation. Let's, for example, consider a spontaneous decay in level $\ket{2}$. This can be a good approximation if the levels $\ket{1}$ and $\ket{3}$ have a decay time which is much longer than the duration of the pulse. To prevent the "loss" of population from level $\ket{2}$, we will need to "tell" Krotov that $\ket{2}$ should be avoided. We can do this by adding a phenomenological decay $-i\gamma\ket{2}\bra{2}$ to the Hamiltonian. Add the dissipation with a loss of 0.5 to the Hamiltonian and do the optimization again!
*Hint: Since we now have a non-hermitian Hamiltonian, we can no longer use QuTiP's mesolve routine for the dynamics! Therefore, you need to use `obj.propagate` instead. You can use the code below:*_____no_output_____ <code> dynamics = "--Add-objective-here--".propagate( tlist, propagator=krotov.propagators.expm, e_ops='--List-of-projectors--' ) states = "--Add-objective-here--".propagate( tlist, propagator=krotov.propagators.expm )_____no_output_____ </code> You can also use the following block to plot the norm and its loss over time: _____no_output_____ <code> state_norm = lambda i: states.states[i].norm() states_norm = np.vectorize(state_norm) fig, ax = plt.subplots() ax.plot(states.times, states_norm(np.arange(len(states.states)))) ax.set_title('Norm loss', fontsize = 15) ax.set_xlabel('time') ax.set_ylabel('state norm') plt.show(fig)_____no_output_____ </code>
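Once you have the propagated `states`, a quick sanity check (not part of the original exercise, and assuming the target level $\ket{3}$ corresponds to `qutip.basis(3, 2)`) is to look at how much population actually ends up in $\ket{3}$ despite the loss:_____no_output_____ <code>
import qutip

# Population of |3> in the final propagated state. With the non-hermitian loss
# term the total norm drops below one, so this also reflects the lost population.
final_state = states.states[-1]
proj3 = qutip.ket2dm(qutip.basis(3, 2))
pop3 = qutip.expect(proj3, final_state)
print(f"population in |3> at final time: {pop3:.4f}")
_____no_output_____ </code>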
{ "repository": "MatthiKrauss/qusco_school_2019_03_krotov_exercise", "path": "exercise_03_three_level_system.ipynb", "matched_keywords": [ "evolution" ], "stars": 2, "size": 29036, "hexsha": "4816033c42f8723cc5aa8e8d40f1405b0b0b68ad", "max_line_length": 296, "avg_line_length": 28.8628230616, "alphanum_fraction": 0.5419823667 }
# Notebook from daekeun-ml/aws-deepcomposer-samples Path: Lab 2/GAN.ipynb ## Introduction_____no_output_____This tutorial is a brief introduction to music generation using **Generative Adversarial Networks** (**GAN**s). The goal of this tutorial is to train a machine learning model using a dataset of Bach compositions so that the model learns to add accompaniments to a single track input melody. In other words, if the user provides a single piano track of a song such as "twinkle twinkle little star", the GAN model would add three other piano tracks to make the music sound more Bach-inspired. The proposed algorithm consists of two competing networks: a generator and a critic (discriminator). A generator is a deep neural network that learns to create new synthetic data that resembles the distribution of the dataset on which it was trained. A critic is another deep neural network that is trained to differentiate between real and synthetic data. The generator and the critic are trained in alternating cycles such that the generator learns to produce more and more realistic data (Bach-like music in this use case) while the critic iteratively gets better at learning to differentiate real data (Bach music) from the synthetic ones. As a result, the quality of music produced by the generator gets more and more realistic with time._____no_output_____![High level WGAN-GP architecture](images/dgan.png "WGAN-GP architecture")_____no_output_____## Dependencies First, let's import all of the python packages we will use throughout the tutorial. _____no_output_____ <code> # Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved. # Permission is hereby granted, free of charge, to any person obtaining a copy of # this software and associated documentation files (the "Software"), to deal in # the Software without restriction, including without limitation the rights to # use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of # the Software, and to permit persons to whom the Software is furnished to do so. # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS # FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR # COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER # IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN # CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # Create the environment !conda update --all --y !pip install tensorflow-gpu==1.14.0 !pip install numpy==1.16.4 !pip install pretty_midi !pip install pypianoroll !pip install music21 !pip install seaborn !pip install --ignore-installed moviepy_____no_output_____# IMPORTS import os import numpy as np from PIL import Image import logging import pypianoroll import scipy.stats import pickle import music21 from IPython import display import matplotlib.pyplot as plt # Configure Tensorflow import tensorflow as tf print(tf.__version__) tf.logging.set_verbosity(tf.logging.ERROR) tf.enable_eager_execution() # Use this command to make a subset of GPUS visible to the jupyter notebook. 
os.environ['CUDA_VISIBLE_DEVICES'] = '0' os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID" # Utils library for plotting, loading and saving midi among other functions from utils import display_utils, metrics_utils, path_utils, inference_utils, midi_utils LOGGER = logging.getLogger("gan.train") %matplotlib inline_____no_output_____ </code> ## Configuration_____no_output_____Here we configure paths to retrieve our dataset and save our experiments._____no_output_____ <code> root_dir = './Experiments' # Directory to save checkpoints model_dir = os.path.join(root_dir,'2Bar') # JSP: 229, Bach: 19199 # Directory to save pianorolls during training train_dir = os.path.join(model_dir, 'train') # Directory to save checkpoint generated during training check_dir = os.path.join(model_dir, 'preload') # Directory to save midi during training sample_dir = os.path.join(model_dir, 'sample') # Directory to save samples generated during inference eval_dir = os.path.join(model_dir, 'eval') os.makedirs(train_dir, exist_ok=True) os.makedirs(eval_dir, exist_ok=True) os.makedirs(sample_dir, exist_ok=True) _____no_output_____ </code> ## Data Preparation ### Dataset summary In this tutorial, we use the [`JSB-Chorales-dataset`](http://www-etud.iro.umontreal.ca/~boulanni/icml2012), comprising 229 chorale snippets. A chorale is a hymn that is usually sung with a single voice playing a simple melody and three lower voices providing harmony. In this dataset, these voices are represented by four piano tracks. Let's listen to a song from this dataset._____no_output_____ <code> display_utils.playmidi('./original_midi/MIDI-0.mid')_____no_output_____ </code> ### Data format - piano roll_____no_output_____For the purpose of this tutorial, we represent music from the JSB-Chorales dataset in the piano roll format. **Piano roll** is a discrete representation of music which is intelligible by many machine learning algorithms. Piano rolls can be viewed as a two-dimensional grid with "Time" on the horizontal axis and "Pitch" on the vertical axis. A one or zero in any particular cell in this grid indicates if a note was played or not at that time for that pitch. Let us look at a few piano rolls in our dataset. In this example, a single piano roll track has 32 discrete time steps and 128 pitches. We see four piano rolls here, each one representing a separate piano track in the song._____no_output_____<img src="images/pianoroll2.png" alt="Dataset summary" width="800"> You might notice this representation looks similar to an image. While the sequence of notes is often the natural way that people view music, many modern machine learning models instead treat music as images and leverage existing techniques within the computer vision domain. You will see such techniques used in our architecture later in this tutorial._____no_output_____**Why 32 time steps?** For the purpose of this tutorial, we sample two non-empty bars (https://en.wikipedia.org/wiki/Bar_(music)) from each song in the JSB-Chorales dataset. A **bar** (or **measure**) is a unit of composition and contains four beats for songs in our particular dataset (our songs are all in 4/4 time) : We’ve found that using a resolution of four time steps per beat captures enough of the musical detail in this dataset. This yields... $$ \frac{4\;timesteps}{1\;beat} * \frac{4\;beats}{1\;bar} * \frac{2\;bars}{1} = 32\;timesteps $$ Let us now load our dataset as a numpy array. Our dataset comprises 229 samples of 4 tracks (all tracks are piano). 
Each sample is a 32 time-step snippet of a song, so our dataset has a shape of... (num_samples, time_steps, pitch_range, tracks) = (229, 32, 128, 4)._____no_output_____ <code> training_data = np.load('./dataset/train.npy') print(training_data.shape)_____no_output_____ </code> Let's see a sample of the data we'll feed into our model. The four graphs represent the four tracks._____no_output_____ <code> display_utils.show_pianoroll(training_data)_____no_output_____ </code> ### Load data _____no_output_____We now create a Tensorflow dataset object from our numpy array to feed into our model. The dataset object helps us feed batches of data into our model. A batch is a subset of the data that is passed through the deep learning network before the weights are updated. Batching data is necessary in most training scenarios as our training environment might not be able to load the entire dataset into memory at once._____no_output_____ <code> #Number of input data samples in a batch BATCH_SIZE = 64 #Shuffle buffer size for shuffling data SHUFFLE_BUFFER_SIZE = 1000 #Preloads PREFETCH_SIZE batches so that there is no idle time between batches PREFETCH_SIZE = 4_____no_output_____def prepare_dataset(filename): """Load the samples used for training.""" data = np.load(filename) data = np.asarray(data, dtype=np.float32) # {-1, 1} print('data shape = {}'.format(data.shape)) dataset = tf.data.Dataset.from_tensor_slices(data) dataset = dataset.shuffle(SHUFFLE_BUFFER_SIZE).repeat() dataset = dataset.batch(BATCH_SIZE, drop_remainder=True) dataset = dataset.prefetch(PREFETCH_SIZE) return dataset dataset = prepare_dataset('./dataset/train.npy')_____no_output_____ </code> ## Model architecture In this section, we will walk through the architecture of the proposed GAN. The model consists of two networks, a generator and a critic. These two networks work in a tight loop as following: * Generator: 1. The generator takes in a batch of single-track piano rolls (melody) as the input and generates a batch of multi-track piano rolls as the output by adding accompaniments to each of the input music tracks. 2. The critic then takes these generated music tracks and predicts how far it deviates from the real data present in your training dataset. 3. This feedback from the critic is used by the generator to update its weights. * Critic: As the generator gets better at creating better music accompaniments using the feedback from the critic, the critic needs to be retrained as well. 1. Train the critic with the music tracks just generated by the generator as fake inputs and an equivalent number of songs from the original dataset as the real input. * Alternate between training these two networks until the model converges and produces realistic music, beginning with the critic on the first iteration. We use a special type of GAN called the **Wasserstein GAN with Gradient Penalty** (or **WGAN-GP**) to generate music. While the underlying architecture of a WGAN-GP is very similar to vanilla variants of GAN, WGAN-GPs help overcome some of the commonly seen defects in GANs such as the vanishing gradient problem and mode collapse (see appendix for more details). 
Note our "critic" network is more generally called a "discriminator" network in the more general context of vanilla GANs._____no_output_____### Generator_____no_output_____The generator is adapted from the U-Net architecture (a popular CNN that is used extensively in the computer vision domain), consisting of an “encoder” that maps the single track music data (represented as piano roll images) to a relatively lower dimensional “latent space“ and a ”decoder“ that maps the latent space back to multi-track music data. Here are the inputs provided to the generator: **Single-track piano roll input**: A single melody track of size (32, 128, 1) => (TimeStep, NumPitches, NumTracks) is provided as the input to the generator. **Latent noise vector**: A latent noise vector z of dimension (2, 8, 512) is also passed in as input and this is responsible for ensuring that there is a distinctive flavor to each output generated by the generator, even when the same input is provided. Notice from the figure below that the encoding layers of the generator on the left side and decoder layer on on the right side are connected to create a U-shape, thereby giving the name U-Net to this architecture._____no_output_____<img src="images/dgen.png" alt="Generator architecture" width="800">_____no_output_____In this implementation, we build the generator following a simple four-level Unet architecture by combining `_conv2d`s and `_deconv2d`, where `_conv2d` compose the contracting path and `_deconv2d` forms the expansive path. _____no_output_____ <code> def _conv2d(layer_input, filters, f_size=4, bn=True): """Generator Basic Downsampling Block""" d = tf.keras.layers.Conv2D(filters, kernel_size=f_size, strides=2, padding='same')(layer_input) d = tf.keras.layers.LeakyReLU(alpha=0.2)(d) if bn: d = tf.keras.layers.BatchNormalization(momentum=0.8)(d) return d def _deconv2d(layer_input, pre_input, filters, f_size=4, dropout_rate=0): """Generator Basic Upsampling Block""" u = tf.keras.layers.UpSampling2D(size=2)(layer_input) u = tf.keras.layers.Conv2D(filters, kernel_size=f_size, strides=1, padding='same')(u) u = tf.keras.layers.BatchNormalization(momentum=0.8)(u) u = tf.keras.layers.ReLU()(u) if dropout_rate: u = tf.keras.layers.Dropout(dropout_rate)(u) u = tf.keras.layers.Concatenate()([u, pre_input]) return u def build_generator(condition_input_shape=(32, 128, 1), filters=64, instruments=4, latent_shape=(2, 8, 512)): """Buld Generator""" c_input = tf.keras.layers.Input(shape=condition_input_shape) z_input = tf.keras.layers.Input(shape=latent_shape) d1 = _conv2d(c_input, filters, bn=False) d2 = _conv2d(d1, filters * 2) d3 = _conv2d(d2, filters * 4) d4 = _conv2d(d3, filters * 8) d4 = tf.keras.layers.Concatenate(axis=-1)([d4, z_input]) u4 = _deconv2d(d4, d3, filters * 4) u5 = _deconv2d(u4, d2, filters * 2) u6 = _deconv2d(u5, d1, filters) u7 = tf.keras.layers.UpSampling2D(size=2)(u6) output = tf.keras.layers.Conv2D(instruments, kernel_size=4, strides=1, padding='same', activation='tanh')(u7) # 32, 128, 4 generator = tf.keras.models.Model([c_input, z_input], output, name='Generator') return generator_____no_output_____ </code> Let us now dive into each layer of the generator to see the inputs/outputs at each layer._____no_output_____ <code> # Models generator = build_generator() generator.summary()_____no_output_____ </code> ### Critic (Discriminator)_____no_output_____The goal of the critic is to provide feedback to the generator about how realistic the generated piano rolls are, so that the generator can learn to produce more 
realistic data. The critic provides this feedback by outputting a scalar that represents how “real” or “fake” a piano roll is. Since the critic tries to classify data as “real” or “fake”, it is not very different from commonly used binary classifiers. We use a simple architecture for the critic, composed of four convolutional layers and a dense layer at the end._____no_output_____<img src="images/ddis.png" alt="Discriminator architecture" width="800">_____no_output_____ <code> def _build_critic_layer(layer_input, filters, f_size=4): """ This layer decreases the spatial resolution by 2: input: [batch_size, in_channels, H, W] output: [batch_size, out_channels, H/2, W/2] """ d = tf.keras.layers.Conv2D(filters, kernel_size=f_size, strides=2, padding='same')(layer_input) # Critic does not use batch-norm d = tf.keras.layers.LeakyReLU(alpha=0.2)(d) return d def build_critic(pianoroll_shape=(32, 128, 4), filters=64): """WGAN critic.""" condition_input_shape = (32,128,1) groundtruth_pianoroll = tf.keras.layers.Input(shape=pianoroll_shape) condition_input = tf.keras.layers.Input(shape=condition_input_shape) combined_imgs = tf.keras.layers.Concatenate(axis=-1)([groundtruth_pianoroll, condition_input]) d1 = _build_critic_layer(combined_imgs, filters) d2 = _build_critic_layer(d1, filters * 2) d3 = _build_critic_layer(d2, filters * 4) d4 = _build_critic_layer(d3, filters * 8) x = tf.keras.layers.Flatten()(d4) logit = tf.keras.layers.Dense(1)(x) critic = tf.keras.models.Model([groundtruth_pianoroll,condition_input], logit, name='Critic') return critic_____no_output_____# Create the Discriminator critic = build_critic() critic.summary() # View discriminator architecture._____no_output_____ </code> ## Training We train our models by searching for model parameters which optimize an objective function. For our WGAN-GP, we have special loss functions that we minimize as we alternate between training our generator and critic networks: *Generator Loss:* * We use the Wasserstein (Generator) loss function which is negative of the Critic Loss function. The generator is trained to bring the generated pianoroll as close to the real pianoroll as possible. * $\frac{1}{m} \sum_{i=1}^{m} -D_w(G(z^{i}|c^{i})|c^{i})$ *Critic Loss:* * We begin with the Wasserstein (Critic) loss function designed to maximize the distance between the real piano roll distribution and generated (fake) piano roll distribution. * $\frac{1}{m} \sum_{i=1}^{m} [D_w(G(z^{i}|c^{i})|c^{i}) - D_w(x^{i}|c^{i})]$ * We add a gradient penalty loss function term designed to control how the gradient of the critic with respect to its input behaves. This makes optimization of the generator easier. 
* $\frac{1}{m} \sum_{i=1}^{m}(\lVert \nabla_{\hat{x}^i}D_w(\hat{x}^i|c^{i}) \rVert_2 - 1)^2 $_____no_output_____ <code> # Define the different loss functions def generator_loss(critic_fake_output): """ Wasserstein GAN loss (Generator) -D(G(z|c)) """ return -tf.reduce_mean(critic_fake_output) def wasserstein_loss(critic_real_output, critic_fake_output): """ Wasserstein GAN loss (Critic) D(G(z|c)) - D(x|c) """ return tf.reduce_mean(critic_fake_output) - tf.reduce_mean( critic_real_output) def compute_gradient_penalty(critic, x, fake_x): c = tf.expand_dims(x[..., 0], -1) batch_size = x.get_shape().as_list()[0] eps_x = tf.random.uniform( [batch_size] + [1] * (len(x.get_shape()) - 1)) # B, 1, 1, 1, 1 inter = eps_x * x + (1.0 - eps_x) * fake_x with tf.GradientTape() as g: g.watch(inter) disc_inter_output = critic((inter,c), training=True) grads = g.gradient(disc_inter_output, inter) slopes = tf.sqrt(1e-8 + tf.reduce_sum( tf.square(grads), reduction_indices=tf.range(1, grads.get_shape().ndims))) gradient_penalty = tf.reduce_mean(tf.square(slopes - 1.0)) return gradient_penalty _____no_output_____ </code> With our loss functions defined, we associate them with Tensorflow optimizers to define how our model will search for a good set of model parameters. We use the *Adam* algorithm, a commonly used general-purpose optimizer. We also set up checkpoints to save our progress as we train._____no_output_____ <code> # Setup Adam optimizers for both G and D generator_optimizer = tf.keras.optimizers.Adam(1e-3, beta_1=0.5, beta_2=0.9) critic_optimizer = tf.keras.optimizers.Adam(1e-3, beta_1=0.5, beta_2=0.9) # We define our checkpoint directory and where to save trained checkpoints ckpt = tf.train.Checkpoint(generator=generator, generator_optimizer=generator_optimizer, critic=critic, critic_optimizer=critic_optimizer) ckpt_manager = tf.train.CheckpointManager(ckpt, check_dir, max_to_keep=5)_____no_output_____ </code> Now we define the `generator_train_step` and `critic_train_step` functions, each of which performs a single forward pass on a batch and returns the corresponding loss._____no_output_____ <code> @tf.function def generator_train_step(x, condition_track_idx=0): ############################################ #(1) Update G network: maximize D(G(z|c)) ############################################ # Extract condition track to make real batches pianoroll c = tf.expand_dims(x[..., condition_track_idx], -1) # Generate batch of latent vectors z = tf.random.truncated_normal([BATCH_SIZE, 2, 8, 512]) with tf.GradientTape() as tape: fake_x = generator((c, z), training=True) fake_output = critic((fake_x,c), training=False) # Calculate Generator's loss based on this generated output gen_loss = generator_loss(fake_output) # Calculate gradients for Generator gradients_of_generator = tape.gradient(gen_loss, generator.trainable_variables) # Update Generator generator_optimizer.apply_gradients( zip(gradients_of_generator, generator.trainable_variables)) return gen_loss [email protected] def critic_train_step(x, condition_track_idx=0): ############################################################################ #(2) Update D network: maximize (D(x|c)) + (1 - D(G(z|c))|c) + GradientPenality() ############################################################################ # Extract condition track to make real batches pianoroll c = tf.expand_dims(x[..., condition_track_idx], -1) # Generate batch of latent vectors z = tf.random.truncated_normal([BATCH_SIZE, 2, 8, 512]) # Generated fake pianoroll fake_x = generator((c, z), 
training=False) # Update critic parameters with tf.GradientTape() as tape: real_output = critic((x,c), training=True) fake_output = critic((fake_x,c), training=True) critic_loss = wasserstein_loss(real_output, fake_output) # Caculate the gradients from the real and fake batches grads_of_critic = tape.gradient(critic_loss, critic.trainable_variables) with tf.GradientTape() as tape: gp_loss = compute_gradient_penalty(critic, x, fake_x) gp_loss *= 10.0 # Calculate the gradients penalty from the real and fake batches grads_gp = tape.gradient(gp_loss, critic.trainable_variables) gradients_of_critic = [g + ggp for g, ggp in zip(grads_of_critic, grads_gp) if ggp is not None] # Update Critic critic_optimizer.apply_gradients( zip(gradients_of_critic, critic.trainable_variables)) return critic_loss + gp_loss _____no_output_____ </code> Before we begin training, let's define some training configuration parameters and prepare to monitor important quantities. Here we log the losses and metrics which we can use to determine when to stop training. Consider coming back here to tweak these parameters and explore how your model responds. _____no_output_____ <code> # We use load_melody_samples() to load 10 input data samples from our dataset into sample_x # and 10 random noise latent vectors into sample_z sample_x, sample_z = inference_utils.load_melody_samples(n_sample=10)_____no_output_____# Number of iterations to train for iterations = 1000 # Update critic n times per generator update n_dis_updates_per_gen_update = 5 # Determine input track in sample_x that we condition on condition_track_idx = 0 sample_c = tf.expand_dims(sample_x[..., condition_track_idx], -1)_____no_output_____ </code> Let us now train our model!_____no_output_____ <code> # Clear out any old metrics we've collected metrics_utils.metrics_manager.initialize() # Keep a running list of various quantities: c_losses = [] g_losses = [] # Data iterator to iterate over our dataset it = iter(dataset) for iteration in range(iterations): # Train critic for _ in range(n_dis_updates_per_gen_update): c_loss = critic_train_step(next(it)) # Train generator g_loss = generator_train_step(next(it)) # Save Losses for plotting later c_losses.append(c_loss) g_losses.append(g_loss) display.clear_output(wait=True) fig = plt.figure(figsize=(15, 5)) line1, = plt.plot(range(iteration+1), c_losses, 'r') line2, = plt.plot(range(iteration+1), g_losses, 'k') plt.xlabel('Iterations') plt.ylabel('Losses') plt.legend((line1, line2), ('C-loss', 'G-loss')) display.display(fig) plt.close(fig) # Output training stats print('Iteration {}, c_loss={:.2f}, g_loss={:.2f}'.format(iteration, c_loss, g_loss)) # Save checkpoints, music metrics, generated output if iteration < 100 or iteration % 50 == 0 : # Check how the generator is doing by saving G's samples on fixed_noise fake_sample_x = generator((sample_c, sample_z), training=False) metrics_utils.metrics_manager.append_metrics_for_iteration(fake_sample_x.numpy(), iteration) if iteration % 50 == 0: # Save the checkpoint to disk. ckpt_manager.save(checkpoint_number=iteration) fake_sample_x = fake_sample_x.numpy() # plot the pianoroll display_utils.plot_pianoroll(iteration, sample_x[:4], fake_sample_x[:4], save_dir=train_dir) # generate the midi destination_path = path_utils.generated_midi_path_for_iteration(iteration, saveto_dir=sample_dir) midi_utils.save_pianoroll_as_midi(fake_sample_x[:4], destination_path=destination_path) _____no_output_____ </code> ### We have started training! 
When using the Wasserstein loss function, we should train the critic to converge to ensure that the gradients for the generator update are accurate. This is in contrast to a standard GAN, where it is important not to let the critic get too strong, to avoid vanishing gradients. Therefore, using the Wasserstein loss removes one of the key difficulties of training GANs: how to balance the training of the discriminator and generator. With WGANs, we can simply train the critic several times between generator updates, to ensure it is close to convergence. A typical ratio used is five critic updates to one generator update. ### "Babysitting" the learning process Given that training these models can be an investment in time and resources, we must continuously monitor training in order to catch and address anomalies if/when they occur. Here are some things to look out for: **What should the losses look like?** The adversarial learning process is highly dynamic and high-frequency oscillations are quite common. However, if either loss (critic or generator) skyrockets to huge values, plunges to 0, or gets stuck on a single value, there is likely an issue somewhere. **Is my model learning?** - Monitor the critic loss and other music quality metrics (if applicable). Are they following the expected trajectories? - Monitor the generated samples (piano rolls). Are they improving over time? Do you see evidence of mode collapse? Have you tried listening to your samples? **How do I know when to stop?** - The samples meet your expectations - The critic loss is no longer improving - The expected value of the musical quality metrics converges to the corresponding expected value of the same metric on the training data_____no_output_____### How to measure sample quality during training Typically, when training any sort of neural network, it is standard practice to monitor the value of the loss function throughout the duration of the training. The critic loss in WGANs has been found to correlate well with sample quality. While standard mechanisms exist for evaluating the accuracy of more traditional models like classifiers or regressors, evaluating generative models is an active area of research. Within the domain of music generation, this hard problem is even less well-understood. To address this, we take high-level measurements of our data and show how well our model produces music that aligns with those measurements. If our model produces music which is close to the mean value of these measurements for our training dataset, our music should match on general "shape". We'll look at three such measurements: - **Empty bar rate:** The ratio of empty bars to the total number of bars. - **Pitch histogram distance:** A metric that captures the distribution and position of pitches. - **In Scale Ratio:** The ratio of the number of notes that are in the C major key, which is a common key found in music, to the total number of notes. _____no_output_____## Evaluate results Now that we have finished training, let's find out how we did. We will analyze our model in several ways: 1. Examine how the generator and critic losses changed while training 2. Understand how certain musical metrics changed while training 3. Visualize generated piano roll output for a fixed input at every iteration and create a video _____no_output_____Let us first restore our last saved checkpoint. 
If you did not complete training but still want to continue with a pre-trained version, set `TRAIN = False`._____no_output_____ <code> ckpt = tf.train.Checkpoint(generator=generator) ckpt_manager = tf.train.CheckpointManager(ckpt, check_dir, max_to_keep=5) ckpt.restore(ckpt_manager.latest_checkpoint).expect_partial() print('Latest checkpoint {} restored.'.format(ckpt_manager.latest_checkpoint))_____no_output_____ </code> ### Plot losses_____no_output_____ <code> display_utils.plot_loss_logs(g_losses, c_losses, figsize=(15, 5), smoothing=0.01)_____no_output_____ </code> Observe how the critic loss (C_loss in the graph) decays to zero as we train. In WGAN-GPs, the critic loss decreases (almost) monotonically as you train._____no_output_____### Plot metrics_____no_output_____ <code> metrics_utils.metrics_manager.set_reference_metrics(training_data) metrics_utils.metrics_manager.plot_metrics()_____no_output_____ </code> Each row here corresponds to a different music quality metric and each column denotes an instrument track. Observe how the expected value of the different metrics (blue scatter) approach the corresponding training set expected values (red) as the number of iterations increase. You might expect to see diminishing returns as the model converges. _____no_output_____### Generated samples during training The function below helps you probe intermediate samples generated in the training process. Remember that the conditioned input here is sampled from our training data. Let's start by listening to and observing a sample at iteration 0 and then iteration 100. Notice the difference! _____no_output_____ <code> # Enter an iteration number (can be divided by 50) and listen to the midi at that iteration iteration = 50 midi_file = os.path.join(sample_dir, 'iteration-{}.mid'.format(iteration)) display_utils.playmidi(midi_file) _____no_output_____# Enter an iteration number (can be divided by 50) and look at the generated pianorolls at that iteration iteration = 50 pianoroll_png = os.path.join(train_dir, 'sample_iteration_%05d.png' % iteration) display.Image(filename=pianoroll_png)_____no_output_____ </code> Let's see how the generated piano rolls change with the number of iterations._____no_output_____ <code> from IPython.display import Video display_utils.make_training_video(train_dir) video_path = "movie.mp4" Video(video_path)_____no_output_____ </code> ## Inference _____no_output_____### Generating accompaniment for custom input Congratulations! You have trained your very own WGAN-GP to generate music. Let us see how our generator performs on a custom input. The function below generates a new song based on "Twinkle Twinkle Little Star"._____no_output_____ <code> latest_midi = inference_utils.generate_midi(generator, eval_dir, input_midi_file='./input_twinkle_twinkle.mid')_____no_output_____display_utils.playmidi(latest_midi)_____no_output_____ </code> We can also take a look at the generated piano rolls for a certain sample, to see how diverse they are!_____no_output_____ <code> inference_utils.show_generated_pianorolls(generator, eval_dir, input_midi_file='./input_twinkle_twinkle.mid')_____no_output_____ </code> # What's next?_____no_output_____### Using your own data (Optional) _____no_output_____To create your own dataset you can extract the piano roll from MIDI data. 
An example of creating a piano roll from a MIDI file is given below_____no_output_____ <code> import numpy as np from pypianoroll import Multitrack midi_data = Multitrack('./input_twinkle_twinkle.mid') tracks = [track.pianoroll for track in midi_data.tracks] sample = np.stack(tracks, axis=-1) print(sample.shape)_____no_output_____ </code> # Appendix_____no_output_____### Open source implementations For more open-source implementations of generative models for music, check out: - [MuseGAN](https://github.com/salu133445/musegan): Official TensorFlow implementation that uses GANs to generate multi-track polyphonic music - [GANSynth](https://github.com/tensorflow/magenta/tree/master/magenta/models/gansynth): GANSynth uses a Progressive GAN architecture to incrementally upsample with convolution from a single vector to the full audio spectrogram - [Music Transformer](https://github.com/tensorflow/magenta/tree/master/magenta/models/score2perf): Uses transformers to generate music! GANs have also achieved state-of-the-art generative modeling in several other domains including cross-domain image transfer, celebrity face generation, super resolution, text-to-image synthesis and image inpainting. - [Keras-GAN](https://github.com/eriklindernoren/Keras-GAN): Library of reference implementations in Keras for image generation (good for educational purposes). There's an ocean of literature out there that uses GANs for modeling distributions across fields! If you are interested, [Gan Zoo](https://github.com/hindupuravinash/the-gan-zoo) is a good place to start._____no_output_____### References <a id='references'></a> 1. [Dong, H.W., Hsiao, W.Y., Yang, L.C. and Yang, Y.H., 2018, April. MuseGAN: Multi-track sequential generative adversarial networks for symbolic music generation and accompaniment. In Thirty-Second AAAI Conference on Artificial Intelligence.](https://arxiv.org/abs/1709.06298) 2. [Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V. and Courville, A., 2017. Improved training of Wasserstein GANs. In Advances in Neural Information Processing Systems.](https://arxiv.org/abs/1704.00028) 3. [Arjovsky, M., Chintala, S. and Bottou, L., 2017. Wasserstein GAN. arXiv preprint arXiv:1701.07875.](https://arxiv.org/abs/1701.07875) 4. [Foster, D., 2019. Generative Deep Learning: Teaching Machines to Paint, Write, Compose, and Play. O'Reilly Media.](https://www.amazon.com/Generative-Deep-Learning-Teaching-Machines/dp/1492041947)_____no_output_____### More on Wasserstein GAN with Gradient Penalty (optional) While GANs are a major breakthrough for generative modeling, plain GANs are also notoriously difficult to train. Some common problems encountered are: * **Oscillating loss:** The loss of the discriminator and generator can start to oscillate without exhibiting any long-term stability. * **Mode collapse:** The generator may get stuck on a small set of samples that always fool the discriminator. This reduces the capability of the network to produce novel samples. * **Uninformative loss:** The lack of correlation between the generator loss and the quality of the generated output makes plain GAN training difficult to interpret. The [Wasserstein GAN](#references) was a major advancement in GANs and helped mitigate some of these issues. Some of its features are: 1. It significantly improves the interpretability of loss functions and provides clearer stopping criteria 2. 
WGANs generally produce results of higher quality (demonstrated within the image generation domain). **Mathematics of Wasserstein GAN with Gradient Penalty** The [Wasserstein distance](https://en.wikipedia.org/wiki/Wasserstein_metric) between the true distribution $P_r$ and the generated piano roll distribution $P_g$ is defined as follows: $$\mathbb{W}(P_{r},P_{g})=\sup_{\lVert{f} \rVert_{L} \le 1} \mathbb{E}_{x \sim \mathbb{P}_r}(f(x)) - \mathbb{E}_{x \sim \mathbb{P}_g}(f(x)) $$ In this equation we are trying to minimize the distance between the expectation of the real distribution and the expectation of the generated distribution. $f$ is subject to a technical constraint in that it must be [1-Lipschitz](https://en.wikipedia.org/wiki/Lipschitz_continuity). To enforce the 1-Lipschitz condition, which essentially constrains the gradients from varying too rapidly, we use the gradient penalty. **Gradient penalty**: We want to penalize the gradients of the critic. We implicitly define $P_{\hat{x}}$ by sampling uniformly along straight lines between pairs of points sampled from the data distribution $P_r$ and the generator distribution $P_g$. This was originally motivated by the fact that the optimal critic contains straight lines with gradient norm 1 connecting coupled points from $P_r$ and $P_g$. We use a penalty coefficient $\lambda = 10$ as was recommended in the original paper. The loss with gradient penalty is: $$\mathbb{L}(P_{r},P_{g},P_{\hat{x}})= \mathbb{W}(P_{r},P_{g}) + \lambda \mathbb{E}_{\hat{x} \sim \mathbb{P}_\hat{x}}[(\lVert \nabla_{\hat{x}}D(\hat{x}) \rVert_2 - 1)^2]$$ This loss can be parametrized in terms of $w$ and $\theta$. We then use neural networks to learn the functions $D_w$ (critic) and $G_{\theta}$ (generator). $$\mathbb{W}(P_{r},P_{\theta})=\max_{w \in \mathbb{W}} \mathbb{E}_{x \sim \mathbb{P}_r}(D_w(x)) - \mathbb{E}_{z \sim p(z)}(D_w(G_{\theta}(z))) $$ $$\mathbb{L}(P_{r},P_{\theta},P_{\hat{x}})=\max_{w \in \mathbb{W}} \mathbb{E}_{x \sim \mathbb{P}_r}(D_w(x)) - \mathbb{E}_{z \sim p(z)}(D_w(G_{\theta}(z))) + \lambda \mathbb{E}_{\hat{x} \sim \mathbb{P}_\hat{x}}[(\lVert \nabla_{\hat{x}}D_w(\hat{x}) \rVert_2 - 1)^2]$$ where $$ \hat{x} = \epsilon x + (1- \epsilon) G(z) $$ and $$\epsilon \sim \mathrm{Unif}(0,1)$$ The basic procedure to train is as follows: 1. We draw real_x from the real distribution $P_r$ and fake_x from the generated distribution $G_{\theta}(z)$ where $z \sim p(z)$ 2. The latent vectors are sampled from $z$ and then transformed using the generator $G_{\theta}$ to get the fake samples fake_x. They are evaluated using the critic function $D_w$ 3. We are trying to minimize the Wasserstein distance between the two distributions Both the generator and critic are conditioned on the input pianoroll melody._____no_output_____
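To make the interpolation $\hat{x} = \epsilon x + (1-\epsilon)G(z)$ and the penalty term a bit more concrete, here is a small, self-contained toy illustration (not taken from the notebook's training code) that uses a linear stand-in critic $f(x) = w \cdot x$; its gradient with respect to $x$ is simply $w$, so the penalty reduces to $\lambda(\lVert w \rVert_2 - 1)^2$, which is exactly what pushes the critic toward unit gradient norm:_____no_output_____ <code>
import numpy as np

np.random.seed(0)

# Stand-ins for a batch of real and generated samples (3 features each).
real_x = np.random.normal(loc=2.0, size=(64, 3))
fake_x = np.random.normal(loc=0.0, size=(64, 3))

# Linear toy critic f(x) = w . x; its gradient w.r.t. x is just w.
w = np.random.normal(size=3)

# Interpolate uniformly between real and fake samples, one epsilon per sample.
eps = np.random.uniform(size=(64, 1))
x_hat = eps * real_x + (1.0 - eps) * fake_x

# Gradient penalty: for this linear critic every interpolate x_hat has gradient
# norm ||w||; with a nonlinear critic the gradient would be evaluated at x_hat.
grad_norm = np.linalg.norm(w)
gradient_penalty = 10.0 * np.mean((grad_norm - 1.0) ** 2)

# Critic objective estimate: E[f(fake)] - E[f(real)] plus the penalty.
wasserstein_term = np.mean(fake_x.dot(w)) - np.mean(real_x.dot(w))
print(wasserstein_term + gradient_penalty)
_____no_output_____ </code>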
{ "repository": "daekeun-ml/aws-deepcomposer-samples", "path": "Lab 2/GAN.ipynb", "matched_keywords": [ "STAR" ], "stars": 1, "size": 49101, "hexsha": "4816c8d732003db5e8dab0ba2b5bd8d507089c1f", "max_line_length": 652, "avg_line_length": 41.3308080808, "alphanum_fraction": 0.6207816541 }
# Notebook from ptpro3/ptpro3.github.io Path: Projects/Challenges/Challenge09/challenge_set_9ii_prashant.ipynb ``` Topic: Challenge Set 9 Part II Subject: SQL Date: 02/20/2017 Name: Prashant Tatineni ```_____no_output_____ <code> from sqlalchemy import create_engine import pandas as pd cnx = create_engine('postgresql://prashant:[email protected]:5432/prashant') #port ~ 5432_____no_output_____pd.read_sql_query('''SELECT * FROM allstarfull LIMIT 5''',cnx)_____no_output_____pd.read_sql_query('''SELECT * FROM schools LIMIT 5''',cnx)_____no_output_____pd.read_sql_query('''SELECT * FROM salaries LIMIT 5''',cnx)_____no_output_____pd.read_sql_query('''SELECT schoolstate,Count(schoolid) as ct FROM schools Group By schoolstate ORDER BY ct DESC LIMIT 5''',cnx)_____no_output_____pd.read_sql_query('''SELECT playerid,salary FROM Salaries WHERE yearid = '1985' and salary > '500000' LIMIT 5;''',cnx)_____no_output_____ </code> **1. What was the total spent on salaries by each team, each year?**_____no_output_____ <code> pd.read_sql_query('''SELECT yearid, teamid, SUM(salary) FROM salaries GROUP BY 1,2 ORDER BY 1 DESC LIMIT 10 ''',cnx)_____no_output_____ </code> **2. What is the first and last year played for each player? Hint: Create a new table from 'Fielding.csv'.**_____no_output_____ <code> pd.read_sql_query('''SELECT playerid, min(yearid), max(yearid) FROM fielding GROUP BY 1 LIMIT 10 ''',cnx)_____no_output_____ </code> **3. Who has played the most all star games?**_____no_output_____ <code> pd.read_sql_query('''SELECT playerid, COUNT(*) FROM allstarfull GROUP BY 1 ORDER BY 2 DESC LIMIT 1 ''',cnx)_____no_output_____ </code> **4. Which school has generated the most distinct players? Hint: Create new table from 'CollegePlaying.csv'.**_____no_output_____ <code> pd.read_sql_query('''SELECT schoolid, count(distinct playerid) FROM schoolsplayers GROUP BY 1 ORDER BY 2 DESC LIMIT 1 ''',cnx)_____no_output_____ </code> **5. Which players have the longest career? Assume that the debut and finalGame columns comprise the start and end, respectively, of a player's career. Hint: Create a new table from 'Master.csv'. Also note that strings can be converted to dates using the DATE function and can then be subtracted from each other yielding their difference in days.**_____no_output_____ <code> pd.read_sql_query('''SELECT playerid, finalgame, debut, (finalgame-debut) AS days FROM master WHERE finalgame IS NOT NULL and debut IS NOT NULL ORDER BY 4 DESC LIMIT 5''',cnx)_____no_output_____ </code> **6. What is the distribution of debut months? Hint: Look at the DATE and EXTRACT functions.**_____no_output_____ <code> pd.read_sql_query('''SELECT EXTRACT(MONTH FROM debut) AS debut_month, COUNT(*) FROM master GROUP BY 1 ORDER BY 1 ASC''',cnx)_____no_output_____ </code> **7. What is the effect of table join order on mean salary for the players listed in the main (master) table? Hint: Perform two different queries, one that joins on playerID in the salary table and other that joins on the same column in the master table. 
You will have to use left joins for each since right joins are not currently supported with SQLAlchemy.**_____no_output_____ <code> pd.read_sql_query('''SELECT S.playerid, AVG(salary) FROM Salaries S LEFT JOIN Master M ON S.playerid = M.playerid GROUP BY 1 LIMIT 5''',cnx)_____no_output_____pd.read_sql_query('''SELECT M.playerid, AVG(salary) FROM Master M LEFT JOIN Salaries S ON M.playerid = S.playerid GROUP BY 1 LIMIT 5''',cnx)_____no_output_____ </code> By starting the join from the Master table, we can see that there are many players with no salary data; the follow-up query below makes this explicit._____no_output_____
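As a hedged follow-up (not part of the original challenge set, and assuming the same Master and Salaries tables used above), you can count those players explicitly by keeping only the rows where the left join found no matching salary record:_____no_output_____ <code>
# Players in the Master table with no salary records at all
pd.read_sql_query('''SELECT COUNT(DISTINCT M.playerid) AS players_without_salary
                     FROM Master M
                     LEFT JOIN Salaries S ON M.playerid = S.playerid
                     WHERE S.playerid IS NULL''', cnx)
_____no_output_____ </code>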
{ "repository": "ptpro3/ptpro3.github.io", "path": "Projects/Challenges/Challenge09/challenge_set_9ii_prashant.ipynb", "matched_keywords": [ "STAR" ], "stars": 2, "size": 36293, "hexsha": "4816e57d757ea6430a3c89227b643923ff06ecae", "max_line_length": 366, "avg_line_length": 27.1451009723, "alphanum_fraction": 0.3589948475 }
# Notebook from Vixk2021/Foody Path: projet_foody_analyse_VKO.ipynb # FOODY Project _ Data Analysis_____no_output_____ <code> # Import dependencies import numpy as np import pandas as pd import matplotlib.pyplot as plt import statsmodels.api as sm import pymysql as sql import seaborn as sns sns.set()_____no_output_____# Connect to the Foody_data database conn = sql.connect(host='Localhost',user='root',passwd='XXXXXXXXXX', database='Foody_data') if conn: print("Connection Successful!") else: print("Connection Failed!") cur = conn.cursor()Connection Successful! def sql_to_df(sql_query): # Use pandas to run the SQL query over the MySQL connection df = pd.read_sql(sql_query, conn) # Return the resulting DataFrame return df_____no_output_____%matplotlib inline_____no_output_____ </code> ## What are the top 5 countries that placed the most orders? _____no_output_____ <code> query1 = '''SELECT distinct PaysLiv, count(NoCom) as Nb_commande FROM Commande group by PaysLiv ORDER BY Nb_commande DESC LIMIT 5 ''' sql_to_df(query1)_____no_output_____df1 = sql_to_df(query1) sns.set_style("white") plt.figure(figsize = (9, 6)) plt.bar(x = df1["PaysLiv"] , height = df1["Nb_commande"], color = "midnightblue") plt.xticks( fontsize = 13) plt.yticks( fontsize = 13) plt.title("Top 5 countries", fontsize = 16, fontweight = "bold") plt.ylabel("Nb_commandes", fontsize = 13 ) plt.tight_layout() plt.savefig("Top5_pays_cmd.png") plt.show()_____no_output_____ </code> => Answer: as shown in the chart above, the top 5 are Germany, the USA, Brazil, France and the UK_____no_output_____## How did the orders from these top 5 countries evolve between 2006, 2007 and 2008? _____no_output_____ <code> query2 = '''SELECT PaysLiv, count(DateCom) as nb_cmd_2006 FROM Commande WHERE DateCom like "%2006%" group by PaysLiv order By nb_cmd_2006 desc LIMIT 5;''' sql_to_df(query2)_____no_output_____df2 = sql_to_df(query2) sns.set_style("white") plt.figure(figsize = (9, 6)) plt.bar(x = df2["PaysLiv"] , height = df2["nb_cmd_2006"], color = "orange") plt.xticks( fontsize = 15) plt.yticks( fontsize = 15) plt.title("Number of orders for the top 5 countries in 2006", fontsize = 20, fontweight = "bold") plt.ylabel("Number of orders", fontsize = 18 ) plt.tight_layout() plt.savefig("Cmd_paystop5_2006.png") plt.show()_____no_output_____query3 = '''SELECT PaysLiv, YEAR(DateCom) as Annees , count(DateCom) as nb_cmd FROM Commande WHERE (DateCom like "%2006%" OR DateCom Like "%2007%" OR DateCom like "%2008%") AND PaysLiv IN ("Germany", "USA" , "Brazil", "France", "UK") group by PaysLiv, YEAR (DateCom) ORDER BY PaysLiv;''' df3 = sql_to_df(query3) df3_____no_output_____plt.style.use('ggplot') sns.set_style("white") n = 5 x_2006 = (df3.nb_cmd[0::3]) x_2007 = (df3.nb_cmd[1::3]) x_2008 = (df3.nb_cmd[2::3]) fig, ax = plt.subplots(figsize = (9, 6)) index = np.arange(n) bar_width = 0.3 opacity = 0.9 ax.bar(index, x_2006, bar_width, alpha=opacity, color='orange',label='2006') ax.bar(index+bar_width, x_2007, bar_width, alpha=opacity, color='blue',label='2007') ax.bar(index+2*bar_width, x_2008, bar_width, alpha=opacity,color='green', label='2008') ax.set_xlabel('Top 5 countries', size=15) ax.set_ylabel('number of orders', size=18) ax.set_title('Evolution of orders between 2006 and 2008', size=18,fontweight='bold') ax.set_xticks(index + bar_width) ax.set_xticklabels((df3.PaysLiv[0::3]), size = 15)#("Brazil","France","Germany", "UK","USA")) ax.legend(ncol=3) plt.savefig("Evolution_top5.png") plt.show()_____no_output_____ </code> ### Example of a "Grouped bar chart" to adapt for our 
dataframe_____no_output_____ <code> categorical_1 = ['A', 'B', 'C', 'D'] colors = ['green', 'red', 'blue', 'orange'] numerical = [[6, 9, 2, 7], [6, 7, 3, 8], [9, 11, 13, 15], [3, 5, 9, 6]] number_groups = len(categorical_1) bin_width = 1.0/(number_groups+1) fig, ax = plt.subplots(figsize=(6,6)) for i in range(number_groups): ax.bar(x=np.arange(len(categorical_1)) + i*bin_width, height=numerical[i], width=bin_width, color=colors[i], align='center') ax.set_xticks(np.arange(len(categorical_1)) + number_groups/(2*(number_groups+1))) # number_groups/(2*(number_groups+1)): offset of the xticklabels ax.set_xticklabels(categorical_1) ax.legend(categorical_1, facecolor='w') plt.show()_____no_output_____ </code> ### Stacked bar chart test:_____no_output_____ <code> labels = ['Brazil','France','Germany', 'UK','USA'] x_2006 = [13, 15, 24, 10, 23] x_2007 = [42, 39, 64, 30, 60] x_2008 = [28, 23, 39, 16, 39] fig, ax = plt.subplots() ax.bar(labels, x_2006, label='2006') ax.bar(labels, x_2007, bottom=x_2006, label='2007') ax.bar(labels, x_2008, bottom=[a + b for a, b in zip(x_2006, x_2007)], label='2008') ax.set_ylabel('Number of orders', fontsize=15) ax.set_xlabel('Top 5 countries', fontsize=15) ax.set_title('Orders split between 2006, 2007 and 2008', fontsize=16, fontweight='bold') ax.legend() plt.savefig("Stacked_bar_char.png") plt.show()_____no_output_____query4 = '''SELECT CodeCateg, NomCateg, count(NomProd) as nb_prod From produit natural join Categorie group by CodeCateg;''' sql_to_df(query4)_____no_output_____df4 = sql_to_df(query4) # Pie chart, where the slices will be ordered # and plotted counter-clockwise: labels = df4['NomCateg'] sizes = df4['nb_prod'] #explode = (0, 0.1, 0, 0) # only "explode" the 2nd slice (i.e. 'Hogs') fig1, ax1 = plt.subplots(figsize = (6,6)) ax1.pie(sizes, labels=labels, autopct='%1.1f%%', shadow=True, startangle=90) ax1.axis('equal') # Equal aspect ratio ensures that pie is drawn as a circle. plt.savefig("Pie_chart.png") plt.show()_____no_output_____ </code>
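As a possible simplification of the grouped bar chart above (a suggestion, not part of the original notebook), the same figure can be produced directly from `df3` by pivoting it so that pandas handles the per-year bar placement:_____no_output_____ <code>
# Pivot df3 (columns: PaysLiv, Annees, nb_cmd) into one column per year,
# then let pandas draw the grouped bars.
pivoted = df3.pivot(index='PaysLiv', columns='Annees', values='nb_cmd')
ax = pivoted.plot.bar(figsize=(9, 6), rot=0)
ax.set_xlabel('Top 5 countries')
ax.set_ylabel('number of orders')
ax.set_title('Evolution of orders between 2006 and 2008')
plt.tight_layout()
plt.show()
_____no_output_____ </code>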
{ "repository": "Vixk2021/Foody", "path": "projet_foody_analyse_VKO.ipynb", "matched_keywords": [ "evolution" ], "stars": null, "size": 180351, "hexsha": "48172b2cd5ae2a3a9e869c7f46e91ef311545b94", "max_line_length": 49668, "avg_line_length": 228.8718274112, "alphanum_fraction": 0.9001336283 }
# Notebook from kne42/starfish Path: notebooks/BaristaSeq.ipynb <code> %matplotlib inline_____no_output_____ </code> # BaristaSeq BaristaSeq is an assay that sequences padlock-probe initiated rolling circle amplified spots using a one-hot codebook. The publication for this assay can be found [here](https://www.ncbi.nlm.nih.gov/pubmed/29190363) This example processes a single field of view extracted from a tissue slide that measures gene expression in mouse primary visual cortex._____no_output_____ <code> import matplotlib import matplotlib.pyplot as plt import numpy as np import pandas as pd import starfish import starfish.data from starfish.types import Axes from starfish.util.plot import ( imshow_plane, intensity_histogram, overlay_spot_calls ) matplotlib.rcParams["figure.dpi"] = 150_____no_output_____ </code> Load Data --------- Import starfish and extract a single field of view._____no_output_____ <code> experiment_json = ( "https://d2nhj9g34unfro.cloudfront.net/browse/formatted/20190319/baristaseq" "/experiment.json" ) exp = starfish.Experiment.from_json(experiment_json) nissl = exp['fov_000'].get_image('dots') img = exp['fov_000'].get_image('primary')_____no_output_____ </code> starfish data are 5-dimensional, but to demonstrate what they look like in a non-interactive fashion, it's best to visualize the data in 2-d. There are better ways to look at these data using the `starfish.display` method, which allows the user to page through each axis of the tensor_____no_output_____ <code> # for this vignette, we'll pick one plane and track it through the processing # steps plane_selector = {Axes.CH: 0, Axes.ROUND: 0, Axes.ZPLANE: 8} f, (ax1, ax2) = plt.subplots(ncols=2) imshow_plane(img, sel=plane_selector, ax=ax1, title="primary image") imshow_plane(nissl, sel=plane_selector, ax=ax2, title="nissl image")_____no_output_____ </code> Register the data ----------------- The first step in BaristaSeq is to do some rough registration. For this data, the rough registration has been done for us by the authors, so it is omitted from this notebook._____no_output_____Project into 2D --------------- BaristaSeq is typically processed in 2d. Starfish exposes `ImageStack.max_proj` to enable a user to max-project any axes. Here we max project Z for both the nissl images and the primary images._____no_output_____ <code> z_projected_image = img.max_proj(Axes.ZPLANE) z_projected_nissl = nissl.max_proj(Axes.ZPLANE) # show the projected data f, (ax1, ax2) = plt.subplots(ncols=2) imshow_plane(img, sel=plane_selector, ax=ax1, title="primary image") imshow_plane(nissl, sel=plane_selector, ax=ax2, title="nissl image")_____no_output_____ </code> Correct Channel Misalignment ---------------------------- There is a slight miss-alignment of the C channel in the microscope used to acquire the data. 
This has been corrected for this data, but here is how it could be transformed using python code for future datasets._____no_output_____ <code> # from skimage.feature import register_translation # from skimage.transform import warp # from skimage.transform import SimilarityTransform # from functools import partial # # Define the translation # transform = SimilarityTransform(translation=(1.9, -0.4)) # # C is channel 0 # channels = (0,) # # The channel should be transformed in all rounds # rounds = np.arange(img.num_rounds) # # apply the transformation in place # slice_indices = product(channels, rounds) # for ch, round_, in slice_indices: # selector = {Axes.ROUND: round_, Axes.CH: ch, Axes.ZPLANE: 0} # tile = z_projected_image.get_slice(selector)[0] # transformed = warp(tile, transform) # z_projected_image.set_slice( # selector=selector, # data=transformed.astype(np.float32), # )_____no_output_____ </code> Remove Registration Artefacts ----------------------------- There are some minor registration errors along the pixels for which y < 100 and x < 50. Those pixels are dropped from this analysis_____no_output_____ <code> registration_corrected: starfish.ImageStack = z_projected_image.sel( {Axes.Y: (100, -1), Axes.X: (50, -1)} )_____no_output_____ </code> Correct for bleed-through from Illumina SBS reagents ---------------------------------------------------- The following matrix contains bleed correction factors for Illumina sequencing-by-synthesis reagents. Starfish provides a LinearUnmixing method that will unmix the fluorescence intensities_____no_output_____ <code> data = np.array( [[ 1. , -0.05, 0. , 0. ], [-0.35, 1. , 0. , 0. ], [ 0. , -0.02, 1. , -0.84], [ 0. , 0. , -0.05, 1. ]] ) rows = pd.Index(np.arange(4), name='bleed_from') cols = pd.Index(np.arange(4), name='bleed_to') unmixing_coeff = pd.DataFrame(data, rows, cols) lum = starfish.image.Filter.LinearUnmixing(unmixing_coeff) bleed_corrected = lum.run(registration_corrected, in_place=False)_____no_output_____ </code> the matrix shows that (zero-based!) channel 2 bleeds particularly heavily into channel 3. To demonstrate the effect of unmixing, we'll plot channels 2 and 3 of round 0 before and after unmixing. Channel 2 should look relative unchanged, as it only receives a bleed through of 5% of channel 3. However, Channel 3 should look dramatically sparser after spots from Channel 2 have been subtracted_____no_output_____ <code> # TODO ambrosejcarr fix this. 
ch2_r0 = {Axes.CH: 2, Axes.ROUND: 0, Axes.X: (500, 700), Axes.Y: (500, 700)} ch3_r0 = {Axes.CH: 3, Axes.ROUND: 0, Axes.X: (500, 700), Axes.Y: (500, 700)} f, ((ax1, ax2), (ax3, ax4)) = plt.subplots(nrows=2, ncols=2) imshow_plane( registration_corrected, sel=ch2_r0, ax=ax1, title="Channel 2\nBefore Unmixing" ) imshow_plane( registration_corrected, sel=ch3_r0, ax=ax2, title="Channel 3\nBefore Unmixing" ) imshow_plane( bleed_corrected, sel=ch2_r0, ax=ax3, title="Channel 2\nAfter Unmixing" ) imshow_plane( bleed_corrected, sel=ch3_r0, ax=ax4, title="Channel 3\nAfter Unmixing" ) f.tight_layout()_____no_output_____ </code> Remove image background ----------------------- To remove image background, BaristaSeq uses a White Tophat filter, which measures the background with a rolling disk morphological element and subtracts it from the image._____no_output_____ <code> from skimage.morphology import opening, dilation, disk from functools import partial # calculate the background opening = partial(opening, selem=disk(5)) background = bleed_corrected.apply( opening, group_by={Axes.ROUND, Axes.CH, Axes.ZPLANE}, verbose=False, in_place=False ) wth = starfish.image.Filter.WhiteTophat(masking_radius=5) background_corrected = wth.run(bleed_corrected, in_place=False) f, (ax1, ax2, ax3) = plt.subplots(ncols=3) selector = {Axes.CH: 0, Axes.ROUND: 0, Axes.X: (500, 700), Axes.Y: (500, 700)} imshow_plane(bleed_corrected, sel=selector, ax=ax1, title="template\nimage") imshow_plane(background, sel=selector, ax=ax2, title="background") imshow_plane( background_corrected, sel=selector, ax=ax3, title="background\ncorrected" ) f.tight_layout()_____no_output_____ </code> Scale images to equalize spot intensities across channels --------------------------------------------------------- The number of peaks are not uniform across rounds and channels, which prevents histogram matching across channels. Instead, a percentile value is identified and set as the maximum across channels, and the dynamic range is extended to equalize the channel intensities. We first demonatrate what scaling by the max value does._____no_output_____ <code> sbp = starfish.image.Filter.Clip(p_max=100, expand_dynamic_range=True) scaled = sbp.run(background_corrected, n_processes=1, in_place=False)_____no_output_____ </code> The easiest way to visualize this is to calculate the intensity histograms before and after this scaling and plot their log-transformed values. This should see that the histograms are better aligned in terms of intensities. It gets most of what we want, but the histograms are still slightly shifted; a result of high-value outliers._____no_output_____ <code> def plot_scaling_result( template: starfish.ImageStack, scaled: starfish.ImageStack ): f, (before, after) = plt.subplots(ncols=4, nrows=2) for channel, ax in enumerate(before): title = f'Before scaling\nChannel {channel}' intensity_histogram( template, sel={Axes.CH: channel, Axes.ROUND: 0}, ax=ax, title=title, log=True, bins=50, ) ax.set_xlim(0, 0.007) for channel, ax in enumerate(after): title = f'After scaling\nChannel {channel}' intensity_histogram( scaled, sel={Axes.CH: channel, Axes.ROUND: 0}, ax=ax, title=title, log=True, bins=50, ) f.tight_layout() return f f = plot_scaling_result(background_corrected, scaled)_____no_output_____ </code> We repeat this scaling by the 99.8th percentile value, which does a better job of equalizing the intensity distributions. It should also be visible that exactly 0.2% of values take on the max value of 1. 
This is a result of setting any value above the 99.8th percentile to 1, and is a trade-off made to eliminate large-value outliers._____no_output_____ <code> sbp = starfish.image.Filter.Clip(p_max=99.8, expand_dynamic_range=True) scaled = sbp.run(background_corrected, n_processes=1, in_place=False) f = plot_scaling_result(background_corrected, scaled)_____no_output_____ </code> ## Detect Spots We use a pixel spot decoder to identify the gene target for each spot._____no_output_____ <code> psd = starfish.spots.DetectPixels.PixelSpotDecoder( codebook=exp.codebook, metric='euclidean', distance_threshold=0.5, magnitude_threshold=0.1, min_area=7, max_area=50 ) pixel_decoded, ccdr = psd.run(scaled)_____no_output_____ </code> plot a mask that shows where pixels have decoded to genes._____no_output_____ <code> f, ax = plt.subplots() ax.imshow(np.squeeze(ccdr.decoded_image), cmap=plt.cm.nipy_spectral) ax.axis("off") ax.set_title("Pixel Decoding Results")_____no_output_____ </code> Get the total counts for each gene from each spot detector. Do the below values make sense for this tissue and this probeset?_____no_output_____ <code> pixel_decoded_gene_counts = pd.Series( *np.unique(pixel_decoded['target'], return_counts=True)[::-1] ) print(pixel_decoded_gene_counts.sort_values(ascending=False)[:20])_____no_output_____ </code>
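One quick way to eyeball those counts (a small addition, using only the `pixel_decoded_gene_counts` series computed above) is to plot the most frequently decoded targets:_____no_output_____ <code>
# Bar chart of the 20 most frequently decoded targets
f, ax = plt.subplots(figsize=(6, 3))
pixel_decoded_gene_counts.sort_values(ascending=False)[:20].plot.bar(ax=ax)
ax.set_ylabel("decoded spots")
ax.set_title("Top 20 decoded targets")
f.tight_layout()
_____no_output_____ </code>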
{ "repository": "kne42/starfish", "path": "notebooks/BaristaSeq.ipynb", "matched_keywords": [ "gene expression" ], "stars": 2, "size": 15104, "hexsha": "481754056d5b46811073a2e3cfbe459f56048155", "max_line_length": 91, "avg_line_length": 31.2712215321, "alphanum_fraction": 0.5816340042 }
# Notebook from martin-fabbri/colab-notebooks Path: 01_fitting_gaussian_process_model.ipynb <a href="https://colab.research.google.com/github/martin-fabbri/colab-notebooks/blob/master/01_fitting_gaussian_process_model.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>_____no_output_____# Fitting Gaussian Process Model in PyMC3 A common applied task involves building regression models to characterize non-linear relationships between variables. It is possible to fit such models by assuming a particular non-linear structure, such as a sinusoidal, exponential or polynomial function, to describe the response of one variable to another. A non-parametric approach can be adopted by defining a set of knots across the variable space and using a spline or kernel regression to describe arbitrary non-linear relationships. Alternatively, a Bayesian non-parametric strategy can be adopted to directly model the unknown underlying function. Use of the term "non-parametric" in the context of Bayesian analysis is something of a misnomer. This is because the fundamental first step in Bayesian modeling is to specify a full probability model for the problem at hand, assigning probability densities to all unknown quantities of interest. So, it is difficult to explicitly state a full probability model without the use of probability functions, which are parametric! It turns out that Bayesian non-parametric methods do not imply that there are no parameters, but rather that the number of parameters grows with the size of the dataset. In fact, Bayesian non-parametric models are infinitely parametric. ## Building models with Gaussians What if we chose to use Gaussian distributions to model our data? $$p(x \mid \mu, \Sigma) = (2\pi)^{-k/2}|\Sigma|^{-1/2} \exp\left\{ -\frac{1}{2} (x-\mu)^{\prime}\Sigma^{-1}(x-\mu) \right\}$$ There would not seem to be an advantage to doing this, because normal distributions are not particularly flexible distributions in and of themselves. However, adopting a set of Gaussians (a multivariate normal vector) confers a number of advantages. First, the marginal distribution of any subset of elements from a multivariate normal distribution is also normal: $$p(x,y) = \mathcal{N}\left(\left[\begin{array}{c} \mu_x \\ \mu_y \end{array}\right], \left[\begin{array}{cc} \Sigma_x & \Sigma_{xy} \\ \Sigma_{xy}^T & \Sigma_y \end{array}\right]\right)$$ $$p(x) = \int p(x,y) \, dy = \mathcal{N}(\mu_x, \Sigma_x)$$ Also, conditional distributions of a subset of a multivariate normal distribution (conditional on the remaining elements) are normal too: $$p(x|y) = \mathcal{N}(\mu_x + \Sigma_{xy}\Sigma_y^{-1}(y-\mu_y), \Sigma_x-\Sigma_{xy}\Sigma_y^{-1}\Sigma_{xy}^T)$$ A Gaussian process generalizes the multivariate normal to infinite dimension. It is defined as an infinite collection of random variables, any finite subset of which has a Gaussian distribution. Thus, the marginalization property is explicit in its definition. Another way of thinking about an infinite vector is as a function. When we write a function that takes continuous values as inputs, we are essentially specifying an infinite vector that only returns values (indexed by the inputs) when the function is called upon to do so.
By the same token, this notion of an infinite-dimensional Gaussian as a function allows us to work with them computationally: we are never required to store all the elements of the Gaussian process, only to calculate them on demand. So, we can describe a Gaussian process as a disribution over functions. Just as a multivariate normal distribution is completely specified by a mean vector and covariance matrix, a GP is fully specified by a mean function and a covariance function: $$p(x) \sim \mathcal{GP}(m(x), k(x,x^{\prime}))$$ It is the marginalization property that makes working with a Gaussian process feasible: we can marginalize over the infinitely-many variables that we are not interested in, or have not observed. For example, one specification of a GP might be as follows: $$\begin{aligned} m(x) &amp;=0 \\ k(x,x^{\prime}) &amp;= \theta_1\exp\left(-\frac{\theta_2}{2}(x-x^{\prime})^2\right) \end{aligned}$$ here, the covariance function is a squared exponential, for which values of $x$ and $x^{\prime}$ that are close together result in values of $k$ closer to 1 and those that are far apart return values closer to zero. It may seem odd to simply adopt the zero function to represent the mean function of the Gaussian process -- surely we can do better than that! It turns out that most of the learning in the GP involves the covariance function and its parameters, so very little is gained in specifying a complicated mean function. For a finite number of points, the GP becomes a multivariate normal, with the mean and covariance as the mean functon and covariance function evaluated at those points._____no_output_____ <code> !nvidia-smiFri Oct 4 02:59:35 2019 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 430.40 Driver Version: 418.67 CUDA Version: 10.1 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 Tesla K80 Off | 00000000:00:04.0 Off | 0 | | N/A 51C P8 31W / 149W | 0MiB / 11441MiB | 0% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | No running processes found | +-----------------------------------------------------------------------------+ !git clone https://github.com/martin-fabbri/colab-notebooksCloning into 'colab-notebooks'... remote: Enumerating objects: 94, done. remote: Counting objects: 100% (94/94), done. remote: Compressing objects: 100% (89/89), done. remote: Total 458 (delta 58), reused 13 (delta 5), pack-reused 364 Receiving objects: 100% (458/458), 10.38 MiB | 11.60 MiB/s, done. Resolving deltas: 100% (257/257), done. !pip uninstall arviz !pip uninstall pymc3WARNING: Skipping arviz as it is not installed. Uninstalling pymc3-3.7: Would remove: /usr/local/lib/python3.6/dist-packages/pymc3-3.7.dist-info/* /usr/local/lib/python3.6/dist-packages/pymc3/* Proceed (y/n)? 
y Successfully uninstalled pymc3-3.7 !pip install arviz !pip install pymc3Collecting arviz [?25l Downloading https://files.pythonhosted.org/packages/fa/de/7ee2d4da966097029ed40216674b7b84e55c8fbc3bbf8fb0080f930de46c/arviz-0.5.1-py3-none-any.whl (1.4MB)  |████████████████████████████████| 1.4MB 6.3MB/s [?25hRequirement already satisfied: pandas>=0.23 in /usr/local/lib/python3.6/dist-packages (from arviz) (0.24.2) Requirement already satisfied: scipy>=0.19 in /usr/local/lib/python3.6/dist-packages (from arviz) (1.3.1) Requirement already satisfied: numpy>=1.12 in /usr/local/lib/python3.6/dist-packages (from arviz) (1.16.5) Requirement already satisfied: xarray>=0.11 in /usr/local/lib/python3.6/dist-packages (from arviz) (0.11.3) Requirement already satisfied: matplotlib>=3.0 in /usr/local/lib/python3.6/dist-packages (from arviz) (3.0.3) Collecting netcdf4 (from arviz) [?25l Downloading https://files.pythonhosted.org/packages/e4/bd/689b5f9194a47240dad6cd1fd5854ab5253a7702b3bfcf4f5132db8344c8/netCDF4-1.5.2-cp36-cp36m-manylinux1_x86_64.whl (4.1MB)  |████████████████████████████████| 4.1MB 39.8MB/s [?25hRequirement already satisfied: pytz>=2011k in /usr/local/lib/python3.6/dist-packages (from pandas>=0.23->arviz) (2018.9) Requirement already satisfied: python-dateutil>=2.5.0 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.23->arviz) (2.5.3) Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=3.0->arviz) (2.4.2) Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=3.0->arviz) (1.1.0) Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=3.0->arviz) (0.10.0) Collecting cftime (from netcdf4->arviz) [?25l Downloading https://files.pythonhosted.org/packages/70/64/8ceadda42af3c1b27ee77005807e38c6d77baef28a8f9216b60577fddd71/cftime-1.0.3.4-cp36-cp36m-manylinux1_x86_64.whl (305kB)  |████████████████████████████████| 307kB 47.7MB/s [?25hRequirement already satisfied: six>=1.5 in /usr/local/lib/python3.6/dist-packages (from python-dateutil>=2.5.0->pandas>=0.23->arviz) (1.12.0) Requirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from kiwisolver>=1.0.1->matplotlib>=3.0->arviz) (41.2.0) Installing collected packages: cftime, netcdf4, arviz Successfully installed arviz-0.5.1 cftime-1.0.3.4 netcdf4-1.5.2 Collecting pymc3 [?25l Downloading https://files.pythonhosted.org/packages/42/c2/86e8be42b99d64932fa12611b502882a5f4d834b6d1d126bf3f956ad6428/pymc3-3.7-py3-none-any.whl (856kB)  |████████████████████████████████| 860kB 6.4MB/s [?25hRequirement already satisfied: theano>=1.0.4 in /usr/local/lib/python3.6/dist-packages (from pymc3) (1.0.4) Requirement already satisfied: numpy>=1.13.0 in /usr/local/lib/python3.6/dist-packages (from pymc3) (1.16.5) Requirement already satisfied: pandas>=0.18.0 in /usr/local/lib/python3.6/dist-packages (from pymc3) (0.24.2) Requirement already satisfied: h5py>=2.7.0 in /usr/local/lib/python3.6/dist-packages (from pymc3) (2.8.0) Requirement already satisfied: scipy>=0.18.1 in /usr/local/lib/python3.6/dist-packages (from pymc3) (1.3.1) Requirement already satisfied: tqdm>=4.8.4 in /usr/local/lib/python3.6/dist-packages (from pymc3) (4.28.1) Requirement already satisfied: patsy>=0.4.0 in /usr/local/lib/python3.6/dist-packages (from pymc3) (0.5.1) Requirement already satisfied: six>=1.9.0 in /usr/local/lib/python3.6/dist-packages (from theano>=1.0.4->pymc3) 
(1.12.0) Requirement already satisfied: python-dateutil>=2.5.0 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.18.0->pymc3) (2.5.3) Requirement already satisfied: pytz>=2011k in /usr/local/lib/python3.6/dist-packages (from pandas>=0.18.0->pymc3) (2018.9) Installing collected packages: pymc3 Successfully installed pymc3-3.7 import numpy as np import matplotlib.pyplot as plt import matplotlib.cm as cmap import pymc3 as pm import scipy as sp import pandas as pd import seaborn as sns import theano.tensor as tt import theano import arviz %matplotlib inline sns.set_context('talk') np.random.seed(42)_____no_output_____ </code> ### Squared Exponential Covariance $$\begin{aligned} k(x,x^{\prime}) = \theta_1\exp\left(-\frac{\theta_2}{2}(x-x^{\prime})^2\right) \end{aligned}$$_____no_output_____ <code> def exponential_cov(x, y, params): """ params -> theta(i) """ return params[0] * np.exp(-0.5 * params[1] * np.subtract.outer(x, y)**2) _____no_output_____ </code> ### Sampling from a Gaussian Process We are going generate realizations sequentially, point by point, using the lovely conditioning property of mutlivariate Gaussian distributions. Here is that conditional: $$p(x|y) = \mathcal{N}(\mu_x + \Sigma_{xy}\Sigma_y^{-1}(y-\mu_y), \Sigma_x-\Sigma_{xy}\Sigma_y^{-1}\Sigma_{xy}^T)$$ And this the function that implements it:_____no_output_____ <code> def conditional(x_new, x, y, params): B = exponential_cov(x_new, x, params) C = exponential_cov(x, x, params) A = exponential_cov(x_new, x_new, params) mu = np.linalg.inv(C).dot(B.T).T.dot(y) sigma = A - B.dot(np.linalg.inv(C).dot(B.T)) return(mu.squeeze(), sigma.squeeze()) _____no_output_____ </code> We will start with a Gaussian process prior with hyperparameters $\theta_0=1, \theta_1=10$. We will also assume a zero function as the mean, so we can plot a band that represents one standard deviation from the mean._____no_output_____ <code> θ = [1, 10] σ_0 = exponential_cov(0, 0, θ) xpts = np.arange(-3, 3, step=0.01) plt.errorbar(xpts, np.zeros(len(xpts)), yerr=σ_0, capsize=0) plt.ylim(-3, 3)_____no_output_____ </code> Let's select an arbitrary starting point to sample, say $x=1$. Since there are no prevous points, we can sample from an unconditional Gaussian:_____no_output_____ <code> x = [1.] y = [np.random.normal(scale=σ_0)] y_____no_output_____ </code> We can now update our confidence band, given the point that we just sampled, using the covariance function to generate new point-wise intervals, conditional on the value $[x_0, y_0]$._____no_output_____ <code> σ_1 = exponential_cov(x, x, θ) σ_1_____no_output_____def predict(x, data, kernel, params, sigma, t): k = [kernel(x, y, params) for y in data] Sinv = np.linalg.inv(sigma) y_pred = np.dot(k, Sinv).dot(t) sigma_new = kernel(x, x, params) - np.dot(k, Sinv).dot(k) return y_pred, sigma_new _____no_output_____x_pred = np.linspace(-3, 3, 1000) predictions = [predict(i, x, exponential_cov, θ, σ_1, y) for i in x_pred] predictions[:3]_____no_output_____y_pred, sigmas = np.transpose(predictions) plt.errorbar(x_pred, y_pred, yerr=sigmas, capsize=0) plt.plot(x, y, "ro") plt.xlim(-3, 3); plt.ylim(-3, 3);_____no_output_____ </code> So conditional on this point, and the covariance structure we gave specified, we have essentialy contrained the probable location of additional points. Let's sample another. 
$$p(x|y) = \mathcal{N}(\mu_x + \Sigma_{xy}\Sigma_y^{-1}(y-\mu_y), \Sigma_x-\Sigma_{xy}\Sigma_y^{-1}\Sigma_{xy}^T)$$ _____no_output_____ <code> m, s = conditional([-0.7], x, y, θ) print('m, s:', m, s) y2 = np.random.normal(m, s) print('y2:', y2)m, s: 2.633608839077871e-07 0.9999999999997189 y2: -0.1382640378102619 </code> This point is added to the realization, and can ve used to further update the location of the next point._____no_output_____ <code> if len(x) < 2: x.append(-0.7) y.append(y2) assert len(x) == 2_____no_output_____σ_2 = exponential_cov(x, x, θ) print('σ_2', σ_2) predictions = [predict(i, x, exponential_cov, θ, σ_2, y) for i in x_pred] print('predictions:', predictions[:3])σ_2 [[1.0000000e+00 5.3020612e-07] [5.3020612e-07 1.0000000e+00]] predictions: [(-4.504234747533777e-13, 1.0), (-5.170533045927396e-13, 1.0), (-5.93325427072544e-13, 1.0)] y_pred, sigmas = np.transpose(predictions) plt.errorbar(x_pred, y_pred, yerr=sigmas, capsize=0) plt.plot(x, y, "ro") plt.xlim(-3, 3) plt.ylim(-3, 3)_____no_output_____ </code> Sampling sequentially is just a heuristic to demonstrate how the covariance structure works. We can just as easily sample several points at once:_____no_output_____ <code> x_more = [-2.1, -1.5, 0.3, 1.8, 2.5] mu, s = conditional(x_more, x, y, θ) print('mu, s:', mu, s) y_more = np.random.multivariate_normal(mu, s) y_moremu, s: [-7.66697664e-06 -5.63595765e-03 4.19316345e-02 2.02471666e-02 6.46090979e-06] [[ 9.99999997e-01 1.65296628e-01 -3.73627090e-07 1.19843900e-12 3.82424663e-16] [ 1.65296628e-01 9.98338443e-01 -2.74559569e-04 8.80965650e-10 2.81118181e-13] [-3.73627090e-07 -2.74559569e-04 9.92508018e-01 -3.50450933e-03 -1.12241541e-06] [ 1.19843900e-12 8.80965650e-10 -3.50450933e-03 9.98338443e-01 8.62930563e-02] [ 3.82424663e-16 2.81118181e-13 -1.12241541e-06 8.62930563e-02 1.00000000e+00]] if len(x) == 2: x += x_more y += y_more.tolist() σ_new = exponential_cov(x, x, θ) predictions = [predict(i, x, exponential_cov, θ, σ_new, y) for i in x_pred] y_pred, sigmas = np.transpose(predictions) plt.errorbar(x_pred, y_pred, yerr=sigmas, capsize=0) plt.plot(x, y, "ro") plt.ylim(-3, 3)_____no_output_____ </code> So as the density of point becomes high, the result will be one realization **function** from the prior GP._____no_output_____## Fitting Gaussian Proccess in PyMC3 PyMC library alternatives to fit gaussian processes. * GPy * GPflow * Stan * Edward TODO: experiment with some of these libraries and enrich this document with additional insights._____no_output_____### Advantages of using PyMC3 on fitting Gaussian Processes * Fits Bayesian statistical model with Markov chain Monte Carlo, variotional inference and other algorithms. * Includes a large suite of well-docummented statistical distributions. * Creates summaries including tables and plots. * Includes several convergence diagnostics and model checking methods. * Extensible: easily incorporates custom step methods and unusual probability distributions. 
* MCMC loops can be embedded in larger programs._____no_output_____ <code> x = np.array([-5, -4.9, -4.8, -4.7, -4.6, -4.5, -4.4, -4.3, -4.2, -4.1, -4, -3.9, -3.8, -3.7, -3.6, -3.5, -3.4, -3.3, -3.2, -3.1, -3, -2.9, -2.8, -2.7, -2.6, -2.5, -2.4, -2.3, -2.2, -2.1, -2, -1.9, -1.8, -1.7, -1.6, -1.5, -1.4, -1.3, -1.2, -1.1, -1, -0.9, -0.8, -0.7, -0.6, -0.5, -0.4, -0.3, -0.2, -0.1, 0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 4, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, 5]) y = np.array([1.04442478194401, 0.948306088493654, 0.357037759697332, 0.492336514646604, 0.520651364364746, 0.112629866592809, 0.470995468454158, -0.168442254267804, 0.0720344402575861, -0.188108980535916, -0.0160163306512027, -0.0388792158617705, -0.0600673630622568, 0.113568725264636, 0.447160403837629, 0.664421188556779, -0.139510743820276, 0.458823971660986, 0.141214654640904, -0.286957663528091, -0.466537724021695, -0.308185884317105, -1.57664872694079, -1.44463024170082, -1.51206214603847, -1.49393593601901, -2.02292464164487, -1.57047488853653, -1.22973445533419, -1.51502367058357, -1.41493587255224, -1.10140254663611, -0.591866485375275, -1.08781838696462, -0.800375653733931, -1.00764767602679, -0.0471028950122742, -0.536820626879737, -0.151688056391446, -0.176771681318393, -0.240094952335518, -1.16827876746502, -0.493597351974992, -0.831683011472805, -0.152347043914137, 0.0190364158178343, -1.09355955218051, -0.328157917911376, -0.585575679802941, -0.472837120425201, -0.503633622750049, -0.0124446353828312, -0.465529814250314, -0.101621725887347, -0.26988462590405, 0.398726664193302, 0.113805181040188, 0.331353802465398, 0.383592361618461, 0.431647298655434, 0.580036473774238, 0.830404669466897, 1.17919105883462, 0.871037583886711, 1.12290553424174, 0.752564860804382, 0.76897960270623, 1.14738839410786, 0.773151715269892, 0.700611498974798, 0.0412951045437818, 0.303526087747629, -0.139399513324585, -0.862987735433697, -1.23399179134008, -1.58924289116396, -1.35105117911049, -0.990144529089174, -1.91175364127672, -1.31836236129543, -1.65955735224704, -1.83516148300526, -2.03817062501248, -1.66764011409214, -0.552154350554687, -0.547807883952654, -0.905389222477036, -0.737156477425302, -0.40211249920415, 0.129669958952991, 0.271142753510592, 0.176311762529962, 0.283580281859344, 0.635808289696458, 1.69976647982837, 1.10748978734239, 0.365412229181044, 0.788821368082444, 0.879731888124867, 1.02180766619069, 0.551526067300283]) N = len(y)_____no_output_____sns.regplot(x, y, fit_reg=False)_____no_output_____ </code> Along with the fit method, each supervised learning class retains a predict method that generates predicted outcomes ($y^*$) given a new set of predictors ($X^*$) distinct from those used to fit the model. For a Gaussian process, this is fulfulled by the posterior predictive distribution, which is the Gaussian process with the mean and covariance functions updated to their posterior forms, after having been fit. $$p(y^*|y, x, x^*) = \mathcal{GP}(m^*(x^*), k^*(x^*))$$ where the posterior mean and covariance functions are calculated as: $$\begin{aligned} m^*(x^*) &amp;= k(x^*,x)^T[k(x,x) + \sigma^2I]^{-1}y \\ k^*(x^*) &amp;= k(x^*,x^*)+\sigma^2 - k(x^*,x)^T[k(x,x) + \sigma^2I]^{-1}k(x^*,x) \end{aligned}$$ Covariance functions PyMC3 includes a library of covariance functions to choose from. A flexible choice to start with is the Matèrn covariance. 
$$k_{M}(x) = \frac{\sigma^2}{\Gamma(\nu)2^{\nu-1}} \left(\frac{\sqrt{2 \nu} x}{l}\right)^{\nu} K_{\nu}\left(\frac{\sqrt{2 \nu} x}{l}\right)$$ where where $\Gamma$ is the gamma function and $K$ is a modified Bessel function. The form of covariance matrices sampled from this function is governed by three parameters, each of which controls a property of the covariance. **amplitude** ($\sigma$) controls the scaling of the output along the y-axis. This parameter is just a scalar multiplier, and is therefore usually left out of implementations of the Matèrn function (i.e. set to one) **lengthscale** ($l$) complements the amplitude by scaling realizations on the x-axis. Larger values make points appear closer together. **roughness** ($\nu$) controls the sharpness of ridges in the covariance function, which ultimately affect the roughness (smoothness) of realizations. Though in general all the parameters are non-negative real-valued, when $\nu = p + 1/2$ for integer-valued $p$, the function can be expressed partly as a polynomial function of order $p$ and generates realizations that are $p$-times differentiable, so values $\nu \in \{3/2, 5/2\}$ are extremely common. To provide an idea regarding the variety of forms or covariance functions, here's small selection of available ones:_____no_output_____ <code> X = np.linspace(0, 2, 200)[:, None] # fucntion to display covariance matrices def plot_cov(X, K, stationary=True): K = K + 1e-8*np.eye(X.shape[0]) x = X.flatten() with sns.axes_style("white"): fig = plt.figure(figsize=(14,5)) ax1 = fig.add_subplot(121) m = ax1.imshow(K, cmap="inferno", interpolation='none', extent=(np.min(X), np.max(X), np.max(X), np.min(X))); plt.colorbar(m); ax1.set_title("Covariance Matrix") ax1.set_xlabel("X") ax1.set_ylabel("X") ax2 = fig.add_subplot(122) if not stationary: ax2.plot(x, np.diag(K), "k", lw=2, alpha=0.8) ax2.set_title("The Diagonal of K") ax2.set_ylabel("k(x,x)") else: ax2.plot(x, K[:,0], "k", lw=2, alpha=0.8) ax2.set_title("K as a function of x - x'") ax2.set_ylabel("k(x,x')") ax2.set_xlabel("X") fig = plt.figure(figsize=(14,4)) ax = fig.add_subplot(111) samples = np.random.multivariate_normal(np.zeros(200), K, 5).T; for i in range(samples.shape[1]): ax.plot(x, samples[:,i], color=cmap.inferno(i*0.2), lw=2); ax.set_title("Samples from GP Prior") ax.set_xlabel("X")_____no_output_____## Quradratic exponential covariance with pm.Model() as model: l = 0.2 tau = 2.0 b = 0.5 cov = b + tau * pm.gp.cov.ExpQuad(1, l) K = theano.function([], cov(X))() plot_cov(X, K) _____no_output_____ </code> ### Matern $\nu=3/2$ covariance_____no_output_____ <code> with pm.Model() as model: l = 0.2 tau = 2.0 cov = tau * pm.gp.cov.Matern32(1, 1) K = theano.function([], cov(X))() plot_cov(X, K)_____no_output_____ </code> ### Cosine covariance_____no_output_____ <code> with pm.Model() as model: l = 0.2 tau = 2.0 cov = tau * pm.gp.cov.Cosine(1, 1) K = theano.function([], cov(X))() plot_cov(X, K)_____no_output_____def squared_distance(x, y): return np.array([[(x[i] -y[j]) ** 2 for i in range(len(x))] for j in range(len(y))]) _____no_output_____N = len(y) with pm.Model() as gp_fit: μ = np.zeros(N) η_sq = pm.HalfCauchy('η_sq', 5) ρ_sq = pm.HalfCauchy('ρ_sq', 5) σ_sq = pm.HalfCauchy('σ_sq', 5) D = squared_distance(x, x) # Squared exponential Σ = tt.fill_diagonal(η_sq * tt.exp(-ρ_sq * D), η_sq + σ_sq) obs = pm.MvNormal('obs', μ, Σ, observed=y)_____no_output_____from theano.tensor.nlinalg import matrix_inverse with gp_fit: # Prediction over grid xgrid = np.linspace(-6, 6) D_pred = 
squared_distance(xgrid, xgrid) D_off_diag = squared_distance(x, xgrid) # Covariance matrices for prediction Σ_pred = η_sq * tt.exp(-ρ_sq * D_pred) Σ_off_diag = η_sq * tt.exp(-ρ_sq * D_off_diag) # Posterior mean μ_post = pm.Deterministic('μ_post', tt.dot(tt.dot(Σ_off_diag, matrix_inverse(Σ)), y)) # Posterior covariance Σ_post = pm.Deterministic('Σ_post', Σ_pred - tt.dot(tt.dot(Σ_off_diag, matrix_inverse(Σ)), Σ_off_diag.T))_____no_output_____ </code> Here, we will use variational inference to fit the model, namely automatic differentiation variational inference._____no_output_____ <code> with gp_fit: approx = pm.fit(1000, method='fullrank_advi') gp_trace = approx.sample(1000)_____no_output_____pm.traceplot(gp_trace, var_names=['η_sq', 'ρ_sq', 'σ_sq']);_____no_output_____y_pred = [np.random.multivariate_normal(m, S) for m, S in zip(gp_trace['μ_post'], gp_trace['Σ_post'])]_____no_output_____for yp in y_pred: plt.plot(np.linspace(-6, 6), yp, 'c-', alpha=0.1); plt.plot(x, y, 'r.') plt.ylim(-5, 5)_____no_output_____ </code> Now let's use PyMC's purpose-built GP classes to fit the same model:_____no_output_____ <code> with pm.Model() as gp_fit: p = pm.HalfCauchy('ρ', 1) η = pm.HalfCauchy('η', 1) K = η * pm.gp.cov.Matern32(1, p) _____no_output_____ </code> We can continue to build upon our model by speficying a mean function (this is redundant here, since a zero function is assumed when not specified) and an observation noise variable, which we will give a half-Cauchy prior:_____no_output_____ <code> with gp_fit: M = pm.gp.mean.Zero() σ = pm.HalfCauchy('σ', 2.5)_____no_output_____X = x.reshape(-1, 1) with gp_fit: gp = pm.gp.Marginal(mean_func=M, cov_func=K) y_obs = gp.marginal_likelihood('y_obs', X=X, y=y, noise=σ)/usr/local/lib/python3.6/dist-packages/theano/tensor/basic.py:6611: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result. result[diagonal_slice] = x with gp_fit: trace = pm.sample(1000, tune=1000)Auto-assigning NUTS sampler... Initializing NUTS using jitter+adapt_diag... /usr/local/lib/python3.6/dist-packages/theano/tensor/basic.py:6611: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result. result[diagonal_slice] = x WARNING (theano.tensor.blas): We did not find a dynamic library in the library_dir of the library we use for blas. If you use ATLAS, make sure to compile it with dynamics library. WARNING (theano.tensor.blas): We did not find a dynamic library in the library_dir of the library we use for blas. If you use ATLAS, make sure to compile it with dynamics library. /usr/local/lib/python3.6/dist-packages/theano/tensor/basic.py:6611: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result. 
result[diagonal_slice] = x Sequential sampling (2 chains in 1 job) NUTS: [σ, η, ρ] 0%| | 0/2000 [00:00<?, ?it/s]/usr/local/lib/python3.6/dist-packages/theano/tensor/basic.py:6611: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result. result[diagonal_slice] = x 100%|██████████| 2000/2000 [01:24<00:00, 21.72it/s] 100%|██████████| 2000/2000 [01:11<00:00, 28.00it/s] axes = pm.traceplot(trace, varnames=['ρ', 'σ', 'η'])/usr/local/lib/python3.6/dist-packages/pymc3/plots/__init__.py:40: UserWarning: Keyword argument `varnames` renamed to `var_names`, and will be removed in pymc3 3.8 warnings.warn('Keyword argument `{old}` renamed to `{new}`, and will be removed in pymc3 3.8'.format(old=old, new=new)) /usr/local/lib/python3.6/dist-packages/theano/tensor/basic.py:6611: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result. result[diagonal_slice] = x </code> In addition to fitting the model, we would like to be able to generate predictions. This implies sampling from the posterior predictive distribution, which if you recall is just some linear algebra: $$\begin{aligned} m^*(x^*) &amp;= k(x^*,x)^T[k(x,x) + \sigma^2I]^{-1}y \\ k^*(x^*) &amp;= k(x^*,x^*)+\sigma^2 - k(x^*,x)^T[k(x,x) + \sigma^2I]^{-1}k(x^*,x) \end{aligned}$$ PyMC3 allows for predictive sampling after the model is fit, using the recorded values of the model parameters to generate samples. The sample_ppc function implements the predictive GP above, called with the sample trace, a grid of points over which to generate realizations, and a conditional GP on these points:_____no_output_____ <code> Z = np.linspace(-6, 6, 100).reshape(-1, 1) with gp_fit: y_pred = gp.conditional('y_pred', Z, pred_noise=True) y_samples = pm.sample_ppc(trace, vars=[y_pred], samples=10)/usr/local/lib/python3.6/dist-packages/theano/tensor/basic.py:6611: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result. result[diagonal_slice] = x /usr/local/lib/python3.6/dist-packages/theano/tensor/basic.py:6611: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result. result[diagonal_slice] = x /usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:5: DeprecationWarning: sample_ppc() is deprecated. Please use sample_posterior_predictive() """ 0%| | 0/10 [00:00<?, ?it/s]/usr/local/lib/python3.6/dist-packages/theano/tensor/basic.py:6611: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result. 
result[diagonal_slice] = x /usr/local/lib/python3.6/dist-packages/theano/tensor/basic.py:6611: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result. result[diagonal_slice] = x 100%|██████████| 10/10 [00:05<00:00, 1.86s/it] fig, ax = plt.subplots(figsize=(14,5)) [ax.plot(Z, x, color='SeaGreen', alpha=0.3) for x in y_samples['y_pred']] # overlay the observed data ax.plot(X, y, 'o', color="k", ms=10); ax.set_xlabel("x"); ax.set_ylabel("f(x)"); ax.set_title("Posterior predictive distribution")_____no_output_____ </code> For models being fit to very large datasets, one often finds MCMC fitting to work too slowly, as the log-probability of the model needs to be evaluated at every iteration of the sampling algorithm. In these situations, it may be worth using variational inference methods, which replace the true posterior with a simpler approximation, and use optimization to parameterize the approximation so that it is as close as possible to the target distribution. Thus, the posterior is only an approximation, and sometimes an unacceptably coarse one, but is a viable alternative for many problems. Newer variational inference algorithms are emerging that improve the quality of the approximation, and these will eventually find their way into software. In the meantime, Variational Gaussian Approximation and Automatic Differentiation Variational Inference are available now in GPflow and PyMC3, respectively. Let's fit the same model using just variational inference (ADVI):_____no_output_____ <code> with gp_fit: approx = pm.fit(20000) trace_advi = approx.sample(1000, include_transformed=True) y_samples_advi = pm.sample_ppc(trace_advi, vars=[y_pred], samples=10) 0%| | 0/20000 [00:00<?, ?it/s]/usr/local/lib/python3.6/dist-packages/theano/tensor/basic.py:6611: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result. result[diagonal_slice] = x Average Loss = 85.067: 100%|██████████| 20000/20000 [06:03<00:00, 54.97it/s] Finished [100%]: Average Loss = 85.054 /usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:4: DeprecationWarning: sample_ppc() is deprecated. Please use sample_posterior_predictive() after removing the cwd from sys.path. 100%|██████████| 10/10 [00:00<00:00, 2976.16it/s] fig, ax = plt.subplots(figsize=(14,5)) [ax.plot(Z, x, color='SeaGreen', alpha=0.3) for x in y_samples_advi['y_pred']] # overlay the observed data ax.plot(X, y, 'o', color="k", ms=10); ax.set_xlabel("x"); ax.set_ylabel("f(x)"); ax.set_title("Posterior predictive distribution")_____no_output_____ </code> ### Real-world example: Spawning salmon That was contrived data; let's try applying Gaussian processes to a real problem. The plot below shows the relationship between the number of spawning salmon in a particular stream and the number of fry that are recruited into the population in the spring. 
We would like to model this relationship, which appears to be non-linear (we have biological knowledge that suggests it should be non-linear too)._____no_output_____ <code> salmon_data = pd.read_table('colab-notebooks/data/salmon.txt', sep='\s+', index_col=0)
salmon_data.plot.scatter(x='spawners', y='recruits', s=50)
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:1: FutureWarning: read_table is deprecated, use read_csv instead.
  """Entry point for launching an IPython kernel.
# One possible completion of the unfinished model cell, reusing the Marginal GP
# pattern from the cells above (Matern32 covariance, zero mean function,
# noisy marginal likelihood); the prior scales are taken from the earlier model.
X_salmon = salmon_data['spawners'].values[:, None].astype(float)
y_salmon = salmon_data['recruits'].values.astype(float)

with pm.Model() as salmon_model:
    ρ = pm.HalfCauchy('ρ', 1)
    η = pm.HalfCauchy('η', 1)
    M = pm.gp.mean.Zero()
    K = η * pm.gp.cov.Matern32(1, ρ)
    gp = pm.gp.Marginal(mean_func=M, cov_func=K)
    σ = pm.HalfCauchy('σ', 2.5)
    recruits = gp.marginal_likelihood('recruits', X=X_salmon, y=y_salmon, noise=σ)_____no_output_____ </code>
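Assuming the model cell above is completed with the `gp.Marginal` pattern used earlier in this notebook, the hedged sketch below (not part of the original notebook) shows how the fitted salmon model could be sampled and its posterior predictive drawn over a grid of spawner values; the grid resolution and sample counts are arbitrary choices._____no_output_____ <code> with salmon_model:
    salmon_trace = pm.sample(1000, tune=1000)

# grid of spawner values spanning the observed range
X_grid = np.linspace(salmon_data['spawners'].min(),
                     salmon_data['spawners'].max(), 100)[:, None]

with salmon_model:
    # conditional GP over the grid, including observation noise
    recruits_pred = gp.conditional('recruits_pred', X_grid, pred_noise=True)
    salmon_samples = pm.sample_ppc(salmon_trace, vars=[recruits_pred], samples=10)

fig, ax = plt.subplots(figsize=(14, 5))
for draw in salmon_samples['recruits_pred']:
    ax.plot(X_grid, draw, color='SeaGreen', alpha=0.3)
ax.plot(salmon_data['spawners'], salmon_data['recruits'], 'o', color='k', ms=10)
ax.set_xlabel('spawners')
ax.set_ylabel('recruits')
ax.set_title('Posterior predictive distribution (salmon data)')_____no_output_____ </code>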
{ "repository": "martin-fabbri/colab-notebooks", "path": "01_fitting_gaussian_process_model.ipynb", "matched_keywords": [ "Salmon" ], "stars": 8, "size": 990567, "hexsha": "4817b7330a89fd429f9aa6377101156b71bd32e4", "max_line_length": 166476, "avg_line_length": 531.7053140097, "alphanum_fraction": 0.9219931615 }
# Notebook from bmcs-group/bmcs_tutorial Path: tour3_nonlinear_bond/3_1_nonlinear_bond.ipynb <a id="top"></a> # **3.1 Nonlinear bond - softening and hardening** [![title](../fig/bmcs_video.png)](https://moodle.rwth-aachen.de/mod/page/view.php?id=551816)&nbsp; part 1_____no_output_____<div style="background-color:lightgray;text-align:left"> <img src="../icons/start_flag.png" alt="Previous trip" width="40" height="40"> &nbsp; &nbsp; <b>Starting point</b> </div> _____no_output_____By saying that we want to capture the _material behavior_ we mean that we realistically describe the **constitutive relation** between the strain and stress which is **valid for any material point** of the considered volume. With the focus on a one-dimensional interface between two material components we can reduce this task to the relation between bond stress and slip. In Tour 2, we assumed the constitutive bond-slip relation constant. However, as we have learned in trip [2.1 Pull-out of elastic fiber from rigid matrix](../pull_out/2_1_1_PO_observation.ipynb) this stick-slip interface behavior cannot realistically describe the experimentally measured response of steel-concrete pull-out with varied length of the bond length $L_b$._____no_output_____<div style="background-color:lightgray;text-align:left"> <img src="../icons/destination.png" alt="Previous trip" width="40" height="40"> &nbsp; &nbsp; <b>Where are we heading</b> </div> _____no_output_____To improve the quality of the model, in this notebook we introduce and investigate more complex shapes of bond slip laws and their effect on the observed pullout response. This extension will enable a more **realistic prediction of a wide range of pull-out and crack bridge tests**, including steel rebars, carbon textile fabrics or carbon fiber reinforced polymer (CFRP) sheets. Using the models, we will perform automated studies of the pull-out response that can demonstrate the different phenomenology behind hardening and softening constitutive behavior. These studies indicate how validated models can support the definition of engineering design rules. _____no_output_____To proceed in small steps we consider two shapes of constant bond-slip law, referred to as **bond-hardening and bond softening**. ![image.png](attachment:9086d2ee-b436-406a-aae8-44f90e71f5b3.png)_____no_output_____The increasing/hardening or decreasing/softening trend of the bond-slip law in the second branch introduces the question, what kind of **material structure** within the bond zone can induce such type of behavior. An example of an idealized bond system leading to hardening or softening can be provided using a rough surface with an increasing or decreasing number of asperities. A more detailed classification of the bond systems will be shown in Tour 3 which provides a more physically based description of the debonding process. The question studied in this notebook is **what is the qualitative effect of the second bond-slip slope on the pull-out response.**_____no_output_____# **Numerical support necessary** To solve a pullout problem for a generally nonlinear bond-slip law, we have to solve the initial boundary value problem numerically. In this notebook, we will use a finite-element code implemented within the BMCS tool to study the behavior for two examples of qualitatively different bond-slip laws. 
_____no_output_____**Solution algorithm:** To study the effect of the nonlinear bond-slip law on the pullout response we will use the finite-element method, solving the nonlinear response of the pull-out test by stepping through the loading history. Let us therefore briefly touch the topic of the solution algorithm needed to solve such a nonlinear boundary value problem of continuum mechanics. Generally, a non-linear finite element solver includes the solution of two separate tasks:
- A **time stepping** algorithm that identifies the material state variables satisfying the constitutive law for a prescribed load increment in all points of the domain, using an iterative Newton-type algorithm.
- A **spatial solver** that finds the distribution of the displacement field satisfying the equilibrium, compatibility and boundary conditions, using the finite-element discretization._____no_output_____<div style="background-color:lightgray;text-align:left"> <img src="../icons/bus.png" alt="Diver" width="40" height="40"> &nbsp; &nbsp; <b>Short sidetrip</b> </div> ## Time-stepping - Newton method to solve a set of nonlinear equations The Newton method is the basis of all nonlinear time-stepping algorithms used in finite-element codes. Let us explain the solution procedure by considering a very short bond length $L_\mathrm{b}$ and denote it as a material point $m$ for which a constant profile of the shear stress $\tau(x) = \tau_m$ and slip $s(x) = s_m$ can be assumed. ![image.png](attachment:fcf8bea4-2c06-4931-b39d-47e53c0d6dda.png) The iterative time-stepping algorithm with increasing load levels can now be displayed for a single unknown displacement variable $w$ which must satisfy the equilibrium condition $\bar{P}(t) = P(w)$, where $\bar{P}(t)$ is a prescribed history of loading. A simple implementation of the time stepping procedure, exemplifying the solution of a nonlinear equation, is provided for an interested tourist in the Annex notebook [A.2 Newton method](../extras/newton_method.ipynb). ![image.png](../fig/newton_iteration.png) In a real simulation of the pull-out problem, the unknown variable is not the slip; the displacement fields $u_\mathrm{m}, u_\mathrm{f}$ are the primary unknowns. They are transformed to the corresponding component strains $\varepsilon_\mathrm{m}=u_{\mathrm{m},x}, \varepsilon_\mathrm{f}=u_{\mathrm{f},x}$, and the slip $s = u_\mathrm{m} - u_\mathrm{f}$. In the following examples, the component strains are still assumed linear elastic, while the bond/shear stress is assumed generally nonlinear. With the known stress fields, the corresponding forces are obtained using numerical integration, which delivers the residuum of the global equilibrium condition. The solution scheme described for a single variable in the notebook [A.2](../extras/newton_method.ipynb#newton_iteration_example) remains the same._____no_output_____<div style="background-color:lightgray;text-align:left"> <img src="../icons/diver.png" alt="Diver" width="40" height="40"> &nbsp; &nbsp; <b>Deep dive</b> </div>_____no_output_____## Spatial solver - boundary value problem solved using the finite element method The identification of the displacements within each equilibrium iteration includes the same conditions that we have applied to derive the analytical solution of the pull-out problem with a constant bond slip law. However, the discrete solution satisfies the equilibrium conditions only approximately, in a _weak sense_.
This means that the local differential equilibrium condition is not satisfied everywhere, but only in the integration points._____no_output_____To provide insight into the way finite-element tools solve the problem, an open implementation of the nonlinear solver used in this and later notebooks is described completely, with a running example, plots and animation, in the notebook [A.3 Finite element solver for a pull-out problem](../extras/pullout1d.ipynb). This notebook is an Annex to the course and is meant for ambitious adventurers who want to see how most finite-element programs available on the market are implemented. A detailed explanation of the theoretical background is provided in the Master's courses on linear structural analysis, covering the theoretical background of the finite-element method, and on nonlinear structural analysis._____no_output_____<div style="background-color:lightgray;text-align:left"> <img src="../icons/binoculars.png" alt="Traveler in a hurry" width="40" height="40"> &nbsp; &nbsp; <b>Distant view</b> </div>_____no_output_____## Example of the finite-element pull-out simulation To understand the functionality of the finite-element model implemented in the referenced notebook [A.3](../extras/pullout1d.ipynb), its output is provided here in the form of the pull-out curve and of the fields along the bond zone. The applied boundary conditions are given as follows: the free length $L_\mathrm{f}=0$, and the matrix is supported at the loaded end._____no_output_____![image.png](attachment:image.png)_____no_output_____ <code> from IPython.display import HTML
html_video_file = open('../extras/pull_out_animation.html','r')
HTML(html_video_file.read())_____no_output_____ </code> ## What constitutive law can induce such a debonding process?_____no_output_____A closer look at the simulated evolution of the shear stress along the bond zone in the bottom-right diagram provides an important phenomenological observation. The level of shear increases at the right, loaded end in the first stage. After reaching the peak shear stress of $N = 2~\mathrm{N}$, it diminishes slowly to a low value of approximately 0.1 N. The constitutive law valid at each material point thus has a first ascending and a second descending branch. This kind of behavior is called **softening**. Constitutive behavior exhibiting softening has a severe impact on the structural behavior by introducing the phenomenon of strain localization into discrete shear and tensile cracks, accompanied by stress redistribution during the debonding or crack propagation process.
The pull-out problem can be conveniently used to visualize the correspondence between the **softening** material law and the structural response with a debonding propagation, as opposed to **hardening** material law._____no_output_____# **Setting up the model components - new material model**_____no_output_____For the purpose of this comparison, let us introduce a simple piece-wise linear bond-slip law, that can be inserted into the non-linear finite-element code to investigate the effect of the type of nonlinearity on the pull-out response._____no_output_____<a id="trilinear_material_model"></a> <div style="background-color:lightgray;text-align:left"> <img src="../icons/work.png" alt="Coding intermezzo" width="40" height="40"> &nbsp; &nbsp; <b>Coding intermezzo</b> </div>_____no_output_____## Construct a material model with tri-linear bond-slip law To indicate how the below examples are implemented let us define a a piece-wise linear function with three branches constituting the bond-slip behavior. It can be used to exemplify how to implement material models in standard non-linear finite-element codes for structural analysis. In codes like `ANSYS, Abaqus, ATENA, Diana`, the spatial integration of the stresses and stiffnesses is based on the so called **predictor**, **corrector** scheme._____no_output_____This simply means that the material model must provide two functions 1. the stress evaluation for a given strain increment 2. the derivative of stress with respect to the strain increment, i.e. the material stiffness. In our case of a bond-slip law, we need to provide two functions \begin{align} \tau(s) \\ \frac{\mathrm{d} \tau}{ \mathrm{d} s} \end{align}._____no_output_____**Let's import the packages:**_____no_output_____ <code> %matplotlib widget import sympy as sp # symbolic algebra package import numpy as np # numerical package import matplotlib.pyplot as plt # plotting package sp.init_printing() # enable nice formating of the derived expressions_____no_output_____ </code> The tri-linear function can be readily constructed using the already known `Piecewise` function provied in `sympy`_____no_output_____ <code> s = sp.symbols('s') tau_1, s_1, tau_2, s_2 = sp.symbols(r'tau_1, s_1, tau_2, s_2') tau_s = sp.Piecewise( (tau_1 / s_1 * s, s <= s_1), # value, condition (tau_1 + (tau_2-tau_1) / (s_2-s_1) * (s - s_1), s <= s_2), # value, condition (tau_2, True) # value, otherwise ) tau_s_____no_output_____ </code> The derivative is obtained as_____no_output_____ <code> d_tau_s = sp.diff(tau_s, s) d_tau_s_____no_output_____ </code> <div style="background-color:lightgray;text-align:left"> <img src="../icons/evaluate.png" alt="Evaluate" width="40" height="40"> &nbsp; &nbsp; <b>How to get numbers?</b> </div>_____no_output_____**The above results are symbols! How to transform them to numbers and graphs?** `sympy` offers the possibility to generate executable code from symbolic expression (`C`, `Fortran`, or `Python`). To get `Python` functions that accept the characteristic points `tau_1`, `tau_2`, `s_1`, `s_2` and evaluating the above defined expressions `tau_s` and `d_tau_s`, we need the following two lines:_____no_output_____ <code> get_tau_s = sp.lambdify((s, tau_1, tau_2, s_1, s_2), tau_s, 'numpy') get_d_tau_s = sp.lambdify((s, tau_1, tau_2, s_1, s_2), d_tau_s, 'numpy')_____no_output_____ </code> The parameter `numpy` enables us to evaluate both functions for arrays of values, not only for a single number. 
As a result, an array of slip values can be directly sent to the function `get_tau_s` to obtain an array of corresponding stresses_____no_output_____ <code> get_tau_s(np.array([0, 0.5, 1, 1.5, 2]), 1, 0.1, 1, 2)_____no_output_____ </code> <div style="background-color:lightgray;text-align:left"> <img src="../icons/view.png" alt="Evaluate" width="40" height="40"> &nbsp; &nbsp; <b>How to to plot it?</b> </div>_____no_output_____Let us now show that the implemented bond-slip function provides a sufficient range of qualitative shapes to demonstrate and discuss the effect of softening and hardening behavior of the interface material. Let us setup a figure `fig` with two axes `ax1` and `ax2` to verify if the defined function is implemented correctly_____no_output_____ <code> fig, (ax1, ax2) = plt.subplots(1,2, figsize=(8,3), tight_layout=True) fig.canvas.header_visible = False s_range = np.linspace(0, 3, 1050) for tau_2 in [0, 0.5, 1, 1.5, 2]: ax1.plot(s_range, get_tau_s(s_range, 1, tau_2, 0.1, 2)); ax2.plot(s_range, get_d_tau_s(s_range, 1, tau_2, 0.1, 2)); ax1.set_xlabel(r'$s$ [mm]'); ax1.set_ylabel(r'$\tau$ [MPa]'); ax2.set_xlabel(r'$s$ [mm]'); ax2.set_ylabel(r'$\mathrm{d}\tau/\mathrm{d}s$ [MPa/mm]');_____no_output_____ </code> ## Preconfigured pullout model provided in BMCS Tool Suite The presented function is the simplest model provided in a general-purpose nonlinear finite-element simulator `BMCS-Tool-Suite`. The package `bmcs_cross_section` provides several preconfigured models that can be used to analyze and visualize the behavior of a composite cross-section. The analysis of the pullout problem discussed here can be done using the class `PullOutModel1D` that can be imported as follows_____no_output_____ <code> from bmcs_cross_section.pullout import PullOutModel1D_____no_output_____ </code> An instance of the pullout model can be constructed using the following line_____no_output_____ <code> po = PullOutModel1D()_____no_output_____ </code> For convenience, let us summarize the model parameters before showing how to assign them to the model instance_____no_output_____**Geometrical variables:** | Python | Parameter | Description | | :- | :-: | :- | | `A_f` | $A_\mathrm{f}$ | Cross section area modulus of the reinforcement | | `A_m` | $A_\mathrm{m}$ | Cross section area modulus of the matrix | | `P_b` | $p_\mathrm{b}$ | Perimeter of the reinforcement | | `L_b` | $L_\mathrm{b}$ | Length of the bond zone of the pulled-out bar | **Material parameters of a tri-linear bond law:** | Python | Parameter | Description | | :- | :-: | :- | | `E_f` | $E_\mathrm{f}$ | Young's modulus of the reinforcement | | `E_m` | $E_\mathrm{m}$ | Young's modulus of the matrix | | `tau_1` | $\tau_1$ | bond strength | | `tau_2` | $\tau_2$ | bond stress at plateu | | `s_1` | $s_1$ | slip at bond strengh | | `s_2` | $s_1$ | slip at plateau stress |_____no_output_____**Fixed support positions:** | Python | | :- | | `non-loaded end (matrix)` | | `loaded end (matrix)` | | `non-loaded end (reinf)` | | `clamped left` |_____no_output_____Even more conveniently, let us render the interaction window generated by the model to directly see the structure and the naming of the parameters_____no_output_____ <code> po.interact()_____no_output_____ </code> The tree structure at the top-left frame shows the individual model components. Parameters of each component are shown in the bottom-left frame. By nagivating through tree, the parameter frame and the plotting frame are updated to see the corresponding part of the model. 
The control bar at the bottom can be used to start, stop and reset the simulation._____no_output_____**Example interaction:** Develop some confidence into the correctness of the model. Change the stiffness of the components such that they have the same area and stiffness modulus. Run the simulation and watch the profile of the shear flow along the bond length. Increase the bond length, reset the calculation and run it anew. Change the position support and verify the profile of the displacements._____no_output_____# **Studies 1: Hardening bond-slip law**_____no_output_____[![title](../fig/bmcs_video.png)](https://moodle.rwth-aachen.de/mod/page/view.php?id=551816)&nbsp; part 2_____no_output_____## RILEM Pull-Out Test revisited_____no_output_____![image.png](attachment:image.png)_____no_output_____ <code> po_rilem = PullOutModel1D(n_e_x=300, w_max=0.12) # n_e_x - number of finite elements along the bond zone po_rilem.n_e_x=400Exception occurred in traits notification handler for object: t_n: 0, t_n1: 0 U_n[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] U_k[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 
To configure the model such that it reflects the RILEM test we can either use the interactive editor above, or assign the attributes directly. As apparent from the editor frame above, the attributes `fixed_boundary` and `material_model` are dropdown boxes offering several options. To assign these parameters we can use the following scheme:
- assign one of the options available in the dropdown box to the attribute `attribute` as a string,
- the option object is then available as an attribute with the name `attribute_`, i.e. with a trailing underscore.

Thus, to define a trilinear bond-slip law we can proceed as follows_____no_output_____
 <code>
po_rilem.material_model = 'trilinear' # polymorphic attribute - there are several options to be chosen from
# set the parameters of the above defined tri-linear bond-slip law - add also the matrix and fiber stiffness
po_rilem.material_model_.E_m = 28000 # [MPa]
po_rilem.material_model_.E_f = 210000 # [MPa]
po_rilem.material_model_.tau_1 = 4
po_rilem.material_model_.s_1 = 1e-3
po_rilem.material_model_.tau_2 = 8
po_rilem.material_model_.s_2 = 0.12
 </code>
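As an optional check, the just defined bond-slip law can be plotted directly before running any simulation. This small cell is only an illustration; it uses the same `plot` method of the material model and the `matplotlib` axes handling that appear in the study cells further below (with `plt` already imported in the earlier part of the notebook)._____no_output_____
 <code>
# optional check: visualize the trilinear bond-slip law defined above
fig, ax_bs = plt.subplots(1, 1, figsize=(4,3), tight_layout=True)
fig.canvas.header_visible = False
po_rilem.material_model_.plot(ax_bs) # bond stress versus slip
 </code>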
To set several parameters of the model component at once, the `trait_set` method can be used as an alternative to one-by-one assignment_____no_output_____
 <code>
d = 16.0 # [mm]
po_rilem.cross_section.trait_set(A_m=100*100, A_f=3.14*(d/2)**2, P_b=3.14*d)
po_rilem.geometry.L_x = 5*d
po_rilem.fixed_boundary = 'loaded end (matrix)'
 </code>
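For comparison, the same cross-section parameters could also be assigned one attribute at a time; the `trait_set` call above is just the more compact form. The following cell is an equivalent illustration and does not change the model state._____no_output_____
 <code>
# equivalent one-by-one assignment of the cross-section parameters set above
po_rilem.cross_section.A_m = 100*100       # matrix area [mm^2]
po_rilem.cross_section.A_f = 3.14*(d/2)**2 # reinforcement area [mm^2]
po_rilem.cross_section.P_b = 3.14*d        # contact perimeter [mm]
 </code>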
The configured model can be rendered anytime as a web-app to check the input parameters and to adjust them._____no_output_____
 <code>
po_rilem.run()
po_rilem.interact()
 </code>
## Bond-slip law calibration/validation

**Can we find just one material law that predicts all three tests?**
- The preconfigured bond-slip law with an ascending branch after reaching the strength of $\tau_1 = 4$ MPa, i.e. with the parameters $\tau_1 = 4$ MPa, $\tau_2 = 8$ MPa, $s_1 = 0.001$ mm, $s_2 = 0.12$ mm, can reproduce the test with $d = 16$ mm and $L_b = 5d = 80$ mm.
- To see the prediction for the test with $L_b = 10d = 160$ mm, modify the parameter `geometry.L_x = 160`. The result shows a good match with the experimentally observed response._____no_output_____
**Can we compare the differences in one plot?**
- The interactive user interface is illustrative and provides a quick orientation in the scope and functionality of the model. Once we have learned its structure, we can use the programming interface to run simulations in a loop and plot them in a single graph to see a similar picture as in the output of the RILEM test above.
- Try to compare the third test with $d = 28$ mm and $L_b = 5d$._____no_output_____
<div style="background-color:lightgray;text-align:left"> <img src="../icons/step_by_step.png" alt="Step by step" width="40" height="40"> &nbsp; &nbsp; <b>Plot step by step</b> </div>_____no_output_____
 <code>
fig, (ax, ax_bond_slip) = plt.subplots(1,2, figsize=(8,3), tight_layout=True)
fig.canvas.header_visible = False
print('calculate d=16 mm, L=5d')
d = 16.0 # [mm]
po_rilem.cross_section.trait_set(A_m=100*100, A_f=3.14*(d/2)**2, P_b=3.14*d)
po_rilem.w_max = 0.12
po_rilem.geometry.L_x = 5*d
po_rilem.reset() # it is like pressing the reset button in the above window
po_rilem.run()   # like pressing the run button
po_rilem.history.plot_Pw(ax, color='blue')
print('calculate d=16 mm, L=10d')
d = 16.0 # [mm]
po_rilem.cross_section.trait_set(A_m=100*100, A_f=3.14*(d/2)**2, P_b=3.14*d)
po_rilem.w_max = 0.12
po_rilem.geometry.L_x = 10*d
po_rilem.reset()
po_rilem.run()
po_rilem.hist.plot_Pw(ax, color='red')
print('calculate d=28 mm, L=3d')
d = 28.0 # [mm]
po_rilem.cross_section.trait_set(A_m=100*100, A_f=3.14*(d/2)**2, P_b=3.14*d)
po_rilem.geometry.L_x = 3*d
po_rilem.w_max = 0.05
po_rilem.reset()
po_rilem.run()
po_rilem.hist.plot_Pw(ax, color='green')
po_rilem.material_model_.plot(ax_bond_slip)
# The code sequence can certainly be shortened by using a loop.
# It is deliberately omitted here as the focus is not on programming.
 </code>
## **Comments** on the study
- Note that the bond-slip law that can fit all three pull-out tests exhibits hardening.
- The maximum control displacement `w_max` is set equal to the one applied in the test, as no information beyond this value is provided by the tests.
- The trilinear bond-slip law does not give us the flexibility to reproduce the pull-out failure as it ends with a plateau.

## **Need for a more flexible bond-slip law**
- More flexibility is provided by a `multilinear` material model for which a list of `s_data` and `tau_data` can be specified.
- The `multilinear` material model is used in the following code to show how to achieve a pull-out failure by introducing a descending branch in the bond-slip law.
- Note that for bond-slip laws with a descending branch, convergence problems can occur when approaching the pullout failure. The convergence behavior can be improved by refining the spatial discretization given by the number of finite elements along the bond zone `n_e_x` and by the size of the time step `time_line.step`._____no_output_____
 <code>
fig, (ax, ax_bond_slip) = plt.subplots(1,2, figsize=(8,3), tight_layout=True)
fig.canvas.header_visible = False
d = 32.0 # [mm]
po_rilem.w_max = 0.12
po_rilem.time_line.step = 0.05
po_rilem.material_model = 'multilinear'
po_rilem.material_model_.trait_set(E_m=28000, E_f=210000, tau_data='0, 4, 6, 0, 0', s_data='0, 1e-3, 0.08, 0.12, 0.2')
po_rilem.geometry.L_x = 1*d
po_rilem.reset()
po_rilem.run()
po_rilem.hist.plot_Pw(ax, color='magenta')
po_rilem.material_model_.plot(ax_bond_slip)
 </code>
## **Questions:** Effect of bond length on the pullout response - **bond hardening**_____no_output_____
- The iterative trial and error fitting is tedious. **How to design a test from which we can directly obtain the bond-slip law?** Comparing the tests with $L_b = 5d$ and $L_b = 10d$, we recognize that the shorter bond length more closely resembles the shape of the bond-slip law. To verify this, set the bond length in the above example to $L_\mathrm{b} = 1d$.
- On the other hand, if we increase the length, the maximum pull-out force will increase. **How can we determine the bond length at which the steel bar will yield?** A simple and quick answer to this question can be provided by reusing the analytical pull-out model with a constant bond-slip law as a first approximation. The maximum achievable pull-out force of a test with an embedded length $L_\mathrm{b}$ is given as \begin{align} \label{EQ:MaxEmbeddedLength} P_{L} = \bar{\tau} p_\mathrm{b} L_\mathrm{b} \end{align} where $p_\mathrm{b}$ denotes the perimeter, equal in all experiments. The force at which the reinforcement attains the strength $\sigma_{\mathrm{f},\mathrm{mu}}$ and breaks is \begin{align} P_{\mathrm{f},\mathrm{mu}} = \sigma_{\mathrm{f},\mathrm{mu}} A_\mathrm{f} \end{align} so that the bond length at which the reinforcement will fail is obtained by requiring $P_L = P_{\mathrm{f},\mathrm{mu}}$, which renders \begin{align} \label{EQ:ConstantBondAnchorageLength} L_{\mathrm{b}} = \frac{\sigma_{\mathrm{f},\mathrm{mu}} A_\mathrm{f} } {\bar{\tau} p_\mathrm{b}}. \end{align} For a generally nonlinear bond-slip law, we need to evaluate the maximum load numerically. Two examples that systematically quantify the effect of the bond length for bond hardening and bond softening are provided in the notebook [3.2 Anchorage length](3_2_anchorage_length.ipynb)._____no_output_____<a id="cfrp_sheet_test"></a> # **Studies 2: Softening bond-slip law**_____no_output_____[![title](../fig/bmcs_video.png)](https://moodle.rwth-aachen.de/mod/page/view.php?id=551816)&nbsp; part 3_____no_output_____The presence of a descending branch in the constitutive law is the key to understanding propagating debonding. Let us use the established framework to study the different phenomenology that occurs for constitutive laws exhibiting softening. Consider the interface between a fiber-reinforced polymer (FRP) sheet used for retrofitting and the concrete surface of an RC structure. The study is based on a paper by [Dai et al. (2005)](../papers/dai_frp_pullout_2005.pdf). The goal of the paper is to derive constitutive laws capturing the bond-slip behavior of the adhesive between the FRP sheet and the concrete surface. We will reproduce selected experimental pullout curves from the paper using the numerical pullout model introduced above, thereby verifying the bond-slip law derived in the paper._____no_output_____## Test setup_____no_output_____![image.png](attachment:974c60ea-69f6-4175-ab89-cea30a6bf1d6.png)_____no_output_____The width of the sheet was $p_b = 100$ mm and the attached bond length was also $L_b = 100$ mm. The properties of the tested sheets are summarized in the table. The dimensions of the concrete block were $200 \times 400 \times 200$ mm._____no_output_____![image.png](attachment:fb43c5d7-b9f6-4d4d-8406-9d7610039506.png)_____no_output_____The pull-out curves measured for the different adhesives used to realize the bond to the matrix were evaluated as the strain in the FRP sheet at the loaded end versus the slip displacement.
_____no_output_____![image.png](attachment:0e4f48fa-0996-4c4a-be4c-7a2f893ba91d.png)_____no_output_____To compare with our studies, we can transfer the results to the pullout force $P$ by evaluating \begin{align} P = p_\mathrm{b} t_\mathrm{f} E_\mathrm{f} \varepsilon_\mathrm{f} \end{align} which, evaluated for the strain 0.010, yields_____no_output_____ <code> p_b = 100 # [mm] t_f = 0.11 # [mm] E_f = 230000 # [MPa] eps_f_max = 0.01 # [-] P_max = p_b * t_f * E_f * eps_f_max / 1000 # [kN] P_max # [kN]_____no_output_____ </code> The bond-slip law reported by the authors of the paper has the following shape: <a id="cfrp_bond_slip"></a>_____no_output_____![image.png](attachment:766b18c8-7f58-49a7-9406-ec2022c29188.png)_____no_output_____## Model for CFRP pullout test _____no_output_____Let us construct another pullout model named `po_cfrp` with a bond-slip law exhibiting strong softening. This kind of behavior is observed in tests of the bond between FRP sheets and concrete; an example of such an experimental study is the test series by Dai et al. (2005) introduced above. <a id="cfrp_trilinear_bond"></a>_____no_output_____ <code> po_cfrp = PullOutModel1D(n_e_x=300, w_max=1.5) # mm po_cfrp.geometry.L_x=100 # mm po_cfrp.time_line.step = 0.02 po_cfrp.cross_section.trait_set(A_m=400*200, A_f=100*0.11, P_b=100) po_cfrp.material_model='trilinear' po_cfrp.material_model_.trait_set(E_m=28000, E_f=230000, tau_1=5.5, tau_2=0, s_1=0.08, s_2=0.4) po_cfrp.interact()_____no_output_____ </code> ## **Conclusions** from the interactive study of CFRP sheet debonding - The bond-slip law reported in the paper can reproduce well the pullout response measured in the test. - The study of the debonding process shows that the adhesive is only active within an effective length of approximately 40-50 mm. - As a consequence, in contrast to the steel rebar studied above, the maximum pullout load cannot be increased by increasing the bond length. - In the studied case, FRP rupture is not possible because the sheet strength is larger than the maximum pullout force $P_{\max}$ of 25 kN. To verify this, we take the strength of 3550 MPa given in the above table and multiply it by the cross-sectional area of the sheet, i.e. $f_t t_f p_b$, to obtain_____no_output_____ <code> f_t = 3550 # CFRP sheet strength in [MPa] - see the table above f_t * t_f * p_b / 1000 # breaking force of the sheet 100 x 100 mm in [kN]_____no_output_____ </code> ## **Question:** Effect of bond length on the pullout response - **bond softening** - Similarly to the example with bond hardening above, we ask what happens to the pullout curve if we reduce the bond length to a minimum. The answer is the same - we recover the bond-slip law multiplied by the bond area. - However, if we increase the bond length, the trend is different, as already mentioned above. Once the length exceeds the effective bond length, there is no further increase in the pullout force and the pullout curve exhibits a plateau. Let us show this trend by running a simple parametric study. Instead of doing it step by step, we now run a loop over a list of lengths and colors, change the parameter `geometry.L_x` within the loop, `reset`, `run`, and `plot` the pullout curve in the respective color. _____no_output_____<div style="background-color:lightgray;text-align:left"> <img src="../icons/run.png" alt="Run" width="40" height="40"> &nbsp; &nbsp; <b>Run in a loop to see the effect of bond length</b> </div>_____no_output_____Note that a list in Python is defined by square brackets, e.g. ```[1,2,3,4]```.
Two lists can be "zipped" together so that we can run a loop over the lengths and colors, as shown in the third line of the cell. <a id="crfp_study"></a>_____no_output_____ <code> fig, (ax, ax_bond_slip) = plt.subplots(1,2, figsize=(10,4), tight_layout=True) fig.canvas.header_visible = False for L, color in zip([5, 10, 50, 100, 200], ['red','green','blue','black','orange']): print('evaluating pullout curve for L', L) po_cfrp.geometry.L_x=L po_cfrp.reset() po_cfrp.run() po_cfrp.history.plot_Pw(ax, color=color) po_cfrp.material_model_.plot(ax_bond_slip)_____no_output_____ </code> # **Remark on structural ductility:** how to make the plateau useful? The softening bond cannot exploit the full strength of the CFRP sheet, which might seem uneconomic at first sight. On the other hand, it can be viewed as a mechanism that increases the deformation capacity of the structure at a constant level of load. This property can be used effectively to enhance the ductility of the structure, i.e. to induce large deformations before structural collapse, as required in engineering design. This demonstrates the importance of knowing the stress-redistribution mechanisms available in the material. In steel-reinforced structures, ductility is provided inherently by the yielding of the steel. In the case of brittle reinforcement, e.g. carbon fabrics, CFRP sheets, or glass fabrics, other sources of ductility must be provided to ensure sufficient deformation capacity between the serviceability and ultimate limit states. _____no_output_____<div style="background-color:lightgray;text-align:left"> <img src="../icons/exercise.png" alt="Run" width="40" height="40"> &nbsp; &nbsp; <a href="../exercises/X0301 - Pull-out curve versus shear stress profiles.pdf"><b>Exercise X0301:</b></a> <b>Pull-out curve versus shear stress profiles - part 1</b> <a href="https://moodle.rwth-aachen.de/mod/page/view.php?id=551821"><img src="../icons/bmcs_video.png" alt="Run"></a> </div>_____no_output_____<div style="background-color:lightgray;text-align:left"> <img src="../icons/exercise.png" alt="Run" width="40" height="40"> &nbsp; &nbsp; <a href="../exercises/X0302 - Pull-out curve versus shear stress profiles.pdf"><b>Exercise X0302:</b></a> <b>Pull-out curve versus shear stress profiles - part 2</b> <a href="https://moodle.rwth-aachen.de/mod/page/view.php?id=551823"><img src="../icons/bmcs_video.png" alt="Run"></a> </div>_____no_output_____<div style="background-color:lightgray;text-align:left;width:45%;display:inline-table;"> <img src="../icons/previous.png" alt="Previous trip" width="50" height="50"> &nbsp; <a href="../tour2_constant_bond/fragmentation.ipynb#top">2.3 Tensile behavior of composite</a> </div><div style="background-color:lightgray;text-align:center;width:10%;display:inline-table;"> <a href="#top"><img src="../icons/compass.png" alt="Compass" width="50" height="50"></a></div><div style="background-color:lightgray;text-align:right;width:45%;display:inline-table;"> <a href="3_2_anchorage_length.ipynb#top">3.2 Pullout curve versus bond length</a>&nbsp; <img src="../icons/next.png" alt="Next trip" width="50" height="50"> </div> _____no_output_____
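As a quick numerical companion to the anchorage-length estimate $L_\mathrm{b} = \sigma_{\mathrm{f},\mathrm{mu}} A_\mathrm{f} / (\bar{\tau} p_\mathrm{b})$ derived in the bond-hardening question above, the sketch below evaluates the formula for a bar of diameter $d = 16$ mm. The reinforcement strength and the average bond stress used here are assumed, illustrative values only - they are not reported in this notebook - so the result should be read as a demonstration of the formula rather than as a design value.

```python
import numpy as np

# Anchorage length estimate for a circular bar: L_b = sigma_fu * A_f / (tau_bar * p_b).
# sigma_fu and tau_bar are ASSUMED values chosen only to illustrate the formula.
d = 16.0                      # bar diameter [mm], as in the RILEM example above
A_f = np.pi * (d / 2)**2      # bar cross-sectional area [mm^2]
p_b = np.pi * d               # bar perimeter [mm]
sigma_fu = 500.0              # assumed reinforcement strength [MPa]
tau_bar = 8.0                 # assumed average bond stress [MPa]

P_fu = sigma_fu * A_f / 1000            # force at reinforcement failure [kN]
L_b = sigma_fu * A_f / (tau_bar * p_b)  # anchorage length needed to reach P_fu [mm]
print(f"P_fu = {P_fu:.1f} kN, required anchorage length L_b = {L_b:.0f} mm")
```

For a circular cross section the expression simplifies to $L_\mathrm{b} = \sigma_{\mathrm{f},\mathrm{mu}} d / (4 \bar{\tau})$, so with the assumed values the bar would need roughly 250 mm of embedded length before it could be loaded up to its strength; a systematic treatment is given in the notebook [3.2 Anchorage length](3_2_anchorage_length.ipynb).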
{ "repository": "bmcs-group/bmcs_tutorial", "path": "tour3_nonlinear_bond/3_1_nonlinear_bond.ipynb", "matched_keywords": [ "evolution" ], "stars": null, "size": 714395, "hexsha": "4818319ef185216e40592d269e36c99e8ee3b028", "max_line_length": 101548, "avg_line_length": 183.5075777036, "alphanum_fraction": 0.8539183505 }
# Notebook from swarnabha13/ai-economist Path: economic_simulation_basic.ipynb # Foundation Foundation is the name of the economic simulator built for the AI Economist ([paper here](https://arxiv.org/abs/2004.13332)). Foundation is specially designed for modeling economies in spatial, 2D grid worlds. The AI Economist paper uses a scenario with 4 agents in a world with *Stone* and *Wood*, which can be *collected*, *traded*, and used to build *Houses*. Here's a (nicely rendered) example of what such an environment looks like: ![Foundation snapshot](https://github.com/salesforce/ai-economist/blob/master/tutorials/assets/foundation_snapshot_rendered.jpg?raw=1) This image just shows what you might see spatially. Behind the scenes, agents have inventories of Stone, Wood, and *Coin*, which they can exchange through a commodities marketplace. In addition, they periodically pay taxes on income earned through trading and building._____no_output_____# Introduction Here we will demonstrate an instance of the simulation environment used and how to interact with it. We will cover the following: 1. Markov Decision Processes 2. Creating a Simulation Environment (a Scenario Instance) 3. Interacting with the Simulation 4. Sampling and Visualizing an Episode_____no_output_____## Dependencies: _____no_output_____ <code> import sys IN_COLAB = 'google.colab' in sys.modules if IN_COLAB: ! git clone https://github.com/salesforce/ai-economist.git %cd ai-economist ! pip install -e . else: ! pip install ai-economistCloning into 'ai-economist'... remote: Enumerating objects: 273, done. remote: Counting objects: 100% (12/12), done. remote: Compressing objects: 100% (12/12), done. remote: Total 273 (delta 6), reused 0 (delta 0), pack-reused 261 Receiving objects: 100% (273/273), 456.04 KiB | 3.40 MiB/s, done. Resolving deltas: 100% (141/141), done. 
Running setup.py develop for ai-economist Successfully installed Jinja2-2.11.2 SecretStorage-3.3.1 ai-economist appnope-0.1.0 astroid-2.4.2 attrs-19.3.0 black-19.10b0 bleach-3.1.5 certifi-2020.4.5.1 cryptography-3.4.7 defusedxml-0.6.0 docutils-0.16 flake8-3.8.3 idna-2.9 importlib-metadata-1.6.1 ipykernel-5.3.0 ipython-7.15.0 ipywidgets-7.5.1 isort-4.3.21 jedi-0.17.0 jeepney-0.6.0 jsonschema-3.2.0 jupyter-client-6.1.3 jupyter-console-6.1.0 jupyter-core-4.6.3 keyring-21.2.1 kiwisolver-1.2.0 lazy-object-proxy-1.4.3 lz4-3.1.0 matplotlib-3.2.1 mccabe-0.6.1 more-itertools-8.3.0 nbformat-5.0.7 notebook-6.0.3 numpy-1.18.5 packaging-20.4 pandocfilters-1.4.2 parso-0.7.0 pathspec-0.8.0 pkginfo-1.5.0.1 pluggy-0.13.1 prometheus-client-0.8.0 prompt-toolkit-3.0.5 ptyprocess-0.6.0 py-1.8.1 pycodestyle-2.6.0 pyflakes-2.2.0 pylint-2.5.3 pyrsistent-0.16.0 pytest-5.4.3 pyzmq-19.0.1 qtconsole-4.7.4 readme-renderer-26.0 regex-2020.6.8 requests-toolbelt-0.9.1 terminado-0.8.3 toml-0.10.1 tornado-6.0.4 tqdm-4.46.1 traitlets-4.3.3 twine-3.1.1 typed-ast-1.4.1 urllib3-1.25.9 wcwidth-0.2.4 zipp-3.1.0 # Import foundation from ai_economist import foundation_____no_output_____import numpy as np %matplotlib inline import matplotlib.pyplot as plt from IPython import display if IN_COLAB: from tutorials.utils import plotting # plotting utilities for visualizing env state else: from utils import plotting_____no_output_____
</code> # 1. Markov Decision Process Formally, our economic simulation is a key part of a Markov Decision Process (MDP). MDPs describe episodes in which agents interact with a stateful environment in a continuous feedback loop. At each timestep, agents receive an observation and use a policy to choose actions. The environment then advances to a new state, using the old state and the chosen actions. The agents then receive new observations and rewards. This process repeats over $T$ timesteps (possibly infinite). The goal of each agent is to maximize its expected sum of future (discounted) rewards, by finding its optimal policy. Intuitively, this means that an agent needs to understand which (sequence of) actions lead to high rewards (in expectation). ### References For more information on reinforcement learning and MDPs, check out: - Richard S. Sutton and Andrew G. Barto, Reinforcement Learning: An Introduction. [http://incompleteideas.net/book/bookdraft2017nov5.pdf](http://incompleteideas.net/book/bookdraft2017nov5.pdf)_____no_output_____# 2. Creating a Simulation Environment (a Scenario Instance) The Scenario class implements an economic simulation with multiple agents and (optionally) a social planner. Each Scenario is stateful and implements two main methods: - __step__, which advances the simulation to the next state, and - __reset__, which puts the simulation back in an initial state. Each Scenario is customizable: you can specify options in a dictionary. Here is an example for a scenario with 4 agents:_____no_output_____ <code> # Define the configuration of the environment that will be built env_config = { # ===== SCENARIO CLASS ===== # Which Scenario class to use: the class's name in the Scenario Registry (foundation.scenarios). # The environment object will be an instance of the Scenario class. 'scenario_name': 'layout_from_file/simple_wood_and_stone', # ===== COMPONENTS ===== # Which components to use (specified as list of ("component_name", {component_kwargs}) tuples). # "component_name" refers to the Component class's name in the Component Registry (foundation.components) # {component_kwargs} is a dictionary of kwargs passed to the Component class # The order in which components reset, step, and generate obs follows their listed order below. 'components': [ # (1) Building houses ('Build', {'skill_dist': "pareto", 'payment_max_skill_multiplier': 3}), # (2) Trading collectible resources ('ContinuousDoubleAuction', {'max_num_orders': 5}), # (3) Movement and resource collection ('Gather', {}), ], # ===== SCENARIO CLASS ARGUMENTS ===== # (optional) kwargs that are added by the Scenario class (i.e. not defined in BaseEnvironment) 'env_layout_file': 'quadrant_25x25_20each_30clump.txt', 'starting_agent_coin': 10, 'fixed_four_skill_and_loc': True, # ===== STANDARD ARGUMENTS ====== # kwargs that are used by every Scenario class (i.e. defined in BaseEnvironment) 'n_agents': 4, # Number of non-planner agents (must be > 1) 'world_size': [25, 25], # [Height, Width] of the env world 'episode_length': 1000, # Number of timesteps per episode # In multi-action-mode, the policy selects an action for each action subspace (defined in component code). # Otherwise, the policy selects only 1 action. 'multi_action_mode_agents': False, 'multi_action_mode_planner': True, # When flattening observations, concatenate scalar & vector observations before output. # Otherwise, return observations with minimal processing. 
'flatten_observations': False, # When Flattening masks, concatenate each action subspace mask into a single array. # Note: flatten_masks = True is required for masking action logits in the code below. 'flatten_masks': True, }_____no_output_____ </code> Create an environment instance using this configuration:_____no_output_____ <code> env = foundation.make_env_instance(**env_config)_____no_output_____ </code> # 3. Interacting with the Simulation_____no_output_____### Agents The Agent class holds the state of agents in the simulation. Each Agent instance represents a _logical_ agent. _Note that this might be separate from a Policy model that lives outside the Scenario and controls the Agent's behavior.______no_output_____ <code> env.get_agent(0)_____no_output_____ </code> ### A random policy Each Agent needs to choose which actions to execute using a __policy__. Agents might not always be allowed to execute all actions. For instance, a mobile Agent cannot move beyond the boundary of the world. Hence, in position (0, 0), a mobile cannot move "Left" or "Down". This information is given by a mask, which is provided under ```obs[<agent_id_str>]["action_mask"]``` in the observation dictionary ```obs``` returned by the scenario. Let's use a random policy to step through the simulation. The methods below implement a random policy._____no_output_____ <code> def sample_random_action(agent, mask): """Sample random UNMASKED action(s) for agent.""" # Return a list of actions: 1 for each action subspace if agent.multi_action_mode: split_masks = np.split(mask, agent.action_spaces.cumsum()[:-1]) return [np.random.choice(np.arange(len(m_)), p=m_/m_.sum()) for m_ in split_masks] # Return a single action else: return np.random.choice(np.arange(agent.action_spaces), p=mask/mask.sum()) def sample_random_actions(env, obs): """Samples random UNMASKED actions for each agent in obs.""" actions = { a_idx: sample_random_action(env.get_agent(a_idx), a_obs['action_mask']) for a_idx, a_obs in obs.items() } return actions_____no_output_____ </code> Now we're ready to interact with the simulation... First, environments can be put in an initial state by using __reset__._____no_output_____ <code> obs = env.reset()_____no_output_____ </code> Then, we call __step__ to advance the state and advance time by one tick._____no_output_____ <code> actions = sample_random_actions(env, obs) obs, rew, done, info = env.step(actions)_____no_output_____ </code> Internally, the __step__ method composes several Components (which act almost like modular sub-Environments) that implement various agent affordances and environment dynamics._____no_output_____### Observation Each observation is a dictionary that contains information for the $N$ agents and (optionally) social planner (with id "p")._____no_output_____ <code> obs.keys()_____no_output_____ </code> For each agent, the agent-specific observation is a dictionary. Each Component can contribute information to the agent-specific observation. 
For instance, the Build Component contributes the - Build-build_payment (float) - Build-build_skill (int) fields, which are defined in the ```generate_observations``` method in [foundation/components/build.py](https://github.com/salesforce/ai-economist/blob/master/ai_economist/foundation/components/build.py)._____no_output_____ <code> for key, val in obs['0'].items(): print("{:50} {}".format(key, type(val)))world-map <class 'numpy.ndarray'> world-idx_map <class 'numpy.ndarray'> world-loc-row <class 'float'> world-loc-col <class 'float'> world-inventory-Coin <class 'float'> world-inventory-Stone <class 'float'> world-inventory-Wood <class 'float'> time <class 'list'> Build-build_payment <class 'numpy.float64'> Build-build_skill <class 'float'> ContinuousDoubleAuction-market_rate-Stone <class 'numpy.float64'> ContinuousDoubleAuction-price_history-Stone <class 'numpy.ndarray'> ContinuousDoubleAuction-available_asks-Stone <class 'numpy.ndarray'> ContinuousDoubleAuction-available_bids-Stone <class 'numpy.ndarray'> ContinuousDoubleAuction-my_asks-Stone <class 'numpy.ndarray'> ContinuousDoubleAuction-my_bids-Stone <class 'numpy.ndarray'> ContinuousDoubleAuction-market_rate-Wood <class 'numpy.float64'> ContinuousDoubleAuction-price_history-Wood <class 'numpy.ndarray'> ContinuousDoubleAuction-available_asks-Wood <class 'numpy.ndarray'> ContinuousDoubleAuction-available_bids-Wood <class 'numpy.ndarray'> ContinuousDoubleAuction-my_asks-Wood <class 'numpy.ndarray'> ContinuousDoubleAuction-my_bids-Wood <class 'numpy.ndarray'> Gather-bonus_gather_prob <class 'float'> action_mask <class 'numpy.ndarray'> </code> ### Reward For each agent / planner, the reward dictionary contains a scalar reward:_____no_output_____ <code> for agent_idx, reward in rew.items(): print("{:2} {:.3f}".format(agent_idx, reward))0 -0.263 1 -0.263 2 -0.053 3 -0.053 p 0.000 </code> ### Done The __done__ object is a dictionary that by default records whether all agents have seen the end of the episode. The default criterion for each agent is to 'stop' their episode once $H$ steps have been executed. Once an agent is 'done', they do not change their state anymore. So, while it's not currently implemented, this could be used to indicate that the episode has ended *for a specific Agent*. In general, this is useful for telling a Reinforcement Learning framework when to reset the environment and how to organize the trajectories of individual Agents._____no_output_____ <code> done_____no_output_____ </code> ### Info The __info__ object can record any auxiliary information from the simulator, which can be useful, e.g., for visualization. By default, this is empty. To change this behavior, modify the step() method in [foundation/base/base_env.py](https://github.com/salesforce/ai-economist/blob/master/ai_economist/foundation/base/base_env.py)._____no_output_____ <code> info_____no_output_____ </code> # 4.
Sampling and Visualizing an Episode Let's step multiple times with this random policy and visualize the result:_____no_output_____ <code> def do_plot(env, ax, fig): """Plots world state during episode sampling.""" plotting.plot_env_state(env, ax) ax.set_aspect('equal') display.display(fig) display.clear_output(wait=True) def play_random_episode(env, plot_every=100, do_dense_logging=False): """Plays an episode with randomly sampled actions.""" fig, ax = plt.subplots(1, 1, figsize=(10, 10)) # Reset obs = env.reset(force_dense_logging=do_dense_logging) # Interaction loop (w/ plotting) for t in range(env.episode_length): actions = sample_random_actions(env, obs) obs, rew, done, info = env.step(actions) if ((t+1) % plot_every) == 0: do_plot(env, ax, fig) if ((t+1) % plot_every) != 0: do_plot(env, ax, fig) _____no_output_____play_random_episode(env, plot_every=100)_____no_output_____ </code> We see four agents (indicated by a circled __\*__) that move around in the 2-dimensional world. Light brown cells contain Stone, green cells contain Wood. Each agent can build Houses, indicated by corresponding colored cells. Water tiles (blue squares), which prevent movement, divide the map into four quadrants._____no_output_____# Visualize using dense logging Environments built with Foundation provide a couple tools for logging. Perhaps the most useful are **dense logs**. When you reset the environment, you can tell it to create a dense log for the new episode. This will store Agent states at each point in time along with any Component-specific dense log information (say, about builds, trades, etc.) that the Components provide. In addition, it will periodically store a snapshot of the world state. A few plotting tools that work well with the type of environment showcased here._____no_output_____ <code> # Play another episode. This time, tell the environment to do dense logging play_random_episode(env, plot_every=100, do_dense_logging=True) # Grab the dense log from the env dense_log = env.previous_episode_dense_log_____no_output_____# Show the evolution of the world state from t=0 to t=200 fig = plotting.vis_world_range(dense_log, t0=0, tN=200, N=5)_____no_output_____# Show the evolution of the world state over the full episode fig = plotting.vis_world_range(dense_log, N=5)_____no_output_____# Use the "breakdown" tool to visualize the world state, agent-wise quantities, movement, and trading events plotting.breakdown(dense_log);_______________:_ Agent 0 _____|_ Agent 1 _____|_ Agent 2 _____|_ Agent 3 ____ Cost (Wood) : 3.85 (n= 39) | 5.34 (n= 62) | 3.41 (n= 17) | 4.95 (n= 75) Cost (Stone) : 7.44 (n= 18) | 6.97 (n= 35) | 3.50 (n= 2) | 6.93 (n= 28) Income (Wood) : 5.00 (n= 32) | 4.82 (n= 73) | 3.50 (n= 14) | 4.72 (n= 74) Income (Stone) : 7.29 (n= 17) | 6.55 (n= 29) | 5.50 (n= 2) | 7.26 (n= 35) Income (Build) : 13.27 (n= 1) | 11.33 (n= 6) | ~~~~~~~~ | 16.47 (n= 15) </code>
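The episode above was sampled purely for visualization, so the rewards returned by `env.step` were discarded. As a minimal sketch of how the same interaction loop could be reused to compare agents quantitatively, the snippet below accumulates each agent's scalar reward over one random episode. It assumes the `env` instance and the `sample_random_actions` helper defined earlier in this notebook; the function name itself is just illustrative.

```python
def play_and_score(env):
    """Play one episode with random actions and return the summed reward per agent."""
    obs = env.reset()
    totals = {}
    for t in range(env.episode_length):
        actions = sample_random_actions(env, obs)
        obs, rew, done, info = env.step(actions)
        # rew is keyed by agent id ('0', '1', ..., and 'p' for the planner)
        for agent_id, r in rew.items():
            totals[agent_id] = totals.get(agent_id, 0.0) + r
    return totals

# Example usage:
# totals = play_and_score(env)
# print(totals)
```

Because the actions are random, the totals mainly reflect the cost of uncoordinated behavior; swapping `sample_random_actions` for a trained policy would turn the same loop into a simple evaluation harness.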
{ "repository": "swarnabha13/ai-economist", "path": "economic_simulation_basic.ipynb", "matched_keywords": [ "evolution" ], "stars": null, "size": 287933, "hexsha": "48197ef46015ed94497f0ef77dac5ee53eb289b4", "max_line_length": 100658, "avg_line_length": 228.5182539683, "alphanum_fraction": 0.873772718 }
# Notebook from dongxulee/lifeCycle Path: 20210416/simulationV_gamma4.ipynb <code> %pylab inline from jax.scipy.ndimage import map_coordinates from constant import * import warnings from jax import jit, partial, random, vmap from tqdm import tqdm warnings.filterwarnings("ignore") np.printoptions(precision=2)Populating the interactive namespace from numpy and matplotlib nX = Xs.shape[0] nA = As.shape[0] Xs.shape, As.shape_____no_output_____Vgrid = np.load("Value.npy") #Define the earning function, which applies for both employment status and 27 econ states @partial(jit, static_argnums=(0,)) def y(t, x): ''' x = [w,n,m,s,e,o] x = [0,1,2,3,4,5] ''' if t <= T_R: return detEarning[t] * (1+gGDP[jnp.array(x[3], dtype = jnp.int8)]) * x[4] + (1-x[4]) * welfare else: return detEarning[-1] #Earning after tax and fixed by transaction in and out from 401k account @partial(jit, static_argnums=(0,)) def yAT(t,x): yt = y(t, x) # mortage payment pay = (x[2]*(1+rh)-x[2]*Dm[t])*x[5] if t <= T_R: # yi portion of the income will be put into the 401k if employed return (1-tau_L)*(yt * (1-yi) - pay)*x[4] + (1-x[4])*(yt-pay) else: # t > T_R, n/discounting amount will be withdraw from the 401k return (1-tau_R)*(yt-pay) + x[1]*Dn[t] #Define the evolution of the amount in 401k account @partial(jit, static_argnums=(0,)) def gn(t, x, r = r_bar): if t <= T_R: # if the person is employed, then yi portion of his income goes into 401k n_cur = x[1] + y(t, x) * yi * x[4] else: # t > T_R, n*Dn amount will be withdraw from the 401k n_cur = x[1] - x[1]*Dn[t] # the 401 grow as the same rate as the stock return (1+r)*n_cur #Define the utility function @jit def u(c): return (jnp.power(c, 1-gamma) - 1)/(1 - gamma) #Define the bequeath function, which is a function of bequeath wealth @jit def uB(tb): return B*u(tb) #Reward function depends on the housing and non-housing consumption @jit def R(x,a): ''' Input: x = [w,n,m,s,e,o] x = [0,1,2,3,4,5] a = [c,b,k,h,action] a = [0,1,2,3,4] ''' c = a[:,0] h = a[:,3] C = jnp.power(c, alpha) * jnp.power(h, 1-alpha) return u(C) @partial(jit, static_argnums=(0,)) def feasibleActions(t, x): # owner sell = As[:,2] budget1 = yAT(t,x) + x[0] + sell*(H*pt - x[2]*(1+rh) - c_s) c = budget1*As[:,0] # (H+h)*(1+kappa), h = percentage * H h = jnp.ones(nA)*H*(1+kappa) budget2 = budget1*(1-As[:,0]) k = budget2*As[:,1]*(1-Kc) b = budget2*(1-As[:,1]) owner_action = jnp.column_stack((c,b,k,h,sell)) # renter buy = As[:,2]*(t<=t_high) budget1 = yAT(t,x) + x[0] - buy*(H*pt*0.2 + c_h) h = budget1*As[:,0]*(1-alpha)/pr c = budget1*As[:,0]*alpha budget2 = budget1*(1-As[:,0]) k = budget2*As[:,1]*(1-Kc) b = budget2*(1-As[:,1]) renter_action = jnp.column_stack((c,b,k,h,buy)) actions = x[5]*owner_action + (1-x[5])*renter_action return actions @partial(jit, static_argnums=(0,)) def transition(t,a,x): ''' Input: x = [w,n,m,s,e,o] x = [0,1,2,3,4,5] a = [c,b,k,h,action] a = [0,1,2,3,4] Output: w_next n_next m_next s_next e_next o_next prob_next ''' nA = a.shape[0] s = jnp.array(x[3], dtype = jnp.int8) e = jnp.array(x[4], dtype = jnp.int8) # actions taken b = a[:,1] k = a[:,2] action = a[:,4] w_next = ((1+r_b[s])*b + jnp.outer(k,(1+r_k)).T).T.flatten().repeat(2) n_next = gn(t, x)*jnp.ones(w_next.size) s_next = jnp.tile(jnp.arange(nS),nA).repeat(nE) e_next = jnp.column_stack((e.repeat(nA*nS),(1-e).repeat(nA*nS))).flatten() # job status changing probability and econ state transition probability pe = Pe[s, e] ps = jnp.tile(Ps[s], nA) prob_next = jnp.column_stack(((1-pe)*ps,pe*ps)).flatten() # owner m_next_own = 
((1-action)*x[2]*Dm[t]).repeat(nS*nE) o_next_own = (x[5] - action).repeat(nS*nE) # renter if t <= t_high: m_next_rent = (action*H*pt*0.8).repeat(nS*nE) o_next_rent = action.repeat(nS*nE) else: m_next_rent = np.zeros(w_next.size) o_next_rent = np.zeros(w_next.size) m_next = x[5] * m_next_own + (1-x[5]) * m_next_rent o_next = x[5] * o_next_own + (1-x[5]) * o_next_rent return jnp.column_stack((w_next,n_next,m_next,s_next,e_next,o_next,prob_next)) # used to calculate dot product @jit def dotProduct(p_next, uBTB): return (p_next*uBTB).reshape((p_next.shape[0]//(nS*nE), (nS*nE))).sum(axis = 1) # define approximation of fit @jit def fit(v, xp): return map_coordinates(v,jnp.vstack((xp[:,0]/scaleW, xp[:,1]/scaleN, xp[:,2]/scaleM, xp[:,3], xp[:,4], xp[:,5])), order = 1, mode = 'nearest') @partial(jit, static_argnums=(0,)) def V(t,V_next,x): ''' x = [w,n,m,s,e,o] x = [0,1,2,3,4,5] xp: w_next 0 n_next 1 m_next 2 s_next 3 e_next 4 o_next 5 prob_next 6 ''' actions = feasibleActions(t,x) xp = transition(t,actions,x) # bequeath utility TB = xp[:,0]+x[1]*(1+r_bar)+xp[:,5]*(H*pt-x[2]*(1+rh)) bequeathU = uB(TB) if t == T_max-1: Q = R(x,actions) + beta * dotProduct(xp[:,6], bequeathU) else: Q = R(x,actions) + beta * dotProduct(xp[:,6], Pa[t]*fit(V_next, xp) + (1-Pa[t])*bequeathU) Q = jnp.nan_to_num(Q, nan = -100) v = Q.max() cbkha = actions[Q.argmax()] return v, cbkha.reshape((1,-1))_____no_output_____num = 100000 ''' x = [w,n,m,s,e,o] x = [5,0,0,0,0,0] ''' from jax import random def simulation(key): x = [5, 0, 0, 0, 0, 0] path = [] move = [] for t in range(T_min, T_max-1): _, key = random.split(key) _,a = V(t,Vgrid[:,:,:,:,:,:,t+1],x) xp = transition(t,a,x) p = xp[:,-1] x_next = xp[:,:-1] x = x_next[random.choice(a = nS*nE, p=p, key = key)] path.append(x) move.append(a[0]) return jnp.array(path), jnp.array(move)_____no_output_____%%time # simulation part keys = vmap(random.PRNGKey)(jnp.arange(num)) Paths, Moves = vmap(simulation)(keys)CPU times: user 16h 2min 11s, sys: 4h 31min 12s, total: 20h 33min 24s Wall time: 47min 24s # x = [w,n,m,s,e,o] # x = [0,1,2,3,4,5] ws = Paths[:,:,0].T ns = Paths[:,:,1].T ms = Paths[:,:,2].T ss = Paths[:,:,3].T es = Paths[:,:,4].T os = Paths[:,:,5].T cs = Moves[:,:,0].T bs = Moves[:,:,1].T ks = Moves[:,:,2].T hs = Moves[:,:,3].T_____no_output_____plt.figure(figsize = [16,8]) plt.title("The mean values of simulation") plt.plot(range(21, T_max-1 + 20),jnp.median(ws,axis = 1)[:-1], label = "wealth") plt.plot(range(21, T_max-1 + 20),jnp.median(cs,axis = 1)[:-1], label = "consumption") plt.plot(range(21, T_max-1 + 20),jnp.median(bs,axis = 1)[:-1], label = "bond") plt.plot(range(21, T_max-1 + 20),jnp.median(ks,axis = 1)[:-1], label = "stock") # plt.plot(range(21, T_max-1 + 20),jnp.median(os*H*pt - ms+ws,axis = 1)[:-1], label = "Home equity + wealth") # plt.plot((hs*pr).mean(axis = 1)[:-1], label = "housing") plt.legend()_____no_output_____plt.title("housing consumption") plt.plot(range(21, T_max-1 + 20),(hs).mean(axis = 1)[:-1], label = "housing")_____no_output_____plt.title("house owner percentage in the population") plt.plot(range(21, T_max-1 + 20),(os).mean(axis = 1)[:-1], label = "owning")_____no_output_____plt.title("401k") plt.plot(range(21, T_max-1 + 20),(ns).mean(axis = 1)[:-1], label = "housing")_____no_output_____ </code>
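The plots above summarize the simulated population by its mean or median only. As an illustrative extension (not part of the original analysis), the sketch below adds an inter-quartile band around the median wealth path, reusing the `ws` array, `T_max`, and the plotting conventions defined above; the percentile choice is arbitrary.

```python
# Median wealth with a 25th-75th percentile band across the simulated households.
w_arr = np.array(ws)                      # shape (time, households), converted from a JAX array
w_q25, w_med, w_q75 = np.percentile(w_arr, [25, 50, 75], axis=1)
ages = range(21, T_max - 1 + 20)          # same age axis as in the plots above
plt.figure(figsize=[16, 8])
plt.title("Wealth dispersion across simulated households")
plt.plot(ages, w_med[:-1], label="median wealth")
plt.fill_between(ages, w_q25[:-1], w_q75[:-1], alpha=0.3, label="25th-75th percentile")
plt.legend()
```

The same pattern applies to the consumption, bond, and stock arrays `cs`, `bs`, and `ks` if a fuller picture of the simulated policies is needed.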
{ "repository": "dongxulee/lifeCycle", "path": "20210416/simulationV_gamma4.ipynb", "matched_keywords": [ "evolution" ], "stars": null, "size": 144604, "hexsha": "481a6a3c59c8aa2c7ab599719e44c6ac9aec0187", "max_line_length": 78788, "avg_line_length": 293.9105691057, "alphanum_fraction": 0.9151060828 }
# Notebook from CactusPuppy/colab-notebooks Path: notebooks/AlphaFold.ipynb <a href="https://colab.research.google.com/github/CactusPuppy/colab-notebooks/blob/main/notebooks/AlphaFold.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>_____no_output_____# AlphaFold Colab This Colab notebook allows you to easily predict the structure of a protein using a slightly simplified version of [AlphaFold v2.1.0](https://doi.org/10.1038/s41586-021-03819-2). **Differences to AlphaFold v2.1.0** In comparison to AlphaFold v2.1.0, this Colab notebook uses **no templates (homologous structures)** and a selected portion of the [BFD database](https://bfd.mmseqs.com/). We have validated these changes on several thousand recent PDB structures. While accuracy will be near-identical to the full AlphaFold system on many targets, a small fraction have a large drop in accuracy due to the smaller MSA and lack of templates. For best reliability, we recommend instead using the [full open source AlphaFold](https://github.com/deepmind/alphafold/), or the [AlphaFold Protein Structure Database](https://alphafold.ebi.ac.uk/). **This Colab has a small drop in average accuracy for multimers compared to local AlphaFold installation, for full multimer accuracy it is highly recommended to run [AlphaFold locally](https://github.com/deepmind/alphafold#running-alphafold).** Moreover, the AlphaFold-Multimer requires searching for MSA for every unique sequence in the complex, hence it is substantially slower. If your notebook times-out due to slow multimer MSA search, we recommend either using Colab Pro or running AlphaFold locally. Please note that this Colab notebook is provided as an early-access prototype and is not a finished product. It is provided for theoretical modelling only and caution should be exercised in its use. **Citing this work** Any publication that discloses findings arising from using this notebook should [cite](https://github.com/deepmind/alphafold/#citing-this-work) the [AlphaFold paper](https://doi.org/10.1038/s41586-021-03819-2). **Licenses** This Colab uses the [AlphaFold model parameters](https://github.com/deepmind/alphafold/#model-parameters-license) and its outputs are thus for non-commercial use only, under the Creative Commons Attribution-NonCommercial 4.0 International ([CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/legalcode)) license. The Colab itself is provided under the [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0). See the full license statement below. **More information** You can find more information about how AlphaFold works in the following papers: * [AlphaFold methods paper](https://www.nature.com/articles/s41586-021-03819-2) * [AlphaFold predictions of the human proteome paper](https://www.nature.com/articles/s41586-021-03828-1) * [AlphaFold-Multimer paper](https://www.biorxiv.org/content/10.1101/2021.10.04.463034v1) FAQ on how to interpret AlphaFold predictions are [here](https://alphafold.ebi.ac.uk/faq)._____no_output_____ <code> #@title Install third-party software #@markdown Please execute this cell by pressing the _Play_ button #@markdown on the left to download and import third-party software #@markdown in this Colab notebook. (See the [acknowledgements](https://github.com/deepmind/alphafold/#acknowledgements) in our readme.) #@markdown **Note**: This installs the software on the Colab #@markdown notebook in the cloud and not on your computer. 
from IPython.utils import io import os import subprocess import tqdm.notebook TQDM_BAR_FORMAT = '{l_bar}{bar}| {n_fmt}/{total_fmt} [elapsed: {elapsed} remaining: {remaining}]' try: with tqdm.notebook.tqdm(total=100, bar_format=TQDM_BAR_FORMAT) as pbar: with io.capture_output() as captured: # Uninstall default Colab version of TF. %shell pip uninstall -y tensorflow %shell sudo apt install --quiet --yes hmmer pbar.update(6) # Install py3dmol. %shell pip install py3dmol pbar.update(2) # Install OpenMM and pdbfixer. %shell rm -rf /opt/conda %shell wget -q -P /tmp \ https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \ && bash /tmp/Miniconda3-latest-Linux-x86_64.sh -b -p /opt/conda \ && rm /tmp/Miniconda3-latest-Linux-x86_64.sh pbar.update(9) PATH=%env PATH %env PATH=/opt/conda/bin:{PATH} %shell conda update -qy conda \ && conda install -qy -c conda-forge \ python=3.7 \ openmm=7.5.1 \ pdbfixer pbar.update(80) # Create a ramdisk to store a database chunk to make Jackhmmer run fast. %shell sudo mkdir -m 777 --parents /tmp/ramdisk %shell sudo mount -t tmpfs -o size=9G ramdisk /tmp/ramdisk pbar.update(2) %shell wget -q -P /content \ https://git.scicore.unibas.ch/schwede/openstructure/-/raw/7102c63615b64735c4941278d92b554ec94415f8/modules/mol/alg/src/stereo_chemical_props.txt pbar.update(1) except subprocess.CalledProcessError: print(captured) raise_____no_output_____#@title Download AlphaFold #@markdown Please execute this cell by pressing the *Play* button on #@markdown the left. GIT_REPO = 'https://github.com/deepmind/alphafold' SOURCE_URL = 'https://storage.googleapis.com/alphafold/alphafold_params_colab_2021-10-27.tar' PARAMS_DIR = './alphafold/data/params' PARAMS_PATH = os.path.join(PARAMS_DIR, os.path.basename(SOURCE_URL)) try: with tqdm.notebook.tqdm(total=100, bar_format=TQDM_BAR_FORMAT) as pbar: with io.capture_output() as captured: %shell rm -rf alphafold %shell git clone --branch main {GIT_REPO} alphafold pbar.update(8) # Install the required versions of all dependencies. %shell pip3 install -r ./alphafold/requirements.txt # Run setup.py to install only AlphaFold. %shell pip3 install --no-dependencies ./alphafold pbar.update(10) # Apply OpenMM patch. %shell pushd /opt/conda/lib/python3.7/site-packages/ && \ patch -p0 < /content/alphafold/docker/openmm.patch && \ popd # Make sure stereo_chemical_props.txt is in all locations where it could be searched for. %shell mkdir -p /content/alphafold/alphafold/common %shell cp -f /content/stereo_chemical_props.txt /content/alphafold/alphafold/common %shell mkdir -p /opt/conda/lib/python3.7/site-packages/alphafold/common/ %shell cp -f /content/stereo_chemical_props.txt /opt/conda/lib/python3.7/site-packages/alphafold/common/ %shell mkdir --parents "{PARAMS_DIR}" %shell wget -O "{PARAMS_PATH}" "{SOURCE_URL}" pbar.update(27) %shell tar --extract --verbose --file="{PARAMS_PATH}" \ --directory="{PARAMS_DIR}" --preserve-permissions %shell rm "{PARAMS_PATH}" pbar.update(55) except subprocess.CalledProcessError: print(captured) raise import jax if jax.local_devices()[0].platform == 'tpu': raise RuntimeError('Colab TPU runtime not supported. Change it to GPU via Runtime -> Change Runtime Type -> Hardware accelerator -> GPU.') elif jax.local_devices()[0].platform == 'cpu': raise RuntimeError('Colab CPU runtime not supported. Change it to GPU via Runtime -> Change Runtime Type -> Hardware accelerator -> GPU.') else: print(f'Running with {jax.local_devices()[0].device_kind} GPU') # Make sure everything we need is on the path. 
import sys sys.path.append('/opt/conda/lib/python3.7/site-packages') sys.path.append('/content/alphafold') # Make sure all necessary environment variables are set. import os os.environ['TF_FORCE_UNIFIED_MEMORY'] = '1' os.environ['XLA_PYTHON_CLIENT_MEM_FRACTION'] = '2.0'_____no_output_____ </code> ## Making a prediction Please paste the sequence of your protein in the text box below, then run the remaining cells via _Runtime_ > _Run after_. You can also run the cells individually by pressing the _Play_ button on the left. Note that the search against databases and the actual prediction can take some time, from minutes to hours, depending on the length of the protein and what type of GPU you are allocated by Colab (see FAQ below)._____no_output_____ <code> #@title Enter the amino acid sequence(s) to fold ⬇️ #@markdown Enter the amino acid sequence(s) to fold: #@markdown * If you enter only a single sequence, the monomer model will be used. #@markdown * If you enter multiple sequences, the multimer model will be used. from alphafold.notebooks import notebook_utils sequence_1 = 'APTSSSTKKTQLQLEHLLLDLQMILNGINNYKNPKLTRMLTFKFYMPKKATELKHLQCLEEELKPLEEVLNLAQSKNFHLRPRDLISNINVIVLELKGSETTFMCEYADETATIVEFLNRWITFSQSIISTLT' #@param {type:"string"} sequence_2 = '' #@param {type:"string"} sequence_3 = '' #@param {type:"string"} sequence_4 = '' #@param {type:"string"} sequence_5 = '' #@param {type:"string"} sequence_6 = '' #@param {type:"string"} sequence_7 = '' #@param {type:"string"} sequence_8 = '' #@param {type:"string"} input_sequences = (sequence_1, sequence_2, sequence_3, sequence_4, sequence_5, sequence_6, sequence_7, sequence_8) #@markdown If folding a complex target and all the input sequences are #@markdown prokaryotic then set `is_prokaryotic` to `True`. Set to `False` #@markdown otherwise or if the origin is unknown. is_prokaryote = False #@param {type:"boolean"} MIN_SINGLE_SEQUENCE_LENGTH = 16 MAX_SINGLE_SEQUENCE_LENGTH = 2500 MAX_MULTIMER_LENGTH = 2500 # Validate the input. sequences, model_type_to_use = notebook_utils.validate_input( input_sequences=input_sequences, min_length=MIN_SINGLE_SEQUENCE_LENGTH, max_length=MAX_SINGLE_SEQUENCE_LENGTH, max_multimer_length=MAX_MULTIMER_LENGTH)_____no_output_____#@title Search against genetic databases #@markdown Once this cell has been executed, you will see #@markdown statistics about the multiple sequence alignment #@markdown (MSA) that will be used by AlphaFold. In particular, #@markdown you’ll see how well each residue is covered by similar #@markdown sequences in the MSA. 
# --- Python imports --- import collections import copy from concurrent import futures import json import random from urllib import request from google.colab import files from matplotlib import gridspec import matplotlib.pyplot as plt import numpy as np import py3Dmol from alphafold.model import model from alphafold.model import config from alphafold.model import data from alphafold.data import feature_processing from alphafold.data import msa_pairing from alphafold.data import parsers from alphafold.data import pipeline from alphafold.data import pipeline_multimer from alphafold.data.tools import jackhmmer from alphafold.common import protein from alphafold.relax import relax from alphafold.relax import utils from IPython import display from ipywidgets import GridspecLayout from ipywidgets import Output # Color bands for visualizing plddt PLDDT_BANDS = [(0, 50, '#FF7D45'), (50, 70, '#FFDB13'), (70, 90, '#65CBF3'), (90, 100, '#0053D6')] # --- Find the closest source --- test_url_pattern = 'https://storage.googleapis.com/alphafold-colab{:s}/latest/uniref90_2021_03.fasta.1' ex = futures.ThreadPoolExecutor(3) def fetch(source): request.urlretrieve(test_url_pattern.format(source)) return source fs = [ex.submit(fetch, source) for source in ['', '-europe', '-asia']] source = None for f in futures.as_completed(fs): source = f.result() ex.shutdown() break JACKHMMER_BINARY_PATH = '/usr/bin/jackhmmer' DB_ROOT_PATH = f'https://storage.googleapis.com/alphafold-colab{source}/latest/' # The z_value is the number of sequences in a database. MSA_DATABASES = [ {'db_name': 'uniref90', 'db_path': f'{DB_ROOT_PATH}uniref90_2021_03.fasta', 'num_streamed_chunks': 59, 'z_value': 135_301_051}, {'db_name': 'smallbfd', 'db_path': f'{DB_ROOT_PATH}bfd-first_non_consensus_sequences.fasta', 'num_streamed_chunks': 17, 'z_value': 65_984_053}, {'db_name': 'mgnify', 'db_path': f'{DB_ROOT_PATH}mgy_clusters_2019_05.fasta', 'num_streamed_chunks': 71, 'z_value': 304_820_129}, ] # Search UniProt and construct the all_seq features only for heteromers, not homomers. if model_type_to_use == notebook_utils.ModelType.MULTIMER and len(set(sequences)) > 1: MSA_DATABASES.extend([ # Swiss-Prot and TrEMBL are concatenated together as UniProt. {'db_name': 'uniprot', 'db_path': f'{DB_ROOT_PATH}uniprot_2021_03.fasta', 'num_streamed_chunks': 98, 'z_value': 219_174_961 + 565_254}, ]) TOTAL_JACKHMMER_CHUNKS = sum([cfg['num_streamed_chunks'] for cfg in MSA_DATABASES]) MAX_HITS = { 'uniref90': 10_000, 'smallbfd': 5_000, 'mgnify': 501, 'uniprot': 50_000, } def get_msa(fasta_path): """Searches for MSA for the given sequence using chunked Jackhmmer search.""" # Run the search against chunks of genetic databases (since the genetic # databases don't fit in Colab disk). raw_msa_results = collections.defaultdict(list) with tqdm.notebook.tqdm(total=TOTAL_JACKHMMER_CHUNKS, bar_format=TQDM_BAR_FORMAT) as pbar: def jackhmmer_chunk_callback(i): pbar.update(n=1) for db_config in MSA_DATABASES: db_name = db_config['db_name'] pbar.set_description(f'Searching {db_name}') jackhmmer_runner = jackhmmer.Jackhmmer( binary_path=JACKHMMER_BINARY_PATH, database_path=db_config['db_path'], get_tblout=True, num_streamed_chunks=db_config['num_streamed_chunks'], streaming_callback=jackhmmer_chunk_callback, z_value=db_config['z_value']) # Group the results by database name. 
raw_msa_results[db_name].extend(jackhmmer_runner.query(fasta_path)) return raw_msa_results features_for_chain = {} raw_msa_results_for_sequence = {} for sequence_index, sequence in enumerate(sequences, start=1): print(f'\nGetting MSA for sequence {sequence_index}') fasta_path = f'target_{sequence_index}.fasta' with open(fasta_path, 'wt') as f: f.write(f'>query\n{sequence}') # Don't do redundant work for multiple copies of the same chain in the multimer. if sequence not in raw_msa_results_for_sequence: raw_msa_results = get_msa(fasta_path=fasta_path) raw_msa_results_for_sequence[sequence] = raw_msa_results else: raw_msa_results = copy.deepcopy(raw_msa_results_for_sequence[sequence]) # Extract the MSAs from the Stockholm files. # NB: deduplication happens later in pipeline.make_msa_features. single_chain_msas = [] uniprot_msa = None for db_name, db_results in raw_msa_results.items(): merged_msa = notebook_utils.merge_chunked_msa( results=db_results, max_hits=MAX_HITS.get(db_name)) if merged_msa.sequences and db_name != 'uniprot': single_chain_msas.append(merged_msa) msa_size = len(set(merged_msa.sequences)) print(f'{msa_size} unique sequences found in {db_name} for sequence {sequence_index}') elif merged_msa.sequences and db_name == 'uniprot': uniprot_msa = merged_msa notebook_utils.show_msa_info(single_chain_msas=single_chain_msas, sequence_index=sequence_index) # Turn the raw data into model features. feature_dict = {} feature_dict.update(pipeline.make_sequence_features( sequence=sequence, description='query', num_res=len(sequence))) feature_dict.update(pipeline.make_msa_features(msas=single_chain_msas)) # We don't use templates in AlphaFold Colab notebook, add only empty placeholder features. feature_dict.update(notebook_utils.empty_placeholder_template_features( num_templates=0, num_res=len(sequence))) # Construct the all_seq features only for heteromers, not homomers. if model_type_to_use == notebook_utils.ModelType.MULTIMER and len(set(sequences)) > 1: valid_feats = msa_pairing.MSA_FEATURES + ( 'msa_uniprot_accession_identifiers', 'msa_species_identifiers', ) all_seq_features = { f'{k}_all_seq': v for k, v in pipeline.make_msa_features([uniprot_msa]).items() if k in valid_feats} feature_dict.update(all_seq_features) features_for_chain[protein.PDB_CHAIN_IDS[sequence_index - 1]] = feature_dict # Do further feature post-processing depending on the model type. if model_type_to_use == notebook_utils.ModelType.MONOMER: np_example = features_for_chain[protein.PDB_CHAIN_IDS[0]] elif model_type_to_use == notebook_utils.ModelType.MULTIMER: all_chain_features = {} for chain_id, chain_features in features_for_chain.items(): all_chain_features[chain_id] = pipeline_multimer.convert_monomer_features( chain_features, chain_id) all_chain_features = pipeline_multimer.add_assembly_features(all_chain_features) np_example = feature_processing.pair_and_merge( all_chain_features=all_chain_features, is_prokaryote=is_prokaryote) # Pad MSA to avoid zero-sized extra_msa. np_example = pipeline_multimer.pad_msa(np_example, min_num_seq=512)_____no_output_____#@title Run AlphaFold and download prediction #@markdown Once this cell has been executed, a zip-archive with #@markdown the obtained prediction will be automatically downloaded #@markdown to your computer. #@markdown In case you are having issues with the relaxation stage, you can disable it below. #@markdown Warning: This means that the prediction might have distracting #@markdown small stereochemical violations. 
run_relax = True #@param {type:"boolean"} # --- Run the model --- if model_type_to_use == notebook_utils.ModelType.MONOMER: model_names = config.MODEL_PRESETS['monomer'] + ('model_2_ptm',) elif model_type_to_use == notebook_utils.ModelType.MULTIMER: model_names = config.MODEL_PRESETS['multimer'] output_dir = 'prediction' os.makedirs(output_dir, exist_ok=True) plddts = {} ranking_confidences = {} pae_outputs = {} unrelaxed_proteins = {} with tqdm.notebook.tqdm(total=len(model_names) + 1, bar_format=TQDM_BAR_FORMAT) as pbar: for model_name in model_names: pbar.set_description(f'Running {model_name}') cfg = config.model_config(model_name) if model_type_to_use == notebook_utils.ModelType.MONOMER: cfg.data.eval.num_ensemble = 1 elif model_type_to_use == notebook_utils.ModelType.MULTIMER: cfg.model.num_ensemble_eval = 1 params = data.get_model_haiku_params(model_name, './alphafold/data') model_runner = model.RunModel(cfg, params) processed_feature_dict = model_runner.process_features(np_example, random_seed=0) prediction = model_runner.predict(processed_feature_dict, random_seed=random.randrange(sys.maxsize)) mean_plddt = prediction['plddt'].mean() if model_type_to_use == notebook_utils.ModelType.MONOMER: if 'predicted_aligned_error' in prediction: pae_outputs[model_name] = (prediction['predicted_aligned_error'], prediction['max_predicted_aligned_error']) else: # Monomer models are sorted by mean pLDDT. Do not put monomer pTM models here as they # should never get selected. ranking_confidences[model_name] = prediction['ranking_confidence'] plddts[model_name] = prediction['plddt'] elif model_type_to_use == notebook_utils.ModelType.MULTIMER: # Multimer models are sorted by pTM+ipTM. ranking_confidences[model_name] = prediction['ranking_confidence'] plddts[model_name] = prediction['plddt'] pae_outputs[model_name] = (prediction['predicted_aligned_error'], prediction['max_predicted_aligned_error']) # Set the b-factors to the per-residue plddt. final_atom_mask = prediction['structure_module']['final_atom_mask'] b_factors = prediction['plddt'][:, None] * final_atom_mask unrelaxed_protein = protein.from_prediction( processed_feature_dict, prediction, b_factors=b_factors, remove_leading_feature_dimension=( model_type_to_use == notebook_utils.ModelType.MONOMER)) unrelaxed_proteins[model_name] = unrelaxed_protein # Delete unused outputs to save memory. del model_runner del params del prediction pbar.update(n=1) # --- AMBER relax the best model --- # Find the best model according to the mean pLDDT. best_model_name = max(ranking_confidences.keys(), key=lambda x: ranking_confidences[x]) if run_relax: pbar.set_description(f'AMBER relaxation') amber_relaxer = relax.AmberRelaxation( max_iterations=0, tolerance=2.39, stiffness=10.0, exclude_residues=[], max_outer_iterations=3) relaxed_pdb, _, _ = amber_relaxer.process(prot=unrelaxed_proteins[best_model_name]) else: print('Warning: Running without the relaxation stage.') relaxed_pdb = protein.to_pdb(unrelaxed_proteins[best_model_name]) pbar.update(n=1) # Finished AMBER relax. 
# Construct multiclass b-factors to indicate confidence bands # 0=very low, 1=low, 2=confident, 3=very high banded_b_factors = [] for plddt in plddts[best_model_name]: for idx, (min_val, max_val, _) in enumerate(PLDDT_BANDS): if plddt >= min_val and plddt <= max_val: banded_b_factors.append(idx) break banded_b_factors = np.array(banded_b_factors)[:, None] * final_atom_mask to_visualize_pdb = utils.overwrite_b_factors(relaxed_pdb, banded_b_factors) # Write out the prediction pred_output_path = os.path.join(output_dir, 'selected_prediction.pdb') with open(pred_output_path, 'w') as f: f.write(relaxed_pdb) # --- Visualise the prediction & confidence --- show_sidechains = True def plot_plddt_legend(): """Plots the legend for pLDDT.""" thresh = ['Very low (pLDDT < 50)', 'Low (70 > pLDDT > 50)', 'Confident (90 > pLDDT > 70)', 'Very high (pLDDT > 90)'] colors = [x[2] for x in PLDDT_BANDS] plt.figure(figsize=(2, 2)) for c in colors: plt.bar(0, 0, color=c) plt.legend(thresh, frameon=False, loc='center', fontsize=20) plt.xticks([]) plt.yticks([]) ax = plt.gca() ax.spines['right'].set_visible(False) ax.spines['top'].set_visible(False) ax.spines['left'].set_visible(False) ax.spines['bottom'].set_visible(False) plt.title('Model Confidence', fontsize=20, pad=20) return plt # Show the structure coloured by chain if the multimer model has been used. if model_type_to_use == notebook_utils.ModelType.MULTIMER: multichain_view = py3Dmol.view(width=800, height=600) multichain_view.addModelsAsFrames(to_visualize_pdb) multichain_style = {'cartoon': {'colorscheme': 'chain'}} multichain_view.setStyle({'model': -1}, multichain_style) multichain_view.zoomTo() multichain_view.show() # Color the structure by per-residue pLDDT color_map = {i: bands[2] for i, bands in enumerate(PLDDT_BANDS)} view = py3Dmol.view(width=800, height=600) view.addModelsAsFrames(to_visualize_pdb) style = {'cartoon': {'colorscheme': {'prop': 'b', 'map': color_map}}} if show_sidechains: style['stick'] = {} view.setStyle({'model': -1}, style) view.zoomTo() grid = GridspecLayout(1, 2) out = Output() with out: view.show() grid[0, 0] = out out = Output() with out: plot_plddt_legend().show() grid[0, 1] = out display.display(grid) # Display pLDDT and predicted aligned error (if output by the model). if pae_outputs: num_plots = 2 else: num_plots = 1 plt.figure(figsize=[8 * num_plots, 6]) plt.subplot(1, num_plots, 1) plt.plot(plddts[best_model_name]) plt.title('Predicted LDDT') plt.xlabel('Residue') plt.ylabel('pLDDT') if num_plots == 2: plt.subplot(1, 2, 2) pae, max_pae = list(pae_outputs.values())[0] plt.imshow(pae, vmin=0., vmax=max_pae, cmap='Greens_r') plt.colorbar(fraction=0.046, pad=0.04) # Display lines at chain boundaries. best_unrelaxed_prot = unrelaxed_proteins[best_model_name] total_num_res = best_unrelaxed_prot.residue_index.shape[-1] chain_ids = best_unrelaxed_prot.chain_index for chain_boundary in np.nonzero(chain_ids[:-1] - chain_ids[1:]): if chain_boundary.size: plt.plot([0, total_num_res], [chain_boundary, chain_boundary], color='red') plt.plot([chain_boundary, chain_boundary], [0, total_num_res], color='red') plt.title('Predicted Aligned Error') plt.xlabel('Scored residue') plt.ylabel('Aligned residue') # Save the predicted aligned error (if it exists). pae_output_path = os.path.join(output_dir, 'predicted_aligned_error.json') if pae_outputs: # Save predicted aligned error in the same format as the AF EMBL DB. 
pae_data = notebook_utils.get_pae_json(pae=pae, max_pae=max_pae.item()) with open(pae_output_path, 'w') as f: f.write(pae_data) # --- Download the predictions --- !zip -q -r {output_dir}.zip {output_dir} files.download(f'{output_dir}.zip')_____no_output_____ </code> ### Interpreting the prediction In general predicted LDDT (pLDDT) is best used for intra-domain confidence, whereas Predicted Aligned Error (PAE) is best used for determining between domain or between chain confidence. Please see the [AlphaFold methods paper](https://www.nature.com/articles/s41586-021-03819-2), the [AlphaFold predictions of the human proteome paper](https://www.nature.com/articles/s41586-021-03828-1), and the [AlphaFold-Multimer paper](https://www.biorxiv.org/content/10.1101/2021.10.04.463034v1) as well as [our FAQ](https://alphafold.ebi.ac.uk/faq) on how to interpret AlphaFold predictions._____no_output_____## FAQ & Troubleshooting * How do I get a predicted protein structure for my protein? * Click on the _Connect_ button on the top right to get started. * Paste the amino acid sequence of your protein (without any headers) into the “Enter the amino acid sequence to fold”. * Run all cells in the Colab, either by running them individually (with the play button on the left side) or via _Runtime_ > _Run all._ * The predicted protein structure will be downloaded once all cells have been executed. Note: This can take minutes to hours - see below. * How long will this take? * Downloading the AlphaFold source code can take up to a few minutes. * Downloading and installing the third-party software can take up to a few minutes. * The search against genetic databases can take minutes to hours. * Running AlphaFold and generating the prediction can take minutes to hours, depending on the length of your protein and on which GPU-type Colab has assigned you. * My Colab no longer seems to be doing anything, what should I do? * Some steps may take minutes to hours to complete. * If nothing happens or if you receive an error message, try restarting your Colab runtime via _Runtime_ > _Restart runtime_. * If this doesn’t help, try resetting your Colab runtime via _Runtime_ > _Factory reset runtime_. * How does this compare to the open-source version of AlphaFold? * This Colab version of AlphaFold searches a selected portion of the BFD dataset and currently doesn’t use templates, so its accuracy is reduced in comparison to the full version of AlphaFold that is described in the [AlphaFold paper](https://doi.org/10.1038/s41586-021-03819-2) and [Github repo](https://github.com/deepmind/alphafold/) (the full version is available via the inference script). * What is a Colab? * See the [Colab FAQ](https://research.google.com/colaboratory/faq.html). * I received a warning “Notebook requires high RAM”, what do I do? * The resources allocated to your Colab vary. See the [Colab FAQ](https://research.google.com/colaboratory/faq.html) for more details. * You can execute the Colab nonetheless. * I received an error “Colab CPU runtime not supported” or “No GPU/TPU found”, what do I do? * Colab CPU runtime is not supported. Try changing your runtime via _Runtime_ > _Change runtime type_ > _Hardware accelerator_ > _GPU_. * The type of GPU allocated to your Colab varies. See the [Colab FAQ](https://research.google.com/colaboratory/faq.html) for more details. * If you receive “Cannot connect to GPU backend”, you can try again later to see if Colab allocates you a GPU. 
* [Colab Pro](https://colab.research.google.com/signup) offers priority access to GPUs. * I received an error “ModuleNotFoundError: No module named ...”, even though I ran the cell that imports it, what do I do? * Colab notebooks on the free tier time out after a certain amount of time. See the [Colab FAQ](https://research.google.com/colaboratory/faq.html#idle-timeouts). Try rerunning the whole notebook from the beginning. * Does this tool install anything on my computer? * No, everything happens in the cloud on Google Colab. * At the end of the Colab execution a zip-archive with the obtained prediction will be automatically downloaded to your computer. * How should I share feedback and bug reports? * Please share any feedback and bug reports as an [issue](https://github.com/deepmind/alphafold/issues) on Github. ## Related work Take a look at these Colab notebooks provided by the community (please note that these notebooks may vary from our validated AlphaFold system and we cannot guarantee their accuracy): * The [ColabFold AlphaFold2 notebook](https://colab.research.google.com/github/sokrypton/ColabFold/blob/main/AlphaFold2.ipynb) by Sergey Ovchinnikov, Milot Mirdita and Martin Steinegger, which uses an API hosted at the Södinglab based on the MMseqs2 server ([Mirdita et al. 2019, Bioinformatics](https://academic.oup.com/bioinformatics/article/35/16/2856/5280135)) for the multiple sequence alignment creation. _____no_output_____# License and Disclaimer This is not an officially-supported Google product. This Colab notebook and other information provided is for theoretical modelling only, caution should be exercised in its use. It is provided ‘as-is’ without any warranty of any kind, whether expressed or implied. Information is not intended to be a substitute for professional medical advice, diagnosis, or treatment, and does not constitute medical or other professional advice. Copyright 2021 DeepMind Technologies Limited. ## AlphaFold Code License Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ## Model Parameters License The AlphaFold parameters are made available for non-commercial use only, under the terms of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. You can find details at: https://creativecommons.org/licenses/by-nc/4.0/legalcode ## Third-party software Use of the third-party software, libraries or code referred to in the [Acknowledgements section](https://github.com/deepmind/alphafold/#acknowledgements) in the AlphaFold README may be governed by separate terms and conditions or license provisions. Your use of the third-party software, libraries or code is subject to any such terms and you should check that you can comply with any applicable restrictions or terms and conditions before use. 
## Mirrored Databases The following databases have been mirrored by DeepMind, and are available with reference to the following: * UniProt: v2021\_03 (unmodified), by The UniProt Consortium, available under a [Creative Commons Attribution-NoDerivatives 4.0 International License](http://creativecommons.org/licenses/by-nd/4.0/). * UniRef90: v2021\_03 (unmodified), by The UniProt Consortium, available under a [Creative Commons Attribution-NoDerivatives 4.0 International License](http://creativecommons.org/licenses/by-nd/4.0/). * MGnify: v2019\_05 (unmodified), by Mitchell AL et al., available free of all copyright restrictions and made fully and freely available for both non-commercial and commercial use under [CC0 1.0 Universal (CC0 1.0) Public Domain Dedication](https://creativecommons.org/publicdomain/zero/1.0/). * BFD: (modified), by Steinegger M. and Söding J., modified by DeepMind, available under a [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by/4.0/). See the Methods section of the [AlphaFold proteome paper](https://www.nature.com/articles/s41586-021-03828-1) for details._____no_output_____
{ "repository": "CactusPuppy/colab-notebooks", "path": "notebooks/AlphaFold.ipynb", "matched_keywords": [ "bioinformatics" ], "stars": null, "size": 43401, "hexsha": "481b1a1091125436299a23d741b4afdc03aee987", "max_line_length": 636, "avg_line_length": 53.8473945409, "alphanum_fraction": 0.5907006751 }
# Notebook from amygdala/terra-example-notebooks Path: terra-notebooks-playground/R - How to save and load R objects from the workspace bucket.ipynb # How to save and load R objects from the workspace bucket Save intermediate work to R's native format for rapid loading. <div class="alert alert-block alert-info"> <b>Tip:</b> By storing your RDA files in the workspace bucket, they are available to your workspace collaborators to load into their own notebooks! </div> See also [Notebooks 101 - How not to lose data output files or collaborator edits](https://broadinstitute.zendesk.com/hc/en-us/articles/360027300571-Notebooks-101-How-not-to-lose-data-output-files-or-collaborator-edits)._____no_output_____## Setup_____no_output_____ <code> library(lubridate) library(tidyverse) Attaching package: ‘lubridate’ The following objects are masked from ‘package:base’: date, intersect, setdiff, union ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.1 ── ✔ ggplot2 3.3.3 ✔ purrr  0.3.4 ✔ tibble  3.1.1 ✔ dplyr  1.0.6 ✔ tidyr  1.1.3 ✔ stringr 1.4.0 ✔ readr  1.4.0 ✔ forcats 0.5.1 ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ── ✖ lubridate::as.difftime() masks base::as.difftime() ✖ lubridate::date() masks base::date() ✖ dplyr::filter() masks stats::filter() ✖ lubridate::intersect() masks base::intersect() ✖ dplyr::lag() masks stats::lag() ✖ lubridate::setdiff() masks base::setdiff() ✖ lubridate::union() masks base::union() </code> Get the Cloud Storage bucket associated with this workspace._____no_output_____ <code> (WORKSPACE_BUCKET <- Sys.getenv('WORKSPACE_BUCKET'))_____no_output_____ </code> Create a timestamp for a folder of results generated today._____no_output_____ <code> (TIMESTAMP <- strftime(now(), '%Y%m%d/%H%M%S'))_____no_output_____ </code> Get your username so that everyone can know who created the RDA file._____no_output_____ <code> (OWNER_EMAIL <- Sys.getenv('OWNER_EMAIL'))_____no_output_____(RDA_FILENAME <- str_glue('thousand_genomes.rda'))_____no_output_____ </code> Assemble the destination path within the workspace bucket._____no_output_____ <code> (DESTINATION <- str_glue('{WORKSPACE_BUCKET}/data/r-objects/{OWNER_EMAIL}/{TIMESTAMP}/{RDA_FILENAME}'))_____no_output_____ </code> ## Read some data from Cloud Storage. Let’s retrieve the sample information for [1000 Genomes](http://www.internationalgenome.org/data "1000 Genomes"). This approach uses `gsutil cat` to transfer the contents of the CSV file since we want to load the whole thing. 
If you instead want to load a subset of columns or a subset of rows, instead retrieve the data from BigQuery table [bigquery-public-data.human_genome_variants.1000_genomes_sample_info](https://bigquery.cloud.google.com/table/bigquery-public-data:human_genome_variants.1000_genomes_sample_info)._____no_output_____ <code> sample_info <- read_csv(pipe('gsutil cat gs://genomics-public-data/1000-genomes/other/sample_info/sample_info.csv'), guess_max = 5000) ── Column specification ──────────────────────────────────────────────────────── cols( .default = col_character(), In_Low_Coverage_Pilot = col_double(), In_High_Coverage_Pilot = col_double(), In_Exon_Targetted_Pilot = col_double(), Has_Sequence_in_Phase1 = col_double(), In_Phase1_Integrated_Variant_Set = col_double(), Has_Phase1_chrY_SNPS = col_double(), Has_phase1_chrY_Deletions = col_double(), Has_phase1_chrMT_SNPs = col_double(), Total_LC_Sequence = col_double(), LC_Non_Duplicated_Aligned_Coverage = col_double(), Total_Exome_Sequence = col_double(), X_Targets_Covered_to_20x_or_greater = col_double(), VerifyBam_E_Omni_Free = col_double(), VerifyBam_E_Affy_Free = col_double(), VerifyBam_E_Omni_Chip = col_double(), VerifyBam_E_Affy_Chip = col_double(), VerifyBam_LC_Omni_Free = col_double(), VerifyBam_LC_Affy_Free = col_double(), VerifyBam_LC_Omni_Chip = col_double(), VerifyBam_LC_Affy_Chip = col_double() # ... with 11 more columns ) ℹ Use `spec()` for the full column specifications. </code> ## Save the object(s) to a local file._____no_output_____ <code> save(sample_info, file = RDA_FILENAME)_____no_output_____ </code> ## Transfer the file to the workspace bucket Use `gsutil` to copy the file from your Jupyter harddrive to the workspace bucket._____no_output_____ <code> system(str_glue('gsutil cp {RDA_FILENAME} {DESTINATION} 2>&1'), intern = TRUE)_____no_output_____ </code> ## Now, load that object from the native format file in Cloud Storage_____no_output_____ <code> # The object exists in memory. head(sample_info)_____no_output_____# Go ahead and delete it. rm(sample_info)_____no_output_____# Okay, its gone. head(sample_info)_____no_output_____load(pipe(str_glue('gsutil cat {DESTINATION}')))_____no_output_____# The object exists in memory again! head(sample_info)_____no_output_____ </code> # Provenance_____no_output_____ <code> devtools::session_info()_____no_output_____ </code> Copyright 2018 The Broad Institute, Inc., Verily Life Sciences, LLC All rights reserved. This software may be modified and distributed under the terms of the BSD license. See the LICENSE file for details._____no_output_____
{ "repository": "amygdala/terra-example-notebooks", "path": "terra-notebooks-playground/R - How to save and load R objects from the workspace bucket.ipynb", "matched_keywords": [ "genomics" ], "stars": 4, "size": 42569, "hexsha": "481c04858f889e4f2b72ead048dfc31d1fc3488c", "max_line_length": 767, "avg_line_length": 54.2971938776, "alphanum_fraction": 0.4946087528 }
# Notebook from ParasAlex/Big_Data_HW Path: Copy_of_big_data_level_2.ipynb <code> import os
# Find the latest version of spark 3.0 from http://www.apache.org/dist/spark/ and enter as the spark version
# For example: spark_version = 'spark-3.0.3'
spark_version = 'spark-3.0.3'  # enter the desired Spark 3.x release here
os.environ['SPARK_VERSION']=spark_version
# Install Spark and Java
!apt-get update
!apt-get install openjdk-8-jdk-headless -qq > /dev/null
!wget -q http://www.apache.org/dist/spark/$SPARK_VERSION/$SPARK_VERSION-bin-hadoop2.7.tgz
!tar xf $SPARK_VERSION-bin-hadoop2.7.tgz
!pip install -q findspark
# Set Environment Variables
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["SPARK_HOME"] = f"/content/{spark_version}-bin-hadoop2.7"
# Start a SparkSession
import findspark
findspark.init()
0% [Working] Hit:1 https://cloud.r-project.org/bin/linux/ubuntu bionic-cran40/ InRelease 0% [Connecting to archive.ubuntu.com (91.189.88.152)] [Connecting to security.u 0% [1 InRelease gpgv 3,626 B] [Connecting to archive.ubuntu.com (91.189.88.152) Ign:2 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 InRelease 0% [1 InRelease gpgv 3,626 B] [Waiting for headers] [Waiting for headers] [Conn Hit:3 http://archive.ubuntu.com/ubuntu bionic InRelease Hit:4 http://security.ubuntu.com/ubuntu bionic-security InRelease Ign:5 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 InRelease Hit:6 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 Release Hit:7 http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu bionic InRelease Hit:8 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 Release Hit:9 http://archive.ubuntu.com/ubuntu bionic-updates InRelease Hit:10 http://archive.ubuntu.com/ubuntu bionic-backports InRelease Hit:11 http://ppa.launchpad.net/cran/libgit2/ubuntu bionic InRelease Hit:12 http://ppa.launchpad.net/deadsnakes/ppa/ubuntu bionic InRelease Hit:13 http://ppa.launchpad.net/graphics-drivers/ppa/ubuntu bionic InRelease Reading package lists... Done
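# Note: the cells below use `spark`, but no SparkSession is created anywhere in this notebook.
# A minimal, assumed setup step (typical for a Colab + findspark workflow; the app name is arbitrary):
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("AmazonVineReviews").getOrCreate()_____no_output_____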
#Load Amazon Data into Spark Dataframe
from pyspark import SparkFiles
url = "https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_Video_Games_v1_00.tsv.gz"
spark.sparkContext.addFile(url)
video_game_df = spark.read.csv(SparkFiles.get("amazon_reviews_us_Video_Games_v1_00.tsv.gz"), sep="\t", header=True, inferSchema=True)
video_game_df.show()_____no_output_____#Select the review-level columns needed for the Vine analysis
filtered_video_game_df = video_game_df.select(["star_rating", "helpful_votes", "total_votes", "vine", "verified_purchase"])
filtered_video_game_df.show()_____no_output_____#Filter for reviews with 20 or more total votes
total_votes_df = filtered_video_game_df.filter(filtered_video_game_df["total_votes"] >= 20)
total_votes_df.show()_____no_output_____#Filter for reviews where at least 50% of the votes are helpful
helpful_votes_df = total_votes_df.filter(total_votes_df["helpful_votes"]/total_votes_df["total_votes"] >= 0.5)
helpful_votes_df.show()_____no_output_____#Describe the stats for paid (Vine) reviews
paid_df = helpful_votes_df.filter(helpful_votes_df["vine"] == "Y")
paid_df.describe().show(10)_____no_output_____#Describe the stats for unpaid (non-Vine) reviews
unpaid_df = helpful_votes_df.filter(helpful_votes_df["vine"] == "N")
unpaid_df.describe().show(10)_____no_output_____#Determine the share of five star reviews among Vine (paid) reviews
paid_five_star_number = paid_df[paid_df["star_rating"] == 5].count()
paid_number = paid_df.count()
percentage_five_star_vine = round(float(paid_five_star_number) / float(paid_number), 4)
percentage_five_star_vine_____no_output_____print(f"Number of Paid Reviews {paid_number}")
print(f"Number of Paid Five Star Reviews {paid_five_star_number}")
print(f"Percentage of paid reviews that are five stars {percentage_five_star_vine * 100}%")_____no_output_____#Determine the share of five star reviews among non-Vine (unpaid) reviews
unpaid_five_star_number = unpaid_df[unpaid_df["star_rating"] == 5].count()
unpaid_number = unpaid_df.count()
percentage_five_star_non_vine = round(float(unpaid_five_star_number) / float(unpaid_number), 4)
percentage_five_star_non_vine_____no_output_____print(f"Number of Unpaid Reviews {unpaid_number}")
print(f"Number of Unpaid Five Star Reviews {unpaid_five_star_number}")
print(f"Percentage of unpaid reviews that are five stars {percentage_five_star_non_vine * 100}%")_____no_output_____ </code>
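The same comparison can also be expressed as a single grouped aggregation, which avoids reusing one percentage variable for both groups. A minimal sketch, assuming the `helpful_votes_df` DataFrame built above (this cell is not part of the original notebook):

```python
from pyspark.sql import functions as F

five_star_summary = (
    helpful_votes_df
    .groupBy("vine")  # "Y" = paid Vine reviews, "N" = unpaid reviews
    .agg(
        F.count("*").alias("total_reviews"),
        F.sum((F.col("star_rating") == 5).cast("int")).alias("five_star_reviews"),
    )
    .withColumn(
        "five_star_pct",
        F.round(F.col("five_star_reviews") / F.col("total_reviews") * 100, 2),
    )
)
five_star_summary.show()
```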
{ "repository": "ParasAlex/Big_Data_HW", "path": "Copy_of_big_data_level_2.ipynb", "matched_keywords": [ "STAR" ], "stars": null, "size": 7714, "hexsha": "481eb86822d2b322d471a0818de931c2859ea7c2", "max_line_length": 353, "avg_line_length": 36.5592417062, "alphanum_fraction": 0.5506870625 }
# Notebook from smythi93/debuggingbook Path: notebooks/Repairer.ipynb # Repairing Code Automatically So far, we have discussed how to track failures and how to locate defects in code. Let us now discuss how to _repair_ defects – that is, to correct the code such that the failure no longer occurs. We will discuss how to _repair code automatically_ – by systematically searching through possible fixes and evolving the most promising candidates._____no_output_____ <code> from bookutils import YouTubeVideo YouTubeVideo("UJTf7cW0idI")_____no_output_____ </code> **Prerequisites** * Re-read the [introduction to debugging](Intro_Debugging.ipynb), notably on how to properly fix code. * We make use of automatic fault localization, as discussed in [the chapter on statistical debugging](StatisticalDebugger.ipynb). * We make extensive use of code transformations, as discussed in [the chapter on tracing executions](Tracer.ipynb). * We make use of [delta debugging](DeltaDebugger.ipynb)._____no_output_____ <code> import bookutils_____no_output_____ </code> ## Synopsis <!-- Automatically generated. Do not edit. --> To [use the code provided in this chapter](Importing.ipynb), write ```python >>> from debuggingbook.Repairer import <identifier> ``` and then make use of the following features. This chapter provides tools and techniques for automated repair of program code. The `Repairer()` class takes a `RankingDebugger` debugger as input (such as `OchiaiDebugger` from [the chapter on statistical debugging](StatisticalDebugger.ipynb). A typical setup looks like this: ```python from debuggingbook.StatisticalDebugger import OchiaiDebugger debugger = OchiaiDebugger() for inputs in TESTCASES: with debugger: test_foo(inputs) ... repairer = Repairer(debugger) ``` Here, `test_foo()` is a function that raises an exception if the tested function `foo()` fails. If `foo()` passes, `test_foo()` should not raise an exception. The `repair()` method of a `Repairer` searches for a repair of the code covered in the debugger (except for methods starting or ending in `test`, such that `foo()`, not `test_foo()` is repaired). `repair()` returns the best fix candidate as a pair `(tree, fitness)` where `tree` is a [Python abstract syntax tree](http://docs.python.org/3/library/ast) (AST) of the fix candidate, and `fitness` is the fitness of the candidate (a value between 0 and 1). A `fitness` of 1.0 means that the candidate passed all tests. A typical usage looks like this: ```python import astor tree, fitness = repairer.repair() print(astor.to_source(tree), fitness) ``` Here is a complete example for the `middle()` program. This is the original source code of `middle()`: ```python def middle(x, y, z): # type: ignore if y < z: if x < y: return y elif x < z: return y else: if x > y: return y elif x > z: return x return z ``` We set up a function `middle_test()` that tests it. The `middle_debugger` collects testcases and outcomes: ```python >>> middle_debugger = OchiaiDebugger() >>> for x, y, z in MIDDLE_PASSING_TESTCASES + MIDDLE_FAILING_TESTCASES: >>> with middle_debugger: >>> middle_test(x, y, z) ``` The repairer attempts to repair the invoked function (`middle()`). 
The returned AST `tree` can be output via `astor.to_source()`: ```python >>> middle_repairer = Repairer(middle_debugger) >>> tree, fitness = middle_repairer.repair() >>> print(astor.to_source(tree), fitness) def middle(x, y, z): if y < z: if x < z: if x < y: return y else: return x elif x > y: return y elif x > z: return x return z 1.0 ``` Here are the classes defined in this chapter. A `Repairer` repairs a program, using a `StatementMutator` and a `CrossoverOperator` to evolve a population of candidates. ![](PICS/Repairer-synopsis-1.svg) _____no_output_____## Automatic Code Repairs So far, we have discussed how to locate defects in code, how to track failures back to the defects that caused them, and how to systematically determine failure conditions. Let us now address the last step in debugging – namely, how to _automatically fix code_. Already in the [introduction to debugging](Intro_Debugging.ipynb), we have discussed how to fix code manually. Notably, we have established that a _diagnosis_ (which induces a fix) should show _causality_ (i.e, how the defect causes the failure) and _incorrectness_ (how the defect is wrong). Is it possible to obtain such a diagnosis automatically?_____no_output_____In this chapter, we introduce a technique of _automatic code repair_ – that is, for a given failure, automatically determine a fix that makes the failure go away. To do so, we randomly (but systematically) _mutate_ the program code – that is, insert, change, and delete fragments – until we find a change that actually causes the failing test to pass._____no_output_____If this sounds like an audacious idea, that is because it is. But not only is _automated program repair_ one of the hottest topics of software research in the last decade, it is also being increasingly deployed in industry. At Facebook, for instance, every failing test report comes with an automatically generated _repair suggestion_ – a suggestion that already has been validated to work. Programmers can apply the suggestion as is or use it as basis for their own fixes._____no_output_____### The middle() Function_____no_output_____Let us introduce our ongoing example. In the [chapter on statistical debugging](StatisticalDebugger.ipynb), we have introduced the `middle()` function – a function that returns the "middle" of three numbers `x`, `y`, and `z`:_____no_output_____ <code> from StatisticalDebugger import middle_____no_output_____# ignore from bookutils import print_content_____no_output_____# ignore import inspect_____no_output_____# ignore _, first_lineno = inspect.getsourcelines(middle) middle_source = inspect.getsource(middle) print_content(middle_source, '.py', start_line_number=first_lineno)_____no_output_____ </code> In most cases, `middle()` just runs fine:_____no_output_____ <code> middle(4, 5, 6)_____no_output_____ </code> In some other cases, though, it does not work correctly:_____no_output_____ <code> middle(2, 1, 3)_____no_output_____ </code> ### Validated Repairs_____no_output_____Now, if we only want a repair that fixes this one given failure, this would be very easy. 
All we have to do is to replace the entire body by a single statement:_____no_output_____ <code> def middle_sort_of_fixed(x, y, z):  # type: ignore return x_____no_output_____ </code> You will concur that the failure no longer occurs:_____no_output_____ <code> middle_sort_of_fixed(2, 1, 3)_____no_output_____ </code> But this, of course, is not the aim of automatic fixes, nor of fixes in general: We want our fixes not only to make the given failure go away, but we also want the resulting code to be _correct_ (which, of course, is a lot harder)._____no_output_____Automatic repair techniques therefore assume the existence of a _test suite_ that can check whether an implementation satisfies its requirements. Better yet, one can use the test suite to gradually check _how close_ one is to perfection: A piece of code that satisfies 99% of all tests is better than one that satisfies ~33% of all tests, as `middle_sort_of_fixed()` would do (assuming the test suite evenly checks the input space)._____no_output_____### Genetic Optimization_____no_output_____The master plan for automatic repair #TODO: there are also many other promising approaches that do not rely on genetics, so this is less a master plan than one common approach# follows the principle of _genetic optimization_. Roughly speaking, genetic optimization is a _metaheuristic_ inspired by the process of _natural selection_. The idea is to _evolve_ a selection of _candidate solutions_ towards a maximum _fitness_: 1. Have a selection of _candidates_. 2. Determine the _fitness_ of each candidate. 3. Retain those candidates with the _highest fitness_. 4. Create new candidates from the retained candidates, by applying genetic operations: * _Mutation_ mutates some aspect of a candidate. * _Crossover_ creates new candidates combining features of two candidates. 5. Repeat until an optimal solution is found._____no_output_____Applied to automated program repair, this means the following steps: 1. Have a _test suite_ with both failing and passing tests that helps assert correctness of possible solutions. 2. With the test suite, use [fault localization](StatisticalDebugger.ipynb) to determine potential code locations to be fixed. 3. Systematically _mutate_ the code (by adding, changing, or deleting code) and _cross_ code to create possible fix candidates. 4. Identify the _fittest_ fix candidates – that is, those that satisfy the most tests. 5. _Evolve_ the fittest candidates until a perfect fix is found, or until time resources are depleted._____no_output_____Let us illustrate these steps in the following sections._____no_output_____## A Test Suite_____no_output_____In automated repair, the larger and the more thorough the test suite, the higher the quality of the resulting fix (if any).
Hence, if we want to repair `middle()` automatically, we need a good test suite – with good inputs, but also with good checks. #TODO: there is a trade-off between the size of the test suite and the runtime of the repairer, since running the test suite commonly takes most of the repair time#_____no_output_____For better repair, we will use the test suites introduced in the [chapter on statistical debugging](StatisticalDebugger.ipynb):_____no_output_____ <code> from StatisticalDebugger import MIDDLE_PASSING_TESTCASES, MIDDLE_FAILING_TESTCASES_____no_output_____ </code> The `middle_test()` function fails whenever `middle()` returns an incorrect result:_____no_output_____ <code> def middle_test(x: int, y: int, z: int) -> None: m = middle(x, y, z) assert m == sorted([x, y, z])[1]_____no_output_____from ExpectError import ExpectError_____no_output_____with ExpectError(): middle_test(2, 1, 3)_____no_output_____ </code> ## Locating the Defect_____no_output_____Our next step is to find potential defect locations – that is, those locations in the code our mutations should focus upon. Since we already have two test suites, we can make use of [statistical debugging](StatisticalDebugger.ipynb) #TODO: this is not actually statistical debugging, but rather spectrum-based fault localization# to identify likely faulty locations. Our `OchiaiDebugger` ranks individual code lines by how frequently they are executed in failing runs (and not in passing runs)._____no_output_____ <code> from StatisticalDebugger import OchiaiDebugger, RankingDebugger_____no_output_____middle_debugger = OchiaiDebugger() for x, y, z in MIDDLE_PASSING_TESTCASES + MIDDLE_FAILING_TESTCASES: with middle_debugger: middle_test(x, y, z)_____no_output_____ </code> We see that the upper half of the `middle()` code is definitely more suspicious:_____no_output_____ <code> middle_debugger_____no_output_____ </code> The most suspicious line is:_____no_output_____ <code> # ignore location = middle_debugger.rank()[0] (func_name, lineno) = location lines, first_lineno = inspect.getsourcelines(middle) print(lineno, end="") print_content(lines[lineno - first_lineno], '.py')_____no_output_____ </code> with a suspiciousness of:_____no_output_____ <code> # ignore middle_debugger.suspiciousness(location)_____no_output_____ </code> ## Random Code Mutations_____no_output_____Our third step in automatic code repair is to _randomly mutate the code_. Specifically, we want to randomly _delete_, _insert_, and _replace_ statements in the program to be repaired. However, simply synthesizing code _from scratch_ is unlikely to yield anything meaningful – the number of combinations is simply far too high. Already for a three-character identifier name, we have more than 200,000 combinations: #TODO: one could also explain the concept behind these mutations, i.e.
the redundancy assumption/plastic surgery hypothesis#_____no_output_____ <code> import string_____no_output_____string.ascii_letters_____no_output_____len(string.ascii_letters + '_') * \ len(string.ascii_letters + '_' + string.digits) * \ len(string.ascii_letters + '_' + string.digits)_____no_output_____ </code> Hence, we do _not_ synthesize code from scratch, but instead _reuse_ elements from the program to be fixed, hypothesizing that "a program that contains an error in one area likely implements the correct behavior elsewhere" \cite{LeGoues2012}._____no_output_____Furthermore, we do not operate on a _textual_ representation of the program, but rather on a _structural_ representation, which by construction allows us to avoid lexical and syntactical errors in the first place. This structural representation is the _abstract syntax tree_ (AST), which we already have seen in various chapters, such as the [chapter on delta debugging](DeltaDebugger.ipynb), the [chapter on tracing](Tracer.ipynb), and excessively in the [chapter on slicing](Slicer.ipynb). The [official Python `ast` reference](http://docs.python.org/3/library/ast) is complete, but a bit brief; the documentation ["Green Tree Snakes - the missing Python AST docs"](https://greentreesnakes.readthedocs.io/en/latest/) provides an excellent introduction. Recapitulating, an AST is a tree representation of the program, showing a hierarchical structure of the program's elements. Here is the AST for our `middle()` function._____no_output_____ <code> import ast import astor import inspect_____no_output_____from bookutils import print_content, show_ast_____no_output_____def middle_tree() -> ast.AST: return ast.parse(inspect.getsource(middle))_____no_output_____show_ast(middle_tree())_____no_output_____ </code> You see that it consists of one function definition (`FunctionDef`) with three `arguments` and two statements – one `If` and one `Return`. Each `If` subtree has three branches – one for the condition (`test`), one for the body to be executed if the condition is true (`body`), and one for the `else` case (`orelse`). The `body` and `orelse` branches again are lists of statements._____no_output_____An AST can also be shown as text, which is more compact, yet reveals more information. `ast.dump()` gives not only the class names of elements, but also how they are constructed – actually, the whole expression can be used to construct an AST._____no_output_____ <code> print(ast.dump(middle_tree()))_____no_output_____ </code> This is the path to the first `return` statement:_____no_output_____ <code> ast.dump(middle_tree().body[0].body[0].body[0].body[0]) # type: ignore_____no_output_____ </code> ### Picking Statements_____no_output_____For our mutation operators, we want to use statements from the program itself. Hence, we need a means to find those very statements. The `StatementVisitor` class iterates through an AST, adding all statements it finds in function definitions to its `statements` list. 
To do so, it subclasses the Python `ast` `NodeVisitor` class, described in the [official Python `ast` reference](http://docs.python.org/3/library/ast)._____no_output_____ <code> from ast import NodeVisitor_____no_output_____# ignore from typing import Any, Callable, Optional, Type, Tuple from typing import Dict, Union, Set, List, cast_____no_output_____class StatementVisitor(NodeVisitor): """Visit all statements within function defs in an AST""" def __init__(self) -> None: self.statements: List[Tuple[ast.AST, str]] = [] self.func_name = "" self.statements_seen: Set[Tuple[ast.AST, str]] = set() super().__init__() def add_statements(self, node: ast.AST, attr: str) -> None: elems: List[ast.AST] = getattr(node, attr, []) if not isinstance(elems, list): elems = [elems] # type: ignore for elem in elems: stmt = (elem, self.func_name) if stmt in self.statements_seen: continue self.statements.append(stmt) self.statements_seen.add(stmt) def visit_node(self, node: ast.AST) -> None: # Any node other than the ones listed below self.add_statements(node, 'body') self.add_statements(node, 'orelse') def visit_Module(self, node: ast.Module) -> None: # Module children are defs, classes and globals - don't add super().generic_visit(node) def visit_ClassDef(self, node: ast.ClassDef) -> None: # Class children are defs and globals - don't add super().generic_visit(node) def generic_visit(self, node: ast.AST) -> None: self.visit_node(node) super().generic_visit(node) def visit_FunctionDef(self, node: Union[ast.FunctionDef, ast.AsyncFunctionDef]) -> None: if not self.func_name: self.func_name = node.name self.visit_node(node) super().generic_visit(node) self.func_name = "" def visit_AsyncFunctionDef(self, node: ast.AsyncFunctionDef) -> None: return self.visit_FunctionDef(node)_____no_output_____ </code> The function `all_statements()` returns all statements in the given AST `tree`. If an `ast` class `tp` is given, it only returns instances of that class._____no_output_____ <code> def all_statements_and_functions(tree: ast.AST, tp: Optional[Type] = None) -> \ List[Tuple[ast.AST, str]]: """ Return a list of pairs (`statement`, `function`) for all statements in `tree`. If `tp` is given, return only statements of that class. """ visitor = StatementVisitor() visitor.visit(tree) statements = visitor.statements if tp is not None: statements = [s for s in statements if isinstance(s[0], tp)] return statements_____no_output_____def all_statements(tree: ast.AST, tp: Optional[Type] = None) -> List[ast.AST]: """ Return a list of all statements in `tree`. If `tp` is given, return only statements of that class. """ return [stmt for stmt, func_name in all_statements_and_functions(tree, tp)]_____no_output_____ </code> Here are all the `return` statements in `middle()`:_____no_output_____ <code> all_statements(middle_tree(), ast.Return)_____no_output_____all_statements_and_functions(middle_tree(), ast.If)_____no_output_____ </code> We can randomly pick an element:_____no_output_____ <code> import random_____no_output_____random_node = random.choice(all_statements(middle_tree())) astor.to_source(random_node)_____no_output_____ </code> ### Mutating Statements The main part in mutation, however, is to actually mutate the code of the program under test. 
To this end, we introduce a `StatementMutator` class – a subclass of `NodeTransformer`, described in the [official Python `ast` reference](http://docs.python.org/3/library/ast)._____no_output_____The constructor provides various keyword arguments to configure the mutator._____no_output_____ <code> from ast import NodeTransformer_____no_output_____import copy_____no_output_____class StatementMutator(NodeTransformer): """Mutate statements in an AST for automated repair.""" def __init__(self, suspiciousness_func: Optional[Callable[[Tuple[Callable, int]], float]] = None, source: Optional[List[ast.AST]] = None, log: bool = False) -> None: """ Constructor. `suspiciousness_func` is a function that takes a location (function, line_number) and returns a suspiciousness value between 0 and 1.0. If not given, all locations get the same suspiciousness of 1.0. `source` is a list of statements to choose from. """ super().__init__() self.log = log if suspiciousness_func is None: def suspiciousness_func(location: Tuple[Callable, int]) -> float: return 1.0 assert suspiciousness_func is not None self.suspiciousness_func: Callable = suspiciousness_func if source is None: source = [] self.source = source if self.log > 1: for i, node in enumerate(self.source): print(f"Source for repairs #{i}:") print_content(astor.to_source(node), '.py') print() print() self.mutations = 0_____no_output_____ </code> #### Choosing Suspicious Statements to Mutate We start with deciding which AST nodes to mutate. The method `node_suspiciousness()` returns the suspiciousness for a given node, by invoking the suspiciousness function `suspiciousness_func` given during initialization._____no_output_____ <code> import warnings_____no_output_____class StatementMutator(StatementMutator): def node_suspiciousness(self, stmt: ast.AST, func_name: str) -> float: if not hasattr(stmt, 'lineno'): warnings.warn(f"{self.format_node(stmt)}: Expected line number") return 0.0 suspiciousness = self.suspiciousness_func((func_name, stmt.lineno)) if suspiciousness is None: # not executed return 0.0 return suspiciousness def format_node(self, node: ast.AST) -> str: ..._____no_output_____ </code> The method `node_to_be_mutated()` picks a node (statement) to be mutated. It determines the suspiciousness of all statements, and invokes `random.choices()`, using the suspiciousness as weight. Unsuspicious statements (with zero weight) will not be chosen._____no_output_____ <code> class StatementMutator(StatementMutator): def node_to_be_mutated(self, tree: ast.AST) -> ast.AST: statements = all_statements_and_functions(tree) assert len(statements) > 0, "No statements" weights = [self.node_suspiciousness(stmt, func_name) for stmt, func_name in statements] stmts = [stmt for stmt, func_name in statements] if self.log > 1: print("Weights:") for i, stmt in enumerate(statements): node, func_name = stmt print(f"{weights[i]:.2} {self.format_node(node)}") if sum(weights) == 0.0: # No suspicious line return random.choice(stmts) else: return random.choices(stmts, weights=weights)[0]_____no_output_____ </code> #### Choosing a Mutation Method_____no_output_____The method `visit()` is invoked on all nodes. For nodes marked with a `mutate_me` attribute, it randomly chooses a mutation method (`choose_op()`) and then invokes it on the node. 
According to the rules of `NodeTransformer`, the mutation method can return * a new node or a list of nodes, replacing the current node; * `None`, deleting it; or * the node itself, keeping things as they are._____no_output_____ <code> import re_____no_output_____RE_SPACE = re.compile(r'[ \t\n]+')_____no_output_____class StatementMutator(StatementMutator): def choose_op(self) -> Callable: return random.choice([self.insert, self.swap, self.delete]) def visit(self, node: ast.AST) -> ast.AST: super().visit(node) # Visits (and transforms?) children if not node.mutate_me: # type: ignore return node op = self.choose_op() new_node = op(node) self.mutations += 1 if self.log: print(f"{node.lineno:4}:{op.__name__ + ':':7} " f"{self.format_node(node)} " f"becomes {self.format_node(new_node)}") return new_node_____no_output_____ </code> #### Swapping Statements Our first mutator is `swap()`, which replaces the current node NODE by a random node found in `source` (using a newly defined `choose_statement()`). As a rule of thumb, we try to avoid inserting entire subtrees with all attached statements; and try to respect only the first line of a node. If the new node has the form ```python if P: BODY ``` we thus only insert ```python if P: pass ``` since the statements in BODY have a later chance to get inserted. The same holds for all constructs that have a BODY, i.e. `while`, `for`, `try`, `with`, and more._____no_output_____ <code> class StatementMutator(StatementMutator): def choose_statement(self) -> ast.AST: return copy.deepcopy(random.choice(self.source))_____no_output_____class StatementMutator(StatementMutator): def swap(self, node: ast.AST) -> ast.AST: """Replace `node` with a random node from `source`""" new_node = self.choose_statement() if isinstance(new_node, ast.stmt): # The source `if P: X` is added as `if P: pass` if hasattr(new_node, 'body'): new_node.body = [ast.Pass()] # type: ignore if hasattr(new_node, 'orelse'): new_node.orelse = [] # type: ignore if hasattr(new_node, 'finalbody'): new_node.finalbody = [] # type: ignore # ast.copy_location(new_node, node) return new_node_____no_output_____ </code> #### Inserting Statements Our next mutator is `insert()`, which randomly chooses some node from `source` and inserts it after the current node NODE. (If NODE is a `return` statement, then we insert the new node _before_ NODE.) If the statement to be inserted has the form ```python if P: BODY ``` we only insert the "header" of the `if`, resulting in ```python if P: NODE ``` Again, this applies to all constructs that have a BODY, i.e. `while`, `for`, `try`, `with`, and more._____no_output_____ <code> class StatementMutator(StatementMutator): def insert(self, node: ast.AST) -> Union[ast.AST, List[ast.AST]]: """Insert a random node from `source` after `node`""" new_node = self.choose_statement() if isinstance(new_node, ast.stmt) and hasattr(new_node, 'body'): # Inserting `if P: X` as `if P:` new_node.body = [node] # type: ignore if hasattr(new_node, 'orelse'): new_node.orelse = [] # type: ignore if hasattr(new_node, 'finalbody'): new_node.finalbody = [] # type: ignore # ast.copy_location(new_node, node) return new_node # Only insert before `return`, not after it if isinstance(node, ast.Return): if isinstance(new_node, ast.Return): return new_node else: return [new_node, node] return [node, new_node]_____no_output_____ </code> #### Deleting Statements Our last mutator is `delete()`, which deletes the current node NODE. The standard case is to replace NODE by a `pass` statement. 
If the statement to be deleted has the form ```python if P: BODY ``` we only delete the "header" of the `if`, resulting in ```python BODY ``` Again, this applies to all constructs that have a BODY, i.e. `while`, `for`, `try`, `with`, and more; it also selects a random branch, including `else` branches._____no_output_____ <code> class StatementMutator(StatementMutator): def delete(self, node: ast.AST) -> None: """Delete `node`.""" branches = [attr for attr in ['body', 'orelse', 'finalbody'] if hasattr(node, attr) and getattr(node, attr)] if branches: # Replace `if P: S` by `S` branch = random.choice(branches) new_node = getattr(node, branch) return new_node if isinstance(node, ast.stmt): # Avoid empty bodies; make this a `pass` statement new_node = ast.Pass() ast.copy_location(new_node, node) return new_node return None # Just delete_____no_output_____from bookutils import quiz_____no_output_____quiz("Why are statements replaced by `pass` rather than deleted?", [ "Because `if P: pass` is valid Python, while `if P:` is not", "Because in Python, bodies for `if`, `while`, etc. cannot be empty", "Because a `pass` node makes a target for future mutations", "Because it causes the tests to pass" ], '[3 ^ n for n in range(3)]')_____no_output_____ </code> Indeed, Python's `compile()` will fail if any of the bodies is an empty list. Also, it leaves us a statement that can be evolved further._____no_output_____#### Helpers For logging purposes, we introduce a helper function `format_node()` that returns a short string representation of the node._____no_output_____ <code> class StatementMutator(StatementMutator): NODE_MAX_LENGTH = 20 def format_node(self, node: ast.AST) -> str: """Return a string representation for `node`.""" if node is None: return "None" if isinstance(node, list): return "; ".join(self.format_node(elem) for elem in node) s = RE_SPACE.sub(' ', astor.to_source(node)).strip() if len(s) > self.NODE_MAX_LENGTH - len("..."): s = s[:self.NODE_MAX_LENGTH] + "..." return repr(s)_____no_output_____ </code> #### All Together Let us now create the main entry point, which is `mutate()`. It picks the node to be mutated and marks it with a `mutate_me` attribute. By calling `visit()`, it then sets off the `NodeTransformer` transformation._____no_output_____ <code> class StatementMutator(StatementMutator): def mutate(self, tree: ast.AST) -> ast.AST: """Mutate the given AST `tree` in place. Return mutated tree.""" assert isinstance(tree, ast.AST) tree = copy.deepcopy(tree) if not self.source: self.source = all_statements(tree) for node in ast.walk(tree): node.mutate_me = False # type: ignore node = self.node_to_be_mutated(tree) node.mutate_me = True # type: ignore self.mutations = 0 tree = self.visit(tree) if self.mutations == 0: warnings.warn("No mutations found") ast.fix_missing_locations(tree) return tree_____no_output_____ </code> Here are a number of transformations applied by `StatementMutator`:_____no_output_____ <code> mutator = StatementMutator(log=True) for i in range(10): new_tree = mutator.mutate(middle_tree())_____no_output_____ </code> This is the effect of the last mutator applied on `middle`:_____no_output_____ <code> print_content(astor.to_source(new_tree), '.py')_____no_output_____ </code> ## Fitness Now that we can apply random mutations to code, let us find out how good these mutations are. Given our test suites for `middle`, we can check for a given code candidate how many of the previously passing test cases it passes, and how many of the failing test cases it passes. 
The more tests pass, the higher the _fitness_ of the candidate._____no_output_____Not all passing tests have the same value, though. We want to prevent _regressions_ – that is, having a fix that breaks a previously passing test. The values of `WEIGHT_PASSING` and `WEIGHT_FAILING` set the relative weight (or importance) of passing vs. failing tests; we see that keeping passing tests passing is far more important then fixing failing tests._____no_output_____ <code> WEIGHT_PASSING = 0.99 WEIGHT_FAILING = 0.01_____no_output_____def middle_fitness(tree: ast.AST) -> float: """Compute fitness of a `middle()` candidate given in `tree`""" original_middle = middle try: code = compile(tree, '<fitness>', 'exec') except ValueError: return 0 # Compilation error exec(code, globals()) passing_passed = 0 failing_passed = 0 # Test how many of the passing runs pass for x, y, z in MIDDLE_PASSING_TESTCASES: try: middle_test(x, y, z) passing_passed += 1 except AssertionError: pass passing_ratio = passing_passed / len(MIDDLE_PASSING_TESTCASES) # Test how many of the failing runs pass for x, y, z in MIDDLE_FAILING_TESTCASES: try: middle_test(x, y, z) failing_passed += 1 except AssertionError: pass failing_ratio = failing_passed / len(MIDDLE_FAILING_TESTCASES) fitness = (WEIGHT_PASSING * passing_ratio + WEIGHT_FAILING * failing_ratio) globals()['middle'] = original_middle return fitness_____no_output_____ </code> Our faulty `middle()` program has a fitness of `WEIGHT_PASSING` (99%), because it passes all the passing tests (but none of the failing ones)._____no_output_____ <code> middle_fitness(middle_tree())_____no_output_____ </code> Our "sort of fixed" version of `middle()` gets a much lower fitness:_____no_output_____ <code> middle_fitness(ast.parse("def middle(x, y, z): return x"))_____no_output_____ </code> In the [chapter on statistical debugging](StatisticalDebugger), we also defined a fixed version of `middle()`. This gets a fitness of 1.0, passing all tests. (We won't use this fixed version for automated repairs.)_____no_output_____ <code> from StatisticalDebugger import middle_fixed_____no_output_____middle_fixed_source = \ inspect.getsource(middle_fixed).replace('middle_fixed', 'middle').strip()_____no_output_____middle_fitness(ast.parse(middle_fixed_source))_____no_output_____ </code> ## Population We now set up a _population_ of fix candidates to evolve over time. A higher population size will yield more candidates to check, but also need more time to test; a lower population size will yield fewer candidates, but allow for more evolution steps. We choose a population size of 40 (from \cite{LeGoues2012})._____no_output_____ <code> POPULATION_SIZE = 40 middle_mutator = StatementMutator()_____no_output_____ </code> #TODO: maybe set a seed here, sometime the program gets repaired without evolving, sometimes the repairer seems to produce a solution which does not match the quiz in the next section#_____no_output_____ <code> MIDDLE_POPULATION = [middle_tree()] + \ [middle_mutator.mutate(middle_tree()) for i in range(POPULATION_SIZE - 1)]_____no_output_____ </code> We sort the fix candidates according to their fitness. 
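As an aside, a back-of-the-envelope calculation (illustrative arithmetic only, using the `WEIGHT_PASSING` and `WEIGHT_FAILING` constants from above) shows how this weighting plays out when ranking candidates: a repair may break roughly 1% of the previously passing tests at most before fixing all failing tests stops paying off.

```python
# Illustrative arithmetic only: how the weighting trades regressions against fixes
original    = WEIGHT_PASSING * 1.00 + WEIGHT_FAILING * 0.0  # passes all passing tests: 0.99
candidate_1 = WEIGHT_PASSING * 0.99 + WEIGHT_FAILING * 1.0  # breaks 1% of passing tests,
                                                            # but fixes all failing ones: 0.9901
candidate_2 = WEIGHT_PASSING * 0.98 + WEIGHT_FAILING * 1.0  # breaks 2% of passing tests: 0.9802
candidate_1 > original > candidate_2  # True: tiny regressions are tolerated, larger ones are not
```

Now for the sorting itself.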
This actually runs all tests on all candidates._____no_output_____ <code> MIDDLE_POPULATION.sort(key=middle_fitness, reverse=True)_____no_output_____ </code> The candidate with the highest fitness is still our original (faulty) `middle()` code:_____no_output_____ <code> print(astor.to_source(MIDDLE_POPULATION[0]), middle_fitness(MIDDLE_POPULATION[0]))_____no_output_____ </code> At the other end of the spectrum, the candidate with the lowest fitness has some vital functionality removed:_____no_output_____ <code> print(astor.to_source(MIDDLE_POPULATION[-1]), middle_fitness(MIDDLE_POPULATION[-1]))_____no_output_____ </code> ## Evolution To evolve our population of candidates, we fill up the population with mutations created from the population, using a `StatementMutator` as described above to create these mutations. Then we reduce the population to its original size, keeping the fittest candidates. #TODO: shouldn't there be some kind of randomness to also keep sometimes candidates with lesser fitness#_____no_output_____ <code> def evolve_middle() -> None: global MIDDLE_POPULATION source = all_statements(middle_tree()) mutator = StatementMutator(source=source) n = len(MIDDLE_POPULATION) offspring: List[ast.AST] = [] while len(offspring) < n: parent = random.choice(MIDDLE_POPULATION) offspring.append(mutator.mutate(parent)) MIDDLE_POPULATION += offspring MIDDLE_POPULATION.sort(key=middle_fitness, reverse=True) MIDDLE_POPULATION = MIDDLE_POPULATION[:n]_____no_output_____ </code> This is what happens when evolving our population for the first time; the original source is still our best candidate._____no_output_____ <code> evolve_middle()_____no_output_____tree = MIDDLE_POPULATION[0] print(astor.to_source(tree), middle_fitness(tree))_____no_output_____ </code> However, nothing keeps us from evolving for a few generations more..._____no_output_____ <code> for i in range(50): evolve_middle() best_middle_tree = MIDDLE_POPULATION[0] fitness = middle_fitness(best_middle_tree) print(f"\rIteration {i:2}: fitness = {fitness} ", end="") if fitness >= 1.0: break_____no_output_____ </code> Success! We find a candidate that actually passes all tests, including the failing ones. Here is the candidate:_____no_output_____ <code> print_content(astor.to_source(best_middle_tree), '.py', start_line_number=1)_____no_output_____ </code> ... and yes, it passes all tests:_____no_output_____ <code> original_middle = middle code = compile(best_middle_tree, '<string>', 'exec') exec(code, globals()) for x, y, z in MIDDLE_PASSING_TESTCASES + MIDDLE_FAILING_TESTCASES: middle_test(x, y, z) middle = original_middle_____no_output_____ </code> As the code is already validated by hundreds of test cases, it is very valuable for the programmer. Even if the programmer decides not to use the code as is, the location gives very strong hints on which code to examine and where to apply a fix._____no_output_____However, a closer look at our fix candidate shows that there is some amount of redundancy – that is, superfluous statements._____no_output_____ <code> quiz("Some of the lines in our fix candidate are redundant. 
Which are these?", [ "Line 3: `if x < y`", "Line 4: `if x > z`", "Line 5: `return x`", "Line 13: `return z`" ], '[eval(chr(100 - x)) for x in [49, 50]]')_____no_output_____ </code> ## Simplifying_____no_output_____As demonstrated in the chapter on [reducing failure-inducing inputs](DeltaDebugger.ipynb), we can use delta debugging on code to get rid of these superfluous statements._____no_output_____The trick for simplification is to have the test function (`test_middle_lines()`) declare a fitness of 1.0 as a "failure". Delta debugging will then simplify the input as long as the "failure" (and hence the maximum fitness obtained) persists._____no_output_____ <code> from DeltaDebugger import DeltaDebugger_____no_output_____middle_lines = astor.to_source(best_middle_tree).strip().split('\n')_____no_output_____def test_middle_lines(lines: List[str]) -> None: source = "\n".join(lines) tree = ast.parse(source) assert middle_fitness(tree) < 1.0 # "Fail" only while fitness is 1.0_____no_output_____with DeltaDebugger() as dd: test_middle_lines(middle_lines)_____no_output_____reduced_lines = dd.min_args()['lines']_____no_output_____# assert len(reduced_lines) < len(middle_lines)_____no_output_____reduced_source = "\n".join(reduced_lines)_____no_output_____repaired_source = astor.to_source(ast.parse(reduced_source)) # normalize print_content(repaired_source, '.py')_____no_output_____ </code> Success! Delta Debugging has eliminated the superfluous statements. We can present the difference to the original as a patch:_____no_output_____ <code> original_source = astor.to_source(ast.parse(middle_source)) # normalize_____no_output_____from ChangeDebugger import diff, print_patch # minor dependency_____no_output_____for patch in diff(original_source, repaired_source): print_patch(patch)_____no_output_____ </code> We can present this patch to the programmer, who will then immediately know what to fix in the `middle()` code._____no_output_____## Crossover So far, we have only applied one kind of genetic operators – mutation. There is a second one, though, also inspired by natural selection. The *crossover* operation mutates two strands of genes, as illustrated in the following picture. We have two parents (red and blue), each as a sequence of genes. To create "crossed" chilren, we pick a _crossover point_ and exchange the strands at this very point: ![](https://upload.wikimedia.org/wikipedia/commons/thumb/5/56/OnePointCrossover.svg/500px-OnePointCrossover.svg.png)_____no_output_____We implement a `CrossoverOperator` class that implements such an operation on two randomly chosen statement lists of two programs. It is used as ```python crossover = CrossoverOperator() crossover.crossover(tree_p1, tree_p2) ``` where `tree_p1` and `tree_p2` are two ASTs that are changed in place._____no_output_____### Excursion: Implementing Crossover_____no_output_____#### Crossing Statement Lists_____no_output_____Applied on programs, a crossover mutation takes two parents and "crosses" a list of statements. 
As an example, if our "parents" `p1()` and `p2()` are defined as follows:_____no_output_____ <code> def p1(): # type: ignore a = 1 b = 2 c = 3_____no_output_____def p2(): # type: ignore x = 1 y = 2 z = 3_____no_output_____ </code> Then a crossover operation would produce one child with a body ```python a = 1 y = 2 z = 3 ``` and another child with a body ```python x = 1 b = 2 c = 3 ```_____no_output_____We can easily implement this in a `CrossoverOperator` class in a method `cross_bodies()`._____no_output_____ <code> class CrossoverOperator: """A class for performing statement crossover of Python programs""" def __init__(self, log: bool = False): """Constructor. If `log` is set, turn on logging.""" self.log = log def cross_bodies(self, body_1: List[ast.AST], body_2: List[ast.AST]) -> \ Tuple[List[ast.AST], List[ast.AST]]: """Crossover the statement lists `body_1` x `body_2`. Return new lists.""" assert isinstance(body_1, list) assert isinstance(body_2, list) crossover_point_1 = len(body_1) // 2 crossover_point_2 = len(body_2) // 2 return (body_1[:crossover_point_1] + body_2[crossover_point_2:], body_2[:crossover_point_2] + body_1[crossover_point_1:])_____no_output_____ </code> Here's the `CrossoverOperatorMutator` applied on `p1` and `p2`:_____no_output_____ <code> tree_p1: ast.Module = ast.parse(inspect.getsource(p1)) tree_p2: ast.Module = ast.parse(inspect.getsource(p2))_____no_output_____body_p1 = tree_p1.body[0].body # type: ignore body_p2 = tree_p2.body[0].body # type: ignore body_p1_____no_output_____crosser = CrossoverOperator() tree_p1.body[0].body, tree_p2.body[0].body = crosser.cross_bodies(body_p1, body_p2) # type: ignore_____no_output_____print_content(astor.to_source(tree_p1), '.py')_____no_output_____print_content(astor.to_source(tree_p2), '.py')_____no_output_____ </code> #### Applying Crossover on Programs Applying the crossover operation on arbitrary programs is a bit more complex, though. We first have to _find_ lists of statements that we a actually can cross over. The `can_cross()` method returns True if we have a list of statements that we can cross. Python modules and classes are excluded, because changing the ordering of definitions will not have much impact on the program. #TODO: could actually be harmful because if class definitions depend on each other, changing the odering could crash the program#_____no_output_____ <code> class CrossoverOperator(CrossoverOperator): # In modules and class defs, the ordering of elements does not matter (much) SKIP_LIST = {ast.Module, ast.ClassDef} def can_cross(self, tree: ast.AST, body_attr: str = 'body') -> bool: if any(isinstance(tree, cls) for cls in self.SKIP_LIST): return False body = getattr(tree, body_attr, []) return body and len(body) >= 2_____no_output_____ </code> Here comes our method `crossover_attr()` which searches for crossover possibilities. It takes two ASTs `t1` and `t2` and an attribute (typically `'body'`) and retrieves the attribute lists $l_1$ (from `t1.<attr>`) and $l_2$ (from `t2.<attr>`). If $l_1$ and $l_2$ can be crossed, it crosses them, and is done. Otherwise * If there is a pair of elements $e_1 \in l_1$ and $e_2 \in l_2$ that has the same name – say, functions of the same name –, it applies itself to $e_1$ and $e_2$. * Otherwise, it creates random pairs of elements $e_1 \in l_1$ and $e_2 \in l_2$ and applies itself on these very pairs. 
`crossover_attr()` changes `t1` and `t2` in place and returns True if a crossover was found; it returns False otherwise._____no_output_____ <code> class CrossoverOperator(CrossoverOperator): def crossover_attr(self, t1: ast.AST, t2: ast.AST, body_attr: str) -> bool: """ Crossover the bodies `body_attr` of two trees `t1` and `t2`. Return True if successful. """ assert isinstance(t1, ast.AST) assert isinstance(t2, ast.AST) assert isinstance(body_attr, str) if not getattr(t1, body_attr, None) or not getattr(t2, body_attr, None): return False if self.crossover_branches(t1, t2): return True if self.log > 1: print(f"Checking {t1}.{body_attr} x {t2}.{body_attr}") body_1 = getattr(t1, body_attr) body_2 = getattr(t2, body_attr) # If both trees have the attribute, we can cross their bodies if self.can_cross(t1, body_attr) and self.can_cross(t2, body_attr): if self.log: print(f"Crossing {t1}.{body_attr} x {t2}.{body_attr}") new_body_1, new_body_2 = self.cross_bodies(body_1, body_2) setattr(t1, body_attr, new_body_1) setattr(t2, body_attr, new_body_2) return True # Strategy 1: Find matches in class/function of same name for child_1 in body_1: if hasattr(child_1, 'name'): for child_2 in body_2: if (hasattr(child_2, 'name') and child_1.name == child_2.name): if self.crossover_attr(child_1, child_2, body_attr): return True # Strategy 2: Find matches anywhere for child_1 in random.sample(body_1, len(body_1)): for child_2 in random.sample(body_2, len(body_2)): if self.crossover_attr(child_1, child_2, body_attr): return True return False_____no_output_____ </code> We have a special case for `if` nodes, where we can cross their body and `else` branches. #TODO: in python the same could be possible for while..else and for..else#_____no_output_____ <code> class CrossoverOperator(CrossoverOperator): def crossover_branches(self, t1: ast.AST, t2: ast.AST) -> bool: """Special case: `t1` = `if P: S1 else: S2` x `t2` = `if P': S1' else: S2'` becomes `t1` = `if P: S2' else: S1'` and `t2` = `if P': S2 else: S1` Returns True if successful. """ assert isinstance(t1, ast.AST) assert isinstance(t2, ast.AST) if (hasattr(t1, 'body') and hasattr(t1, 'orelse') and hasattr(t2, 'body') and hasattr(t2, 'orelse')): t1 = cast(ast.If, t1) # keep mypy happy t2 = cast(ast.If, t2) if self.log: print(f"Crossing branches {t1} x {t2}") t1.body, t1.orelse, t2.body, t2.orelse = \ t2.orelse, t2.body, t1.orelse, t1.body return True return False_____no_output_____ </code> The method `crossover()` is the main entry point. It checks for the special `if` case as described above; if not, it searches for possible crossover points. It raises `CrossoverError` if not successful._____no_output_____ <code> class CrossoverOperator(CrossoverOperator): def crossover(self, t1: ast.AST, t2: ast.AST) -> Tuple[ast.AST, ast.AST]: """Do a crossover of ASTs `t1` and `t2`. Raises `CrossoverError` if no crossover is found.""" assert isinstance(t1, ast.AST) assert isinstance(t2, ast.AST) for body_attr in ['body', 'orelse', 'finalbody']: if self.crossover_attr(t1, t2, body_attr): return t1, t2 raise CrossoverError("No crossover found")_____no_output_____class CrossoverError(ValueError): pass_____no_output_____ </code> ### End of Excursion_____no_output_____### Crossover in Action_____no_output_____Let us put our `CrossoverOperator` in action. 
Here is a test case for crossover, involving more deeply nested structures:_____no_output_____ <code> def p1(): # type: ignore if True: print(1) print(2) print(3)_____no_output_____def p2(): # type: ignore if True: print(a) print(b) else: print(c) print(d)_____no_output_____ </code> We invoke the `crossover()` method with two ASTs from `p1` and `p2`:_____no_output_____ <code> crossover = CrossoverOperator() tree_p1 = ast.parse(inspect.getsource(p1)) tree_p2 = ast.parse(inspect.getsource(p2)) crossover.crossover(tree_p1, tree_p2);_____no_output_____ </code> Here is the crossed offspring, mixing statement lists of `p1` and `p2`:_____no_output_____ <code> print_content(astor.to_source(tree_p1), '.py')_____no_output_____print_content(astor.to_source(tree_p2), '.py')_____no_output_____ </code> #TODO: when the orelse in the if-else case is empty you could add a pass, s.t. the crossover ```python def p2(): if True: else: print(1) print(2) print(3) ``` executes#_____no_output_____Here is our special case for `if` nodes in action, crossing our `middle()` tree with `p2`._____no_output_____ <code> middle_t1, middle_t2 = crossover.crossover(middle_tree(), ast.parse(inspect.getsource(p2)))_____no_output_____ </code> We see how the resulting offspring encompasses elements of both sources:_____no_output_____ <code> print_content(astor.to_source(middle_t1), '.py')_____no_output_____print_content(astor.to_source(middle_t2), '.py')_____no_output_____ </code> ## A Repairer Class So far, we have applied all our techniques on the `middle()` program only. Let us now create a `Repairer` class that applies automatic program repair on arbitrary Python programs. The idea is that you can apply it on some statistical debugger, for which you have gathered passing and failing test cases, and then invoke its `repair()` method to find a "best" fix candidate: ```python debugger = OchiaiDebugger() with debugger: <passing test> with debugger: <failing test> ... repairer = Repairer(debugger) repairer.repair() ```_____no_output_____### Excursion: Implementing Repairer_____no_output_____The main argument to the `Repairer` constructor is the `debugger` to get information from. On top of that, it also allows to customize the classes used for mutation, crossover, and reduction. Setting `targets` allows to define a set of functions to repair; setting `sources` allows to set a set of sources to take repairs from. The constructor then sets up the environment for running tests and repairing, as described below._____no_output_____ <code> from StackInspector import StackInspector # minor dependency_____no_output_____class Repairer(StackInspector): """A class for automatic repair of Python programs""" def __init__(self, debugger: RankingDebugger, *, targets: Optional[List[Any]] = None, sources: Optional[List[Any]] = None, log: Union[bool, int] = False, mutator_class: Type = StatementMutator, crossover_class: Type = CrossoverOperator, reducer_class: Type = DeltaDebugger, globals: Optional[Dict[str, Any]] = None): """Constructor. `debugger`: a `RankingDebugger` to take tests and coverage from. `targets`: a list of functions/modules to be repaired. (default: the covered functions in `debugger`, except tests) `sources`: a list of functions/modules to take repairs from. 
(default: same as `targets`) `globals`: if given, a `globals()` dict for executing targets (default: `globals()` of caller)""" assert isinstance(debugger, RankingDebugger) self.debugger = debugger self.log = log if targets is None: targets = self.default_functions() if not targets: raise ValueError("No targets to repair") if sources is None: sources = self.default_functions() if not sources: raise ValueError("No sources to take repairs from") if self.debugger.function() is None: raise ValueError("Multiple entry points observed") self.target_tree: ast.AST = self.parse(targets) self.source_tree: ast.AST = self.parse(sources) self.log_tree("Target code to be repaired:", self.target_tree) if ast.dump(self.target_tree) != ast.dump(self.source_tree): self.log_tree("Source code to take repairs from:", self.source_tree) self.fitness_cache: Dict[str, float] = {} self.mutator: StatementMutator = \ mutator_class( source=all_statements(self.source_tree), suspiciousness_func=self.debugger.suspiciousness, log=(self.log >= 3)) self.crossover: CrossoverOperator = crossover_class(log=(self.log >= 3)) self.reducer: DeltaDebugger = reducer_class(log=(self.log >= 3)) if globals is None: globals = self.caller_globals() # see below self.globals = globals_____no_output_____ </code> When we access or execute functions, we do so in the caller's environment, not ours. The `caller_globals()` method from `StackInspector` acts as a replacement for `globals()`._____no_output_____#### Helper Functions The constructor uses a number of helper functions to create its environment._____no_output_____ <code> class Repairer(Repairer): def getsource(self, item: Union[str, Any]) -> str: """Get the source for `item`. Can also be a string.""" if isinstance(item, str): item = self.globals[item] return inspect.getsource(item)_____no_output_____class Repairer(Repairer): def default_functions(self) -> List[Callable]: """Return the set of functions to be repaired. Functions whose names start or end in `test` are excluded.""" def is_test(name: str) -> bool: return name.startswith('test') or name.endswith('test') return [func for func in self.debugger.covered_functions() if not is_test(func.__name__)]_____no_output_____class Repairer(Repairer): def log_tree(self, description: str, tree: Any) -> None: """Print out `tree` as source code prefixed by `description`.""" if self.log: print(description) print_content(astor.to_source(tree), '.py') print() print()_____no_output_____class Repairer(Repairer): def parse(self, items: List[Any]) -> ast.AST: """Read in a list of items into a single tree""" tree = ast.parse("") for item in items: if isinstance(item, str): item = self.globals[item] item_lines, item_first_lineno = inspect.getsourcelines(item) try: item_tree = ast.parse("".join(item_lines)) except IndentationError: # inner function or likewise warnings.warn(f"Can't parse {item.__name__}") continue ast.increment_lineno(item_tree, item_first_lineno - 1) tree.body += item_tree.body return tree_____no_output_____ </code> #### Running Tests Now that we have set the environment for `Repairer`, we can implement one step of automatic repair after the other. The method `run_test_set()` runs the given `test_set` (`DifferenceDebugger.PASS` or `DifferenceDebugger.FAIL`), returning the number of passed tests.
If `validate` is set, it checks whether the outcomes are as expected._____no_output_____ <code> class Repairer(Repairer): def run_test_set(self, test_set: str, validate: bool = False) -> int: """ Run given `test_set` (`DifferenceDebugger.PASS` or `DifferenceDebugger.FAIL`). If `validate` is set, check expectations. Return number of passed tests. """ passed = 0 collectors = self.debugger.collectors[test_set] function = self.debugger.function() assert function is not None # FIXME: function may have been redefined for c in collectors: if self.log >= 4: print(f"Testing {c.id()}...", end="") try: function(**c.args()) except Exception as err: if self.log >= 4: print(f"failed ({err.__class__.__name__})") if validate and test_set == self.debugger.PASS: raise err.__class__( f"{c.id()} should have passed, but failed") continue passed += 1 if self.log >= 4: print("passed") if validate and test_set == self.debugger.FAIL: raise FailureNotReproducedError( f"{c.id()} should have failed, but passed") return passed_____no_output_____class FailureNotReproducedError(ValueError): pass_____no_output_____ </code> Here is how we use `run_tests_set()`:_____no_output_____ <code> repairer = Repairer(middle_debugger) assert repairer.run_test_set(middle_debugger.PASS) == \ len(MIDDLE_PASSING_TESTCASES) assert repairer.run_test_set(middle_debugger.FAIL) == 0_____no_output_____ </code> The method `run_tests()` runs passing and failing tests, weighing the passed testcases to obtain the overall fitness._____no_output_____ <code> class Repairer(Repairer): def weight(self, test_set: str) -> float: """ Return the weight of `test_set` (`DifferenceDebugger.PASS` or `DifferenceDebugger.FAIL`). """ return { self.debugger.PASS: WEIGHT_PASSING, self.debugger.FAIL: WEIGHT_FAILING }[test_set] def run_tests(self, validate: bool = False) -> float: """Run passing and failing tests, returning weighted fitness.""" fitness = 0.0 for test_set in [self.debugger.PASS, self.debugger.FAIL]: passed = self.run_test_set(test_set, validate=validate) ratio = passed / len(self.debugger.collectors[test_set]) fitness += self.weight(test_set) * ratio return fitness_____no_output_____ </code> The method `validate()` ensures the observed tests can be adequately reproduced._____no_output_____ <code> class Repairer(Repairer): def validate(self) -> None: fitness = self.run_tests(validate=True) assert fitness == self.weight(self.debugger.PASS)_____no_output_____repairer = Repairer(middle_debugger) repairer.validate()_____no_output_____ </code> #### (Re)defining Functions Our `run_tests()` methods above do not yet redefine the function to be repaired. This is done by the `fitness()` function, which compiles and defines the given repair candidate `tree` before testing it. 
It caches and returns the fitness._____no_output_____ <code> class Repairer(Repairer): def fitness(self, tree: ast.AST) -> float: """Test `tree`, returning its fitness""" key = cast(str, ast.dump(tree)) if key in self.fitness_cache: return self.fitness_cache[key] # Save defs original_defs: Dict[str, Any] = {} for name in self.toplevel_defs(tree): if name in self.globals: original_defs[name] = self.globals[name] else: warnings.warn(f"Couldn't find definition of {repr(name)}") assert original_defs, f"Couldn't find any definition" if self.log >= 3: print("Repair candidate:") print_content(astor.to_source(tree), '.py') print() # Create new definition try: code = compile(tree, '<Repairer>', 'exec') except ValueError: # Compilation error code = None if code is None: if self.log >= 3: print(f"Fitness = 0.0 (compilation error)") fitness = 0.0 return fitness # Execute new code, defining new functions in `self.globals` exec(code, self.globals) # Set new definitions in the namespace (`__globals__`) # of the function we will be calling. function = self.debugger.function() assert function is not None assert hasattr(function, '__globals__') for name in original_defs: function.__globals__[name] = self.globals[name] # type: ignore fitness = self.run_tests(validate=False) # Restore definitions for name in original_defs: function.__globals__[name] = original_defs[name] # type: ignore self.globals[name] = original_defs[name] if self.log >= 3: print(f"Fitness = {fitness}") self.fitness_cache[key] = fitness return fitness_____no_output_____ </code> The helper function `toplevel_defs()` helps saving and restoring the environment before and after redefining the function under repair._____no_output_____ <code> class Repairer(Repairer): def toplevel_defs(self, tree: ast.AST) -> List[str]: """Return a list of names of defined functions and classes in `tree`""" visitor = DefinitionVisitor() visitor.visit(tree) assert hasattr(visitor, 'definitions') return visitor.definitions_____no_output_____class DefinitionVisitor(NodeVisitor): def __init__(self) -> None: self.definitions: List[str] = [] def add_definition(self, node: Union[ast.ClassDef, ast.FunctionDef, ast.AsyncFunctionDef]) -> None: self.definitions.append(node.name) def visit_FunctionDef(self, node: ast.FunctionDef) -> None: self.add_definition(node) def visit_AsyncFunctionDef(self, node: ast.AsyncFunctionDef) -> None: self.add_definition(node) def visit_ClassDef(self, node: ast.ClassDef) -> None: self.add_definition(node)_____no_output_____ </code> Here's an example for `fitness()`:_____no_output_____ <code> repairer = Repairer(middle_debugger, log=1)_____no_output_____good_fitness = repairer.fitness(middle_tree()) good_fitness_____no_output_____# ignore assert good_fitness >= 0.99, "fitness() failed"_____no_output_____bad_middle_tree = ast.parse("def middle(x, y, z): return x") bad_fitness = repairer.fitness(bad_middle_tree) bad_fitness_____no_output_____# ignore assert bad_fitness < 0.5, "fitness() failed"_____no_output_____ </code> #### Repairing Now for the actual `repair()` method, which creates a `population` and then evolves it until the fitness is 1.0 or the given number of iterations is spent._____no_output_____ <code> import traceback_____no_output_____class Repairer(Repairer): def initial_population(self, size: int) -> List[ast.AST]: """Return an initial population of size `size`""" return [self.target_tree] + \ [self.mutator.mutate(copy.deepcopy(self.target_tree)) for i in range(size - 1)] def repair(self, population_size: int = POPULATION_SIZE, 
iterations: int = 100) -> \ Tuple[ast.AST, float]: """ Repair the function we collected test runs from. Use a population size of `population_size` and at most `iterations` iterations. Returns a pair (`ast`, `fitness`) where `ast` is the AST of the repaired function, and `fitness` is its fitness (between 0 and 1.0) """ self.validate() population = self.initial_population(population_size) last_key = ast.dump(self.target_tree) for iteration in range(iterations): population = self.evolve(population) best_tree = population[0] fitness = self.fitness(best_tree) if self.log: print(f"Evolving population: " f"iteration{iteration:4}/{iterations} " f"fitness = {fitness:.5} \r", end="") if self.log >= 2: best_key = ast.dump(best_tree) if best_key != last_key: print() print() self.log_tree(f"New best code (fitness = {fitness}):", best_tree) last_key = best_key if fitness >= 1.0: break if self.log: print() if self.log and self.log < 2: self.log_tree(f"Best code (fitness = {fitness}):", best_tree) best_tree = self.reduce(best_tree) fitness = self.fitness(best_tree) self.log_tree(f"Reduced code (fitness = {fitness}):", best_tree) return best_tree, fitness_____no_output_____ </code> #### Evolving The evolution of our population takes place in the `evolve()` method. In contrast to the `evolve_middle()` function, above, we use crossover to create the offspring, which we still mutate afterwards._____no_output_____ <code> class Repairer(Repairer): def evolve(self, population: List[ast.AST]) -> List[ast.AST]: """Evolve the candidate population by mutating and crossover.""" n = len(population) # Create offspring as crossover of parents offspring: List[ast.AST] = [] while len(offspring) < n: parent_1 = copy.deepcopy(random.choice(population)) parent_2 = copy.deepcopy(random.choice(population)) try: self.crossover.crossover(parent_1, parent_2) except CrossoverError: pass # Just keep parents offspring += [parent_1, parent_2] # Mutate offspring offspring = [self.mutator.mutate(tree) for tree in offspring] # Add it to population population += offspring # Keep the fitter part of the population population.sort(key=self.fitness_key, reverse=True) population = population[:n] return population_____no_output_____ </code> A second difference is that we not only sort by fitness, but also by tree size – with equal fitness, a smaller tree thus will be favored. This helps keeping fixes and patches small._____no_output_____ <code> class Repairer(Repairer): def fitness_key(self, tree: ast.AST) -> Tuple[float, int]: """Key to be used for sorting the population""" tree_size = len([node for node in ast.walk(tree)]) return (self.fitness(tree), -tree_size)_____no_output_____ </code> #### Simplifying The last step in repairing is simplifying the code. As demonstrated in the chapter on [reducing failure-inducing inputs](DeltaDebugger.ipynb), we can use delta debugging on code to get rid of superfluous statements. 
To this end, we convert the tree to lines, run delta debugging on them, and then convert it back to a tree._____no_output_____ <code> class Repairer(Repairer): def reduce(self, tree: ast.AST) -> ast.AST: """Simplify `tree` using delta debugging.""" original_fitness = self.fitness(tree) source_lines = astor.to_source(tree).split('\n') with self.reducer: self.test_reduce(source_lines, original_fitness) reduced_lines = self.reducer.min_args()['source_lines'] reduced_source = "\n".join(reduced_lines) return ast.parse(reduced_source)_____no_output_____ </code> As dicussed above, we simplify the code by having the test function (`test_reduce()`) declare reaching the maximum fitness obtained so far as a "failure". Delta debugging will then simplify the input as long as the "failure" (and hence the maximum fitness obtained) persists._____no_output_____ <code> class Repairer(Repairer): def test_reduce(self, source_lines: List[str], original_fitness: float) -> None: """Test function for delta debugging.""" try: source = "\n".join(source_lines) tree = ast.parse(source) fitness = self.fitness(tree) assert fitness < original_fitness except AssertionError: raise except SyntaxError: raise except IndentationError: raise except Exception: # traceback.print_exc() # Uncomment to see internal errors raise_____no_output_____ </code> ### End of Excursion_____no_output_____### Repairer in Action Let us go and apply `Repairer` in practice. We initialize it with `middle_debugger`, which has (still) collected the passing and failing runs for `middle_test()`. We also set `log` for some diagnostics along the way._____no_output_____ <code> repairer = Repairer(middle_debugger, log=True)_____no_output_____ </code> We now invoke `repair()` to evolve our population. After a few iterations, we find a best tree with perfect fitness._____no_output_____ <code> best_tree, fitness = repairer.repair()_____no_output_____print_content(astor.to_source(best_tree), '.py')_____no_output_____fitness_____no_output_____ </code> Again, we have a perfect solution. Here, we did not even need to simplify the code in the last iteration, as our `fitness_key()` function favors smaller implementations._____no_output_____## Removing HTML Markup Let us apply `Repairer` on our other ongoing example, namely `remove_html_markup()`._____no_output_____ <code> def remove_html_markup(s): # type: ignore tag = False quote = False out = "" for c in s: if c == '<' and not quote: tag = True elif c == '>' and not quote: tag = False elif c == '"' or c == "'" and tag: quote = not quote elif not tag: out = out + c return out_____no_output_____def remove_html_markup_tree() -> ast.AST: return ast.parse(inspect.getsource(remove_html_markup))_____no_output_____ </code> To run `Repairer` on `remove_html_markup()`, we need a test and a test suite. `remove_html_markup_test()` raises an exception if applying `remove_html_markup()` on the given `html` string does not yield the `plain` string._____no_output_____ <code> def remove_html_markup_test(html: str, plain: str) -> None: outcome = remove_html_markup(html) assert outcome == plain, \ f"Got {repr(outcome)}, expected {repr(plain)}"_____no_output_____ </code> Now for the test suite. 
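Before generating test cases automatically, here is a quick sanity check of the test function with two hand-picked inputs (illustrative only; the `ExpectError` context manager is used just as in the cells below): stripping regular markup passes, while a double quote in the plain text triggers the known bug.

```python
# Hand-picked sanity checks (not part of the generated test suites)
remove_html_markup_test('<b>foo</b>', 'foo')  # markup stripped as expected

with ExpectError():  # the double quote in the plain text gets swallowed
    remove_html_markup_test('<b>"foo"</b>', '"foo"')
```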
We use a simple fuzzing scheme to create dozens of passing and failing test cases in `REMOVE_HTML_PASSING_TESTCASES` and `REMOVE_HTML_FAILING_TESTCASES`, respectively._____no_output_____### Excursion: Creating HTML Test Cases_____no_output_____ <code> def random_string(length: int = 5, start: int = ord(' '), end: int = ord('~')) -> str: return "".join(chr(random.randrange(start, end + 1)) for i in range(length))_____no_output_____random_string()_____no_output_____def random_id(length: int = 2) -> str: return random_string(start=ord('a'), end=ord('z'))_____no_output_____random_id()_____no_output_____def random_plain() -> str: return random_string().replace('<', '').replace('>', '')_____no_output_____def random_string_noquotes() -> str: return random_string().replace('"', '').replace("'", '')_____no_output_____def random_html(depth: int = 0) -> Tuple[str, str]: prefix = random_plain() tag = random_id() if depth > 0: html, plain = random_html(depth - 1) else: html = plain = random_plain() attr = random_id() value = '"' + random_string_noquotes() + '"' postfix = random_plain() return f'{prefix}<{tag} {attr}={value}>{html}</{tag}>{postfix}', \ prefix + plain + postfix_____no_output_____random_html()_____no_output_____def remove_html_testcase(expected: bool = True) -> Tuple[str, str]: while True: html, plain = random_html() outcome = (remove_html_markup(html) == plain) if outcome == expected: return html, plain_____no_output_____REMOVE_HTML_TESTS = 100 REMOVE_HTML_PASSING_TESTCASES = \ [remove_html_testcase(True) for i in range(REMOVE_HTML_TESTS)] REMOVE_HTML_FAILING_TESTCASES = \ [remove_html_testcase(False) for i in range(REMOVE_HTML_TESTS)]_____no_output_____ </code> ### End of Excursion_____no_output_____Here is a passing test case:_____no_output_____ <code> REMOVE_HTML_PASSING_TESTCASES[0]_____no_output_____html, plain = REMOVE_HTML_PASSING_TESTCASES[0] remove_html_markup_test(html, plain)_____no_output_____ </code> Here is a failing test case (containing a double quote in the plain text)_____no_output_____ <code> REMOVE_HTML_FAILING_TESTCASES[0]_____no_output_____with ExpectError(): html, plain = REMOVE_HTML_FAILING_TESTCASES[0] remove_html_markup_test(html, plain)_____no_output_____ </code> We run our tests, collecting the outcomes in `html_debugger`._____no_output_____ <code> html_debugger = OchiaiDebugger()_____no_output_____for html, plain in (REMOVE_HTML_PASSING_TESTCASES + REMOVE_HTML_FAILING_TESTCASES): with html_debugger: remove_html_markup_test(html, plain)_____no_output_____ </code> The suspiciousness distribution will not be of much help here – pretty much all lines in `remove_html_markup()` have the same suspiciousness._____no_output_____ <code> html_debugger_____no_output_____ </code> Let us create our repairer and run it._____no_output_____ <code> html_repairer = Repairer(html_debugger, log=True)_____no_output_____best_tree, fitness = html_repairer.repair(iterations=20)_____no_output_____ </code> We see that the "best" code is still our original code, with no changes. And we can set `iterations` to 50, 100, 200... 
– our `Repairer` won't be able to repair it._____no_output_____ <code> quiz("Why couldn't `Repairer()` repair `remove_html_markup()`?", [ "The population is too small!", "The suspiciousness is too evenly distributed!", "We need more test cases!", "We need more iterations!", "There is no statement in the source with a correct condition!", "The population is too big!", ], '5242880 >> 20')_____no_output_____ </code> You can explore all of the hypotheses above by changing the appropriate parameters, but you won't be able to change the outcome. The problem is that, unlike `middle()`, there is no statement (or combination thereof) in `remove_html_markup()` that could be used to make the failure go away. For this, we need to mutate another aspect of the code, which we will explore in the next section._____no_output_____## Mutating Conditions The `Repairer` class is very configurable. The individual steps in automated repair can all be replaced by providing own classes in the keyword arguments of its `__init__()` constructor: * To change fault localization, pass a different `debugger` that is a subclass of `RankingDebugger`. * To change the mutation operator, set `mutator_class` to a subclass of `StatementMutator`. * To change the crossover operator, set `crossover_class` to a subclass of `CrossoverOperator`. * To change the reduction algorithm, set `reducer_class` to a subclass of `Reducer`. In this section, we will explore how to extend the mutation operator such that it can mutate _conditions_ for control constructs such as `if`, `while`, or `for`. To this end, we introduce a new class `ConditionMutator` subclassing `StatementMutator`._____no_output_____### Collecting Conditions Let us start with a few simple supporting functions. The function `all_conditions()` retrieves all control conditions from an AST._____no_output_____ <code> def all_conditions(trees: Union[ast.AST, List[ast.AST]], tp: Optional[Type] = None) -> List[ast.expr]: """ Return all conditions from the AST (or AST list) `trees`. If `tp` is given, return only elements of that type. """ if not isinstance(trees, list): assert isinstance(trees, ast.AST) trees = [trees] visitor = ConditionVisitor() for tree in trees: visitor.visit(tree) conditions = visitor.conditions if tp is not None: conditions = [c for c in conditions if isinstance(c, tp)] return conditions_____no_output_____ </code> `all_conditions()` uses a `ConditionVisitor` class to walk the tree and collect the conditions:_____no_output_____ <code> class ConditionVisitor(NodeVisitor): def __init__(self) -> None: self.conditions: List[ast.expr] = [] self.conditions_seen: Set[str] = set() super().__init__() def add_conditions(self, node: ast.AST, attr: str) -> None: elems = getattr(node, attr, []) if not isinstance(elems, list): elems = [elems] elems = cast(List[ast.expr], elems) for elem in elems: elem_str = astor.to_source(elem) if elem_str not in self.conditions_seen: self.conditions.append(elem) self.conditions_seen.add(elem_str) def visit_BoolOp(self, node: ast.BoolOp) -> ast.AST: self.add_conditions(node, 'values') return super().generic_visit(node) def visit_UnaryOp(self, node: ast.UnaryOp) -> ast.AST: if isinstance(node.op, ast.Not): self.add_conditions(node, 'operand') return super().generic_visit(node) def generic_visit(self, node: ast.AST) -> ast.AST: if hasattr(node, 'test'): self.add_conditions(node, 'test') return super().generic_visit(node)_____no_output_____ </code> Here are all the conditions in `remove_html_markup()`. 
This is some material to construct new conditions from._____no_output_____ <code> [astor.to_source(cond).strip() for cond in all_conditions(remove_html_markup_tree())]_____no_output_____ </code> ### Mutating Conditions Here comes our `ConditionMutator` class. We subclass from `StatementMutator` and set an attribute `self.conditions` containing all the conditions in the source. The method `choose_condition()` randomly picks a condition._____no_output_____ <code> class ConditionMutator(StatementMutator): """Mutate conditions in an AST""" def __init__(self, *args: Any, **kwargs: Any) -> None: """Constructor. Arguments are as with `StatementMutator` constructor.""" super().__init__(*args, **kwargs) self.conditions = all_conditions(self.source) if self.log: print("Found conditions", [astor.to_source(cond).strip() for cond in self.conditions]) def choose_condition(self) -> ast.expr: """Return a random condition from source.""" return copy.deepcopy(random.choice(self.conditions))_____no_output_____ </code> The actual mutation takes place in the `swap()` method. If the node to be replaced has a `test` attribute (i.e. a controlling predicate), then we pick a random condition `cond` from the source and randomly chose from: * **set**: We change `test` to `cond`. * **not**: We invert `test`. * **and**: We replace `test` by `cond and test`. * **or**: We replace `test` by `cond or test`. Over time, this might lead to operators propagating across the population._____no_output_____ <code> class ConditionMutator(ConditionMutator): def choose_bool_op(self) -> str: return random.choice(['set', 'not', 'and', 'or']) def swap(self, node: ast.AST) -> ast.AST: """Replace `node` condition by a condition from `source`""" if not hasattr(node, 'test'): return super().swap(node) node = cast(ast.If, node) cond = self.choose_condition() new_test = None choice = self.choose_bool_op() if choice == 'set': new_test = cond elif choice == 'not': new_test = ast.UnaryOp(op=ast.Not(), operand=node.test) elif choice == 'and': new_test = ast.BoolOp(op=ast.And(), values=[cond, node.test]) elif choice == 'or': new_test = ast.BoolOp(op=ast.Or(), values=[cond, node.test]) else: raise ValueError("Unknown boolean operand") if new_test: # ast.copy_location(new_test, node) node.test = new_test return node_____no_output_____ </code> We can use the mutator just like `StatementMutator`, except that some of the mutations will also include new conditions:_____no_output_____ <code> mutator = ConditionMutator(source=all_statements(remove_html_markup_tree()), log=True)_____no_output_____for i in range(10): new_tree = mutator.mutate(remove_html_markup_tree())_____no_output_____ </code> Let us put our new mutator to action, again in a `Repairer()`. To activate it, all we need to do is to pass it as `mutator_class` keyword argument._____no_output_____ <code> condition_repairer = Repairer(html_debugger, mutator_class=ConditionMutator, log=2)_____no_output_____ </code> We might need more iterations for this one. Let us see..._____no_output_____ <code> best_tree, fitness = condition_repairer.repair(iterations=200)_____no_output_____repaired_source = astor.to_source(best_tree)_____no_output_____print_content(repaired_source, '.py')_____no_output_____ </code> Success again! 
We have automatically repaired `remove_html_markup()` – the resulting code passes all tests, including those that were previously failing._____no_output_____Again, we can present the fix as a patch:_____no_output_____ <code> original_source = astor.to_source(remove_html_markup_tree())_____no_output_____for patch in diff(original_source, repaired_source): print_patch(patch)_____no_output_____ </code> However, looking at the patch, one may come up with doubts._____no_output_____ <code> quiz("Is this actually the best solution?", [ "Yes, sure, of course. Why?", "Err - what happened to single quotes?" ], 1 << 1)_____no_output_____ </code> Indeed – our solution does not seem to handle single quotes anymore. Why is that so? #TODO: the solution I've got still handles single quotes, so maybe again set a corresponding seed#_____no_output_____ <code> quiz("Why aren't single quotes handled in the solution?", [ "Because they're not important. I mean, who uses 'em anyway?", "Because they are not part of our tests? " "Let me look up how they are constructed..." ], 1 << 1)_____no_output_____ </code> Correct! Our test cases do not include single quotes – at least not in the interior of HTML tags – and thus, automatic repair did not care to preserve their handling._____no_output_____How can we fix this? An easy way is to include an appropriate test case in our set – a test case that passes with the original `remove_html_markup()`, yet fails with the "repaired" `remove_html_markup()` as whosn above._____no_output_____ <code> with html_debugger: remove_html_markup_test("<foo quote='>abc'>me</foo>", "me")_____no_output_____ </code> Let us repeat the repair with the extended test set:_____no_output_____ <code> best_tree, fitness = condition_repairer.repair(iterations=200)_____no_output_____ </code> Here is the final tree:_____no_output_____ <code> print_content(astor.to_source(best_tree), '.py')_____no_output_____ </code> And here is its fitness:_____no_output_____ <code> fitness_____no_output_____ </code> The revised candidate now passes _all_ tests (including the tricky quote test we added last). Its condition now properly checks for `tag` _and_ both quotes. (The `tag` inside the parentheses is still redundant, but so be it.) From this example, we can learn a few lessons about the possibilities and risks of automated repair: * First, automatic repair is highly dependent on the quality of the checking tests. The risk is that the repair may overspecialize towards the test. * Second, automated repair is highly dependent on the sources that program fragments are chosen from. If there is a hint of a solution somewhere in the code, there is a chance that automated repair will catch it up. #TODO: only applies to the presented technique# * Third, automatic repair is a deeply heuristic approach. Its behavior will vary widely with any change to the parameters (and the underlying random number generators) * Fourth, automatic repair can take a long time. The examples we have in this chapter take less than a minute to compute, and neither Python nor our implementation is exactly fast. But as the search space grows, automated repair will take much longer. On the other hand, even an incomplete automated repair candidate can be much better than nothing at all – it may provide all the essential ingredients (such as the location or the involved variables) for a successful fix. When users of automated repair techniques are aware of its limitations and its assumptions, there is lots of potential in automated repair. 
Enjoy!_____no_output_____## Limitations_____no_output_____The `Repairer` class is hardly tested. Things that do not work include * Functions with inner functions are not repaired._____no_output_____## Synopsis_____no_output_____This chapter provides tools and techniques for automated repair of program code. The `Repairer()` class takes a `RankingDebugger` debugger as input (such as `OchiaiDebugger` from [the chapter on statistical debugging](StatisticalDebugger.ipynb). A typical setup looks like this: ```python from debuggingbook.StatisticalDebugger import OchiaiDebugger debugger = OchiaiDebugger() for inputs in TESTCASES: with debugger: test_foo(inputs) ... repairer = Repairer(debugger) ``` Here, `test_foo()` is a function that raises an exception if the tested function `foo()` fails. If `foo()` passes, `test_foo()` should not raise an exception._____no_output_____The `repair()` method of a `Repairer` searches for a repair of the code covered in the debugger (except for methods starting or ending in `test`, such that `foo()`, not `test_foo()` is repaired). `repair()` returns the best fix candidate as a pair `(tree, fitness)` where `tree` is a [Python abstract syntax tree](http://docs.python.org/3/library/ast) (AST) of the fix candidate, and `fitness` is the fitness of the candidate (a value between 0 and 1). A `fitness` of 1.0 means that the candidate passed all tests. A typical usage looks like this: ```python import astor tree, fitness = repairer.repair() print(astor.to_source(tree), fitness) ```_____no_output_____Here is a complete example for the `middle()` program. This is the original source code of `middle()`:_____no_output_____ <code> # ignore print_content(middle_source, '.py')_____no_output_____ </code> We set up a function `middle_test()` that tests it. The `middle_debugger` collects testcases and outcomes:_____no_output_____ <code> middle_debugger = OchiaiDebugger()_____no_output_____for x, y, z in MIDDLE_PASSING_TESTCASES + MIDDLE_FAILING_TESTCASES: with middle_debugger: middle_test(x, y, z)_____no_output_____ </code> The repairer attempts to repair the invoked function (`middle()`). The returned AST `tree` can be output via `astor.to_source()`:_____no_output_____ <code> middle_repairer = Repairer(middle_debugger) tree, fitness = middle_repairer.repair() print(astor.to_source(tree))_____no_output_____ </code> The `fitness` value shows how well the repaired program fits the tests. A fitness value of 1.0 shows that the repaired program satisfies all tests._____no_output_____ <code> fitness_____no_output_____# ignore assert fitness >= 1.0_____no_output_____ </code> Hence, the above program indeed is a perfect repair in the sense that all previously failing tests now pass – our repair was successful._____no_output_____Here are the classes defined in this chapter. A `Repairer` repairs a program, using a `StatementMutator` and a `CrossoverOperator` to evolve a population of candidates._____no_output_____ <code> # ignore from ClassDiagram import display_class_hierarchy_____no_output_____# ignore display_class_hierarchy([Repairer, ConditionMutator, CrossoverOperator], abstract_classes=[ NodeVisitor, NodeTransformer ], public_methods=[ Repairer.__init__, Repairer.repair, StatementMutator.__init__, StatementMutator.mutate, ConditionMutator.__init__, CrossoverOperator.__init__, CrossoverOperator.crossover, ], project='debuggingbook')_____no_output_____ </code> ## Lessons Learned * Automated repair based on genetic optimization uses five ingredients: 1. 
A _test suite_ to determine passing and failing tests 2. _Defect localization_ (typically obtained from [statistical debugging](StatisticalDebugger.ipynb) with the test suite) to determine potential locations to be fixed 3. _Random code mutations_ and _crossover operations_ to create and evolve a population of inputs 4. A _fitness function_ and a _selection strategy_ to determine the part of the population that should be evolved further 5. A _reducer_ such as [delta debugging](DeltaDebugger.ipynb) to simplify the final candidate with the highest fitness. * The result of automated repair is a _fix candidate_ with the highest fitness for the given tests. * A _fix candidate_ is not guaranteed to be correct or optimal, but gives important hints on how to fix the program. * All of the above ingredients offer plenty of settings and alternatives to experiment with._____no_output_____## Background The seminal work in automated repair is [GenProg](https://squareslab.github.io/genprog-code/) \cite{LeGoues2012}, which heavily inspired our `Repairer` implementation. Major differences between GenProg and `Repairer` include: * GenProg includes its own defect localization (which is also dynamically updated), whereas `Repairer` builds on earlier statistical debugging. * GenProg can apply multiple mutations on programs (or none at all), whereas `Repairer` applies exactly one mutation. * The `StatementMutator` used by `Repairer` includes various special cases for program structures (`if`, `for`, `while`...), whereas GenProg operates on statements only. * GenProg has been tested on large production programs. While GenProg is _the_ seminal work in the area (and arguably the most important software engineering research contribution of the 2010s), there have been a number of important extensions of automated repair. These include: * *AutoFix* \cite{Pei2014} leverages _program contracts_ (pre- and postconditions) to generate tests and assertions automatically. Not only do such [assertions](Assertions.ipynb) help in fault localization, they also allow for much better validation of fix candidates. * *SemFix* \cite{Nguyen2013} presents automated program repair based on _symbolic analysis_ rather than genetic optimization. This allows to leverage program semantics, which GenProg does not consider. #TODO: SemFix already have a successor Angelix ([http://angelix.io](http://angelix.io))# To learn more about automated program repair, see [program-repair.org](http://program-repair.org), the community page dedicated to research in program repair._____no_output_____## Exercises_____no_output_____### Exercise 1: Automated Repair Parameters Automated Repair is influenced by a large number of design choices – the size of the population, the number of iterations, the genetic optimization strategy, and more. How do changes to these design choices affect its effectiveness? * Consider the constants defined in this chapter (such as `POPULATION_SIZE` or `WEIGHT_PASSING` vs. `WEIGHT_FAILING`). How do changes affect the effectiveness of automated repair? * As an effectiveness metric, consider the number of iterations it takes to produce a fix candidate. 
* Since genetic optimization is a random algorithm, you need to determine effectiveness averages over a large number of runs (say, 100)._____no_output_____### Exercise 2: Elitism [_Elitism_](https://en.wikipedia.org/wiki/Genetic_algorithm#Elitism) (also known as _elitist selection_) is a variant of genetic selection in which a small fraction of the fittest candidates of the last population are included unchanged in the offspring. * Implement elitist selection by subclassing the `evolve()` method. Experiment with various fractions (5%, 10%, 25%) of "elites" and see how this improves results._____no_output_____### Exercise 3: Evolving Values Following the steps of `ConditionMutator`, implement a `ValueMutator` class that replaces one constant value by another one found in the source (say, `0` by `1` or `True` by `False`). For validation, consider the following failure in the `square_root()` function from [the chapter on assertions](Assertions.ipynb):_____no_output_____ <code> from Assertions import square_root # minor dependency_____no_output_____with ExpectError(): square_root_of_zero = square_root(0)_____no_output_____ </code> Can your `ValueMutator` automatically fix this failure?_____no_output_____**Solution.** Your solution will be effective if it also includes named constants such as `None`._____no_output_____ <code> import math_____no_output_____def square_root_fixed(x): # type: ignore assert x >= 0 # precondition approx = 0 # <-- FIX: Change `None` to 0 guess = x / 2 while approx != guess: approx = guess guess = (approx + x / approx) / 2 assert math.isclose(approx * approx, x) return approx_____no_output_____square_root_fixed(0)_____no_output_____ </code> ### Exercise 4: Evolving Variable Names Following the steps of `ConditionMutator`, implement an `IdentifierMutator` class that replaces one identifier by another one found in the source (say, `y` by `x`). Does it help fix the `middle()` error?_____no_output_____### Exercise 5: Parallel Repair Automatic repair is a technique that is embarrassingly parallel – all tests for one candidate can be run in parallel, and all tests for _all_ candidates can also be run in parallel. Set up an infrastructure for running concurrent tests using Python's [asyncio](https://docs.python.org/3/library/asyncio.html) library._____no_output_____
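One possible starting point for Exercise 5: this is only a sketch, assuming that a test function such as `middle_test()` and the `MIDDLE_PASSING_TESTCASES` / `MIDDLE_FAILING_TESTCASES` lists from this chapter are importable, and that a test signals failure by raising an exception.

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor

def passes(test_fn, args):
    """Run one test; True means it passed (i.e. raised no exception)."""
    try:
        test_fn(*args)
        return True
    except Exception:
        return False

async def run_tests_concurrently(test_fn, testcases):
    """Fan the test cases out over a process pool and gather pass/fail outcomes."""
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor() as pool:
        futures = [loop.run_in_executor(pool, passes, test_fn, args) for args in testcases]
        return await asyncio.gather(*futures)

# Hypothetical usage in a plain script (not inside an already running event loop):
# outcomes = asyncio.run(run_tests_concurrently(middle_test,
#                        MIDDLE_PASSING_TESTCASES + MIDDLE_FAILING_TESTCASES))
```

Note that `asyncio` by itself only interleaves the coroutines; the actual parallelism in this sketch comes from the process pool, which is one reasonable way to combine the two for CPU-bound test runs._____no_output_____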
{ "repository": "smythi93/debuggingbook", "path": "notebooks/Repairer.ipynb", "matched_keywords": [ "evolution" ], "stars": null, "size": 140300, "hexsha": "481eeb280275332b2492ae15dbc4e3e738359a7c", "max_line_length": 556, "avg_line_length": 31.1155466844, "alphanum_fraction": 0.5553314326 }
# Notebook from stanzheng/advent-of-code Path: 2015/Day1.ipynb <code> """ --- Day 1: Not Quite Lisp --- Santa was hoping for a white Christmas, but his weather machine's "snow" function is powered by stars, and he's fresh out! To save Christmas, he needs you to collect fifty stars by December 25th. Collect stars by helping Santa solve puzzles. Two puzzles will be made available on each day in the advent calendar; the second puzzle is unlocked when you complete the first. Each puzzle grants one star. Good luck! Here's an easy puzzle to warm you up. Santa is trying to deliver presents in a large apartment building, but he can't find the right floor - the directions he got are a little confusing. He starts on the ground floor (floor 0) and then follows the instructions one character at a time. An opening parenthesis, (, means he should go up one floor, and a closing parenthesis, ), means he should go down one floor. The apartment building is very tall, and the basement is very deep; he will never find the top or bottom floors. For example: (()) and ()() both result in floor 0. ((( and (()(()( both result in floor 3. ))((((( also results in floor 3. ()) and ))( both result in floor -1 (the first basement level). ))) and )())()) both result in floor -3. To what floor do the instructions take Santa? """ q1 = "(((())))()((((((((())()(()))(()((((()(()(((()((()((()(()()()()()))(((()(()((((((((((())(()()((())()(((())))()(()(()((()(()))(()()()()((()((()(((()()(((((((()()())()((((()()(((((()(())()(())((())()()))()(((((((())(()())(()(((())(()))((())))(()((()())))()())((((())))(()(((((()(())(((()()((()((()((((((((((())(()())))))()))())()()((((()()()()()()((((((())())(((()())()((()()(((()()()))(((((()))(((()(()()()(()(()(((())()))(()(((()((())()(()())())))((()()()(()()(((()))(((()((((()(((((()()(()())((()())())(()((((((()(()()))((((()))))())((())()()((()(()))))((((((((()))(()()(((())())(())()((()()()()((()((()((()()(((())))(()((())()((((((((()((()(()()(((())())())))(())())))()((((()))))))())))()()))()())((()())()((()()()))(()()(((()(())((((())())((((((((()()()()())))()()()((((()()))))))()((((()(((()))(()()())))((()()(((()))()()())())(((())((()()(())()()()(((())))))()())((()))()))((())()()())()())()()(()))())))())()))(())((()(())))(()(())(()))))(()(())())(()(())(()(()))))((()())()))()((((()()))))())))()()())((())()((()()()))()(((()(()))))(())()()))(((()())))))))))(((())))()))())()))))()()(((())))))))()(()()(()))((()))))((())))((()((())))())))()()(()))())()(()((()())(()(()()())())(()()))()))))(()())()()))()()()()))(()(()(()))))))()(()))()))()()(()((())(()(())))()(((())(())())))))()(()(()))))()))(()()()(())()(()(())))()))))()()(((((())))))())()())())())()())()))))()))))))))())()()()()()()())))()))((())()))())))()((())()))))()))())))))))())()()()))()()(()((((()(((((((()(())((()())((()()))()))))(())))()()()(())((())()())))(())))(())))(((()()))()(())(((()(()))((())))())()))((((()))())()))))))))()(())())))(()))()(()()))())()()(())())))())()()(()())))()((()())(()(())(())))))))))))))(()))))()))))))()()())(()(((((()(()())))())()))(()))()))(()()))()())(()))())()(())((()()))))))())))())()(((())))(()(()))()()))()(()))))))((()())(()))))))()())))()()))))))))((((((((()()()(()))))))()())))())))()()((())()))((())(())))())())))()()()((()((()(())))())()(())))))))))()())))()()()()()()))()))((())())(()(()))))))(()()))()))(())))()))))))))))))(()))))))))()))))()))()())()))()()))))))()))))((()))))(()))())()(())))(()())((((()())))()))))(()))()(()()(())))))())))))()))))))())))())))))())))())())))())(()))))(())()(())))())()))((()())
)))))())))((())))))))())))(())))))()()())))))())))))()))))))()))()()()(()(((()())())())(()))())))))((()(())(()))))))))(())))()()()())())(()))))()()()))()))())())())()(())))()(((()((((())))))))()))))))))))))))))))))((())()())(()))))()()))))))(()()(())())))())))((())))((())))))))))))))()))))()(()))))))())))))()))(()()())(()())))))))))()))))))(())))))()()))()())(((())))()))(()))))))))(())())))())))())())())()()))((())()(())()())()))()())(())(()))))()())))(()(((()))))))()(()())()()()))()))))))))()()()(())()())()(((((()))()())())(()))))()()()(())))())))()((()())))(()))())()(()())())(()))()()))((()()))((()()()()())))(())()))(()(())))((()()))))))))())))))))())()()))))))))))))))))(())()(())(())()())())()))()(()))))())())))))()())()(()))()()(())))(())())))))(()))))))))))))))())())(())(())))(((()))()))))())((())(()))())))))))())))))())))()))()))))))))))))())()))))()))))((()))(())))()(())))(())()))()))())))())))))))()(()())())))()()())))(())))))(()))))))))))))(()))()))()))())))(((()()()(())((()())))()())(((()))(())()))((()()()())))())(())(()))))()(((((())))(()))())())))))))((((()()()))())())()(()(()())))))))))()())())))(())))()())(((()(())())()()))())())))))))((()())((()()(()))(()(())))()))()))(()))(()))()()(()(((())((((()))()(()))((())()(()(()())()(()))()())))))(()))()))())()())))())))(())))((())(()())))))()))(())(()))()())()(()()((()(()))))))()(())(()())(())()))(((())()))(()()(()()()))))(()(())))()))))())))))())(()()()()()()(((())))(()()))()((())(((((()()())))(()))(()))()()))(((())())()(((()()()()))))(()))(())())))()())(()()())())))))))()))))((())))()())(()))(()(()))())))))())(())))))()()())())()))()()(())))(()))(())((((((())(()))(()))())()))(()()(())))()))(()()))()))()(())))(())))((()(()))(())()()())())))(((()()())(())()))))))()(((()(((((()()(((())(())))())()((()))))((()())()(())(((())))(((()((()(()(()))(()()))())(()))(())(())))()))))))((((()))()((((()(()))()))()()))))()(()(()))()(()((()(((()(()()(((()))))()(((()(()(()(((()(()())())()()(()(()())())(()((((())(()))()))(((((()()())(())()((()()())))()()(((()()))()((((((((()(())))())((()))))(())))(()))))((()((((()()(())(((((()))(((((((((((((()())))((((()(((()((())())()))((()))()(()()((()()()()(()()(()(()(((())()(()((((((()((()()((())()((((()((()()(()()())((()()()((()((())()(()(((()((())((((())(()))((()(()))(()())()((((((((()(((((((((((()))(()(((()(()()()((((())((())()())()))(())((())(()))(((()((()(())))(()))))((()()))))((((()(()(()())(()(())((((((((()((((()((()(((((()))())()(()))(()()((()(())(((((()(())()(((((()()))))))()(((())()(()()((((())()((())((()(((())(((()))((()()((((()(())))))((()((((()((()((()(((())((()))(((((((()(((()((((((((())()))((((())(((((()((((((((()(((()((()(((()()(((()((((((()()(()((((((((()()(()(()(())((((()())()))))(((()))((((())((((()())((()(())()((()((((((()((((((()(())))()())(((())())())()(())()(()())((()()((((())((((((())(()(((((()((((())()((((()(()(())(()())(((())()((())((((()))()((((((())(()(((()(((()((((((()(((()))(()()())())((()((()())()((((())(((()(()(((((((((())(())))()((()()()()(())((()))(((((((()(((((((((()(()))))(()((((((((()((((()((()()((((((()()(((((((()(()(())()(())((()()()((()(((((()())()(((((()())()()((()(()())(()()()(((()()(((((()((((((()()((()(()()()((((((((((((()((((((((()()(((()())))()(((()()(())())((((()((((()((((()()()(())(())((()(()(((((((((((((((()(())(())))))()()))((()(((()(())((()(((()(()()((((()()(((()(((()(((((()()((()(()(((()))((((((()((((((((()((()((())(((((()(((())(())())((()()))((((())()()((()(((()(((((()()(((()))(((()(()(((((((((((((()))((((((((()(((()))))())((((((((((((())((())((()())(((())((())(()
((((((((((()(((())((()()(()((())(((((((((((()))((((((((((((()(()())((()((()((()(()(((()((((((((()()(()((()(()(((()))((()))(((((((((((((()(())((((((())(((()(())(()(()(()((()()))((((()((((()((((())))())((((()((((()))((((((()((((((()((()(((())))((())(()))(()((()((((()((()(((()()))((((()()()(((((((())(((())(()))())((((()())(((()(((((((((((()(()(()((()(((((((((((((((()()((((()((((((((()(((()()((()((((()))(((()(())((((((()((((())()((((()((()))(())()(()(((()((())())((((((()(()(())())(((())(()(()())(((((()((()((())()())(())))(((()(())))))))(((()(((()))()((()(((()()((()())()()))())))(((()))(()(((()(((((((((()(()(((((()()(((()())()()))))()(((()))(((()(()(()(()(()))()(())()))(()(((())))(()))))))))))(())((()((())((()(())()(())((()()((((()()((()()))((())(((()((()(())(())))()(()(((((()((()))())()(((((()()(((()(()((((((())(()))(())()))((()(()()))(())())()))(((())))(()((()(((())(())())))((()()((((((((((((((()((()(()()(()(((()))())()()((()()()(())(()))(()())(((())((())()(())()()(()()(())))((()(((()))))(((()()(()()))())((()((())()))((((()()()())((())))(((()(())(((((()(((((()((()(()((((()()(((()()()(((()())(((()()((((())(()))(((()))(())())((()))(((()((()))(((()()((())((()(((((()((((()()())((()))()((((()((()(()()()(" q2 = "(((())))()((((((((())()(()))(()((((()(()(((()((()((()(()()()()()))(((()(()((((((((((())(()()((())()(((())))()(()(()((()(()))(()()()()((()((()(((()()(((((((()()())()((((()()(((((()(())()(())((())()()))()(((((((())(()())(()(((())(()))((())))(()((()())))()())((((())))(()(((((()(())(((()()((()((()((((((((((())(()())))))()))())()()((((()()()()()()((((((())())(((()())()((()()(((()()()))(((((()))(((()(()()()(()(()(((())()))(()(((()((())()(()())())))((()()()(()()(((()))(((()((((()(((((()()(()())((()())())(()((((((()(()()))((((()))))())((())()()((()(()))))((((((((()))(()()(((())())(())()((()()()()((()((()((()()(((())))(()((())()((((((((()((()(()()(((())())())))(())())))()((((()))))))())))()()))()())((()())()((()()()))(()()(((()(())((((())())((((((((()()()()())))()()()((((()()))))))()((((()(((()))(()()())))((()()(((()))()()())())(((())((()()(())()()()(((())))))()())((()))()))((())()()())()())()()(()))())))())()))(())((()(())))(()(())(()))))(()(())())(()(())(()(()))))((()())()))()((((()()))))())))()()())((())()((()()()))()(((()(()))))(())()()))(((()())))))))))(((())))()))())()))))()()(((())))))))()(()()(()))((()))))((())))((()((())))())))()()(()))())()(()((()())(()(()()())())(()()))()))))(()())()()))()()()()))(()(()(()))))))()(()))()))()()(()((())(()(())))()(((())(())())))))()(()(()))))()))(()()()(())()(()(())))()))))()()(((((())))))())()())())())()())()))))()))))))))())()()()()()()())))()))((())()))())))()((())()))))()))())))))))())()()()))()()(()((((()(((((((()(())((()())((()()))()))))(())))()()()(())((())()())))(())))(())))(((()()))()(())(((()(()))((())))())()))((((()))())()))))))))()(())())))(()))()(()()))())()()(())())))())()()(()())))()((()())(()(())(())))))))))))))(()))))()))))))()()())(()(((((()(()())))())()))(()))()))(()()))()())(()))())()(())((()()))))))())))())()(((())))(()(()))()()))()(()))))))((()())(()))))))()())))()()))))))))((((((((()()()(()))))))()())))())))()()((())()))((())(())))())())))()()()((()((()(())))())()(())))))))))()())))()()()()()()))()))((())())(()(()))))))(()()))()))(())))()))))))))))))(()))))))))()))))()))()())()))()()))))))()))))((()))))(()))())()(())))(()())((((()())))()))))(()))()(()()(())))))())))))()))))))())))())))))())))())())))())(()))))(())()(())))())()))((()()))))))())))((())))))))())))(())))))()()())))))())))))()))))))()))()()()(()(((()())())())(()))())))))(((
)(())(()))))))))(())))()()()())())(()))))()()()))()))())())())()(())))()(((()((((())))))))()))))))))))))))))))))((())()())(()))))()()))))))(()()(())())))())))((())))((())))))))))))))()))))()(()))))))())))))()))(()()())(()())))))))))()))))))(())))))()()))()())(((())))()))(()))))))))(())())))())))())())())()()))((())()(())()())()))()())(())(()))))()())))(()(((()))))))()(()())()()()))()))))))))()()()(())()())()(((((()))()())())(()))))()()()(())))())))()((()())))(()))())()(()())())(()))()()))((()()))((()()()()())))(())()))(()(())))((()()))))))))())))))))())()()))))))))))))))))(())()(())(())()())())()))()(()))))())())))))()())()(()))()()(())))(())())))))(()))))))))))))))())())(())(())))(((()))()))))())((())(()))())))))))())))))())))()))()))))))))))))())()))))()))))((()))(())))()(())))(())()))()))())))())))))))()(()())())))()()())))(())))))(()))))))))))))(()))()))()))())))(((()()()(())((()())))()())(((()))(())()))((()()()())))())(())(()))))()(((((())))(()))())())))))))((((()()()))())())()(()(()())))))))))()())())))(())))()())(((()(())())()()))())())))))))((()())((()()(()))(()(())))()))()))(()))(()))()()(()(((())((((()))()(()))((())()(()(()())()(()))()())))))(()))()))())()())))())))(())))((())(()())))))()))(())(()))()())()(()()((()(()))))))()(())(()())(())()))(((())()))(()()(()()()))))(()(())))()))))())))))())(()()()()()()(((())))(()()))()((())(((((()()())))(()))(()))()()))(((())())()(((()()()()))))(()))(())())))()())(()()())())))))))()))))((())))()())(()))(()(()))())))))())(())))))()()())())()))()()(())))(()))(())((((((())(()))(()))())()))(()()(())))()))(()()))()))()(())))(())))((()(()))(())()()())())))(((()()())(())()))))))()(((()(((((()()(((())(())))())()((()))))((()())()(())(((())))(((()((()(()(()))(()()))())(()))(())(())))()))))))((((()))()((((()(()))()))()()))))()(()(()))()(()((()(((()(()()(((()))))()(((()(()(()(((()(()())())()()(()(()())())(()((((())(()))()))(((((()()())(())()((()()())))()()(((()()))()((((((((()(())))())((()))))(())))(()))))((()((((()()(())(((((()))(((((((((((((()())))((((()(((()((())())()))((()))()(()()((()()()()(()()(()(()(((())()(()((((((()((()()((())()((((()((()()(()()())((()()()((()((())()(()(((()((())((((())(()))((()(()))(()())()((((((((()(((((((((((()))(()(((()(()()()((((())((())()())()))(())((())(()))(((()((()(())))(()))))((()()))))((((()(()(()())(()(())((((((((()((((()((()(((((()))())()(()))(()()((()(())(((((()(())()(((((()()))))))()(((())()(()()((((())()((())((()(((())(((()))((()()((((()(())))))((()((((()((()((()(((())((()))(((((((()(((()((((((((())()))((((())(((((()((((((((()(((()((()(((()()(((()((((((()()(()((((((((()()(()(()(())((((()())()))))(((()))((((())((((()())((()(())()((()((((((()((((((()(())))()())(((())())())()(())()(()())((()()((((())((((((())(()(((((()((((())()((((()(()(())(()())(((())()((())((((()))()((((((())(()(((()(((()((((((()(((()))(()()())())((()((()())()((((())(((()(()(((((((((())(())))()((()()()()(())((()))(((((((()(((((((((()(()))))(()((((((((()((((()((()()((((((()()(((((((()(()(())()(())((()()()((()(((((()())()(((((()())()()((()(()())(()()()(((()()(((((()((((((()()((()(()()()((((((((((((()((((((((()()(((()())))()(((()()(())())((((()((((()((((()()()(())(())((()(()(((((((((((((((()(())(())))))()()))((()(((()(())((()(((()(()()((((()()(((()(((()(((((()()((()(()(((()))((((((()((((((((()((()((())(((((()(((())(())())((()()))((((())()()((()(((()(((((()()(((()))(((()(()(((((((((((((()))((((((((()(((()))))())((((((((((((())((())((()())(((())((())(()((((((((((()(((())((()()(()((())(((((((((((()))((((((((((((()(()())((()((()((()(()(((()((((((((()()(()
((()(()(((()))((()))(((((((((((((()(())((((((())(((()(())(()(()(()((()()))((((()((((()((((())))())((((()((((()))((((((()((((((()((()(((())))((())(()))(()((()((((()((()(((()()))((((()()()(((((((())(((())(()))())((((()())(((()(((((((((((()(()(()((()(((((((((((((((()()((((()((((((((()(((()()((()((((()))(((()(())((((((()((((())()((((()((()))(())()(()(((()((())())((((((()(()(())())(((())(()(()())(((((()((()((())()())(())))(((()(())))))))(((()(((()))()((()(((()()((()())()()))())))(((()))(()(((()(((((((((()(()(((((()()(((()())()()))))()(((()))(((()(()(()(()(()))()(())()))(()(((())))(()))))))))))(())((()((())((()(())()(())((()()((((()()((()()))((())(((()((()(())(())))()(()(((((()((()))())()(((((()()(((()(()((((((())(()))(())()))((()(()()))(())())()))(((())))(()((()(((())(())())))((()()((((((((((((((()((()(()()(()(((()))())()()((()()()(())(()))(()())(((())((())()(())()()(()()(())))((()(((()))))(((()()(()()))())((()((())()))((((()()()())((())))(((()(())(((((()(((((()((()(()((((()()(((()()()(((()())(((()()((((())(()))(((()))(())())((()))(((()((()))(((()()((())((()(((((()((((()()())((()))()((((()((()(()()()("_____no_output_____from datetime import datetime, timedelta start = datetime(year=2017, month=8, day = 28) start + timedelta(days=91)_____no_output_____UP = "(" DOWN = ")" from itertools import groupby def find_floor(f): return len([i for i in f if i == UP]) - len([i for i in f if i == DOWN]) find_floor("(())") find_floor(q1) _____no_output_____assert(find_floor("(())") == 0) assert(find_floor("()()") == 0) assert(find_floor(")())())") == -3)_____no_output_____def find_floor2(f): c = 0 for k, i in enumerate(f): if i == UP: c = c + 1 else: c = c - 1 if c < 0: return k + 1 else: return 0 find_floor2(")())())") find_floor2("()())") find_floor2(q2)_____no_output_____for k, i in enumerate([1,2,3]): print(i, k)1 0 2 1 3 2 </code>
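A small side note on the code above: because the instructions consist only of parentheses, the Part 1 floor can also be computed directly with `str.count()`. This is a minimal sketch, equivalent to the list-comprehension version of `find_floor()` but without building the intermediate lists:

```python
def find_floor_counts(instructions: str) -> int:
    """Final floor: each '(' goes up one floor, each ')' goes down one."""
    return instructions.count("(") - instructions.count(")")

# Same checks as the notebook's assertions
assert find_floor_counts("(())") == 0
assert find_floor_counts("()()") == 0
assert find_floor_counts(")())())") == -3
```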
{ "repository": "stanzheng/advent-of-code", "path": "2015/Day1.ipynb", "matched_keywords": [ "STAR" ], "stars": null, "size": 18567, "hexsha": "48208f8150e676e2b7b5922cadbc74b55e1c3d8d", "max_line_length": 7018, "avg_line_length": 98.2380952381, "alphanum_fraction": 0.1222599235 }
# Notebook from gideonite/data-driven-pdes Path: tutorial/Tutorial.ipynb <code> import os import sys from matplotlib import pyplot as plt import numpy as np from datadrivenpdes.core import equations from datadrivenpdes.core import grids import datadrivenpdes as pde import tensorflow as tf # tf.enable_eager_execution() import xarray as xr_____no_output_____ </code> # First example: Advection diffusion_____no_output_____In this example we'll see how to integrate in time a pre-defined equation. Here we deal with the Advection-Diffusion equation, which describes the time evolution of the concentration $c(x,y,t)$ when it is advected by the velocity field $\vec v(x,y)=(v_x(x,y), v_y(x,y))$ and also undergoes diffusion. The equation reads $$\frac{\partial c}{\partial t}+\vec{v}\cdot\vec{\nabla}c= D \nabla^2 c$$ where $D$ is the diffusion coefficient. The equation is implemented in various forms in the folder `advection/equations`. Here we choose the Finite Volume formulation._____no_output_____ <code> equation = pde.advection.equations.FiniteVolumeAdvectionDiffusion(diffusion_coefficient=0.1) grid = grids.Grid.from_period(size=256, length=2*np.pi)_____no_output_____ </code> Note that we also chose a grid to solve the equation on. The $x$ and $y$ coordinates can be obtained by_____no_output_____ <code> x, y = grid.get_mesh()_____no_output_____ </code> To integrate in time we need an initial state. Equations instances have a `random_state` method that generates a state. The distribution of these initial conditions, when sampled from different seeds, will define the training set for later. Let's sample one random initial state and plot it:_____no_output_____ <code> initial_state = equation.random_state(grid, seed=7109179) fig, axs = plt.subplots(1,2, figsize=(8,4)) axs[0].pcolor(grid.get_mesh()[1], grid.get_mesh()[0], initial_state['concentration']) axs[0].set_title('initial concentration') axs[1].streamplot(grid.get_mesh()[1], grid.get_mesh()[0], initial_state['y_velocity'],initial_state['x_velocity'], density=2) axs[1].set_title('velocity field');ERROR:tensorflow:================================== Object was never used (type <class 'tensorflow.python.ops.tensor_array_ops.TensorArray'>): <tensorflow.python.ops.tensor_array_ops.TensorArray object at 0x7f7ed8925940> If you want to mark it as used call its "mark_used()" method. It was originally created here: File "/Users/gideon/miniconda3/envs/ddpde/lib/python3.9/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2820, in while_loop return result File "/Users/gideon/miniconda3/envs/ddpde/lib/python3.9/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2768, in <lambda> body = lambda i, lv: (i + 1, orig_body(*lv)) File "/Users/gideon/miniconda3/envs/ddpde/lib/python3.9/site-packages/tensorflow/python/ops/functional_ops.py", line 656, in compute return (next_i, flat_a_out, tas) File "/Users/gideon/miniconda3/envs/ddpde/lib/python3.9/site-packages/tensorflow/python/ops/functional_ops.py", line 651, in <listcomp> tas = [ta.write(i, value) for (ta, value) in zip(tas, flat_a_out)] File "/Users/gideon/miniconda3/envs/ddpde/lib/python3.9/site-packages/tensorflow/python/util/tf_should_use.py", line 247, in wrapped return _add_should_use_warning(fn(*args, **kwargs), ================================== </code> The state of an equation is a `dict` object that contains all relevant fields needed for integrating in time. 
For advection diffusion these are `concentration`, `x_velocity`, and `y_velocity`:_____no_output_____ <code> print(initial_state.keys())dict_keys(['concentration', 'x_velocity', 'y_velocity']) </code> To perform the actual integration we need to choose a method with which to estimate the spatial derivatives of the concentration $c$. The object which estimates the derivatives is called a `model` and there are various models defined in `models.py`. Here we will use a finite difference estimation. Lastly, we need to choose a timestep, which we can ask the equation instance to supply._____no_output_____ <code> time_step = equation.get_time_step(grid) times = time_step*np.arange(400) results = pde.core.integrate.integrate_times( model=pde.core.models.FiniteDifferenceModel(equation,grid), state=initial_state, times=times, axis=0)_____no_output_____ </code> The result is a `dict` object. The `concentration` member of the dict is a tensor whose first axis corresponds to the times at which the solution was evaluated. Here we save the result as an `xarray.DataArray`, which makes it easy to plot._____no_output_____ <code> conc=xr.DataArray(results['concentration'].numpy(), dims=['time', 'x','y'], coords={'time':times, 'x': x[:,0], 'y': y[0]} ) conc[::99].plot(col='time', robust=True, aspect=1)_____no_output_____ </code> # Defining a new equation_____no_output_____In this section we learn how to define a new equation. We will look at coupled reaction diffusion equations, aka the **Turing Equation**. They describe the evolution of two fields, $A$ and $B$, according to: $$\begin{align} \frac{\partial A}{\partial t} &= D_A\nabla^2 A + R_A(A,B)+S\\ \frac{\partial B}{\partial t} &= D_B\nabla^2 B + R_B(A,B) \end{align}$$ $D_{A,B}$ are the diffusion constants of $A$ and $B$, $R_{A,B}$ are nonlinear reaction terms and $S$ is some constant source term. For example, we'll take $$\begin{align} R_A&=A(1-A^2)-B+\alpha & R_B&=\beta(A-B) \end{align}$$ where $\alpha$ and $\beta$ are model parameters (these expressions match the `reaction_A` and `reaction_B` methods implemented below). For simplicity, we'll implement the equation in one spatial dimension. _____no_output_____## Equation Keys Because the computational framework is [fully differentiable](https://en.wikipedia.org/wiki/Differentiable_programming), defining an equation requires specifying in advance which quantities are used in calculating the time derivatives. These are called **keys** and are stored in the `equation` attribute `key_definitions`. In our case, to calculate the time evolution we need $A, B, \partial_{xx}A, \partial_{xx}B$ and $S$. The auxiliary function `states.StateDefinition` defines these keys. Its input arguments are: * `name` - The base name of the field. For example, the field $\partial_{xx} A$ is derived from the base field `A`. * `tensor_indices` - In 2D and above, specify whether a field is a component of a tensor (like $v_x$ and $v_y$ in the advection example). * `derivative_orders` - Specifies whether a key is a spatial derivative of a different key. * `offset` - Specifies whether a field is evaluated off the center point of a grid (useful for staggered grids, e.g.
finite volume schemes) For example, in our case the `key_definitions` for $A$ and $\partial_{xx}A$ are_____no_output_____```python key_definitions = { 'A': states.StateDefinition(name='A', tensor_indices=(), # Not used in one dimenional equations derivative_orders=(0,0,0), # A is not a derivative of anything else offset=(0,0)), # A is evaluated on the centerpoints of the grid 'A_xx': states.StateDefinition(name='A', # A_xx is is derived from A tensor_indices=(), derivative_orders=(2, 0, 0), # Two derivatives on the x axis offset=(0, 0)), } ``` There are two types of keys: those that evolve in time, in our case $A$ and $B$, and constant ones, in our case $S$ (and in the Advection Diffusion example - the velocity field $v$). When defining the equation we need to set the attributes `evolving_keys` and `constant_keys`, which are both python `set`s. The different keys of an `Equation` instance can be inspected with ```python equation.all_keys # in our case: {'A', 'A_xx', 'B', 'B_xx', 'Source'} equation.base_keys # in our case: {'A', 'B', 'Source'} equation.evolving_keys # in our case: {'A', 'B'} equation.constant_keys # in our case: {'Source'} ``` _____no_output_____## Defining the equation_____no_output_____Here is a full definition of the equation:_____no_output_____ <code> from datadrivenpdes.core import equations from datadrivenpdes.core import grids from datadrivenpdes.core import polynomials from datadrivenpdes.core import states import scipy as sp def smooth_random_field(N, amp=0.1, np_random_state=None): """ generates a random field of shape (N,1) and smoothes it a bit """ if np_random_state is None: np_random_state = np.random.RandomState() noise=np_random_state.randn(N) kernel=np.exp(-np.linspace(-6,6,N)**2) return amp*sp.ndimage.convolve(noise, kernel, mode='wrap')[:,np.newaxis] class TuringEquation(equations.Equation): DISCRETIZATION_NAME = 'finite_difference' METHOD = polynomials.Method.FINITE_DIFFERENCE MONOTONIC = False CONTINUOUS_EQUATION_NAME = 'Turing' key_definitions = { 'A': states.StateDefinition(name='A', tensor_indices=(), derivative_orders=(0,0,0), offset=(0,0)), 'A_xx': states.StateDefinition(name='A', tensor_indices=(), derivative_orders=(2, 0, 0), offset=(0, 0)), 'B': states.StateDefinition(name='B', tensor_indices=(), derivative_orders=(0, 0, 0), offset=(0, 0)), 'B_xx': states.StateDefinition(name='B', tensor_indices=(), derivative_orders=(2, 0, 0), offset=(0, 0)), 'Source' : states.StateDefinition(name='Source', tensor_indices=(), derivative_orders=(0, 0, 0), offset=(0, 0)), } evolving_keys = {'A', 'B'} constant_keys = {'Source'} def __init__(self, alpha, beta, D_A, D_B, timestep=1e-4): self.alpha = alpha self.beta = beta self.D_A = D_A self.D_B = D_B self._timestep = timestep super().__init__() def time_derivative( self, grid, A, A_xx, B, B_xx, Source): """See base class.""" rA = self.reaction_A(A, B) rB = self.reaction_B(A, B) diff_A = self.D_A * A_xx diff_B = self.D_B * B_xx return {'A': rA + diff_A + Source, 'B': rB + diff_B,} def reaction_A(self, A, B): return A - (A ** 3) - B + self.alpha def reaction_B(self, A, B): return (A - B) * self.beta def get_time_step(self, grid): return self._timestep def random_state(self, grid, seed=None, dtype=tf.float32): if seed is None: R = np.random.RandomState() else: R = np.random.RandomState(seed=seed) state = { 'A': smooth_random_field(N=grid.size_x, np_random_state=R), 'B': smooth_random_field(N=grid.size_x, np_random_state=R), 'Source': smooth_random_field(N=grid.size_x, np_random_state=R), } state = {k: tf.cast(v, 
dtype) for k, v in state.items()} return state_____no_output_____ </code> Now we can generate a random state and evolve it in time:_____no_output_____ <code> eq = TuringEquation(alpha=-0.0001, beta=10, D_A=1, D_B=30) NX=100 NY=1 # 1D can be obtained by having a y dimension of size 1 LX=200 grid = grids.Grid(NX, NY, step=LX/NX) x, y=grid.get_mesh() initial_state = eq.random_state(grid=grid, seed=12345) times = eq._timestep*np.arange(0, 1000, 20) model = pde.core.models.FiniteDifferenceModel(eq,grid) res = pde.core.integrate.integrate_times( model=model, state=initial_state, times=times, axis=0)_____no_output_____fig, axs=plt.subplots(1,2, figsize=(10,5), sharey=True) for ax, k in zip(axs, ['A','B']): ax.pcolormesh(x.flat, times, res[k].numpy()[...,0], cmap='RdBu') ax.set_title(k) ax.set_xlabel('x') axs[0].set_ylabel('time') fig.tight_layout()_____no_output_____ </code>
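As a quick sanity check, the key sets described in the *Equation Keys* section can be inspected on the `eq` instance we just created. This is a minimal sketch using the attributes listed earlier (`base_keys`, `evolving_keys`, `constant_keys`, `all_keys`):

```python
print(eq.base_keys)      # expected: {'A', 'B', 'Source'}
print(eq.evolving_keys)  # expected: {'A', 'B'}
print(eq.constant_keys)  # expected: {'Source'}
print(eq.all_keys)       # expected: {'A', 'A_xx', 'B', 'B_xx', 'Source'}
```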
{ "repository": "gideonite/data-driven-pdes", "path": "tutorial/Tutorial.ipynb", "matched_keywords": [ "evolution" ], "stars": null, "size": 253860, "hexsha": "48211e8199f02b525170cf2351d09a38ce61bebe", "max_line_length": 153796, "avg_line_length": 373.8733431517, "alphanum_fraction": 0.9291774994 }
# Notebook from bruno-janota/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling Path: module3-make-explanatory-visualizations/LS_DS_123_Make_Explanatory_Visualizations.ipynb <a href="https://colab.research.google.com/github/bruno-janota/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling/blob/master/module3-make-explanatory-visualizations/LS_DS_123_Make_Explanatory_Visualizations.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>_____no_output______Lambda School Data Science_ # Make Explanatory Visualizations ### Objectives - identify misleading visualizations and how to fix them - use Seaborn to visualize distributions and relationships with continuous and discrete variables - add emphasis and annotations to transform visualizations from exploratory to explanatory - remove clutter from visualizations ### Links - [How to Spot Visualization Lies](https://flowingdata.com/2017/02/09/how-to-spot-visualization-lies/) - [Visual Vocabulary - Vega Edition](http://ft.com/vocabulary) - [Choosing a Python Visualization Tool flowchart](http://pbpython.com/python-vis-flowchart.html) - [Seaborn example gallery](http://seaborn.pydata.org/examples/index.html) & [tutorial](http://seaborn.pydata.org/tutorial.html) - [Strong Titles Are The Biggest Bang for Your Buck](http://stephanieevergreen.com/strong-titles/) - [Remove to improve (the data-ink ratio)](https://www.darkhorseanalytics.com/blog/data-looks-better-naked) - [How to Generate FiveThirtyEight Graphs in Python](https://www.dataquest.io/blog/making-538-plots/)_____no_output_____# Avoid Misleading Visualizations Did you find/discuss any interesting misleading visualizations in your Walkie Talkie?_____no_output_____## What makes a visualization misleading? [5 Ways Writers Use Misleading Graphs To Manipulate You](https://venngage.com/blog/misleading-graphs/)_____no_output_____## Two y-axes <img src="https://kieranhealy.org/files/misc/two-y-by-four-sm.jpg" width="800"> Other Examples: - [Spurious Correlations](https://tylervigen.com/spurious-correlations) - <https://blog.datawrapper.de/dualaxis/> - <https://kieranhealy.org/blog/archives/2016/01/16/two-y-axes/> - <http://www.storytellingwithdata.com/blog/2016/2/1/be-gone-dual-y-axis>_____no_output_____## Y-axis doesn't start at zero. <img src="https://i.pinimg.com/originals/22/53/a9/2253a944f54bb61f1983bc076ff33cdd.jpg" width="600">_____no_output_____## Pie Charts are bad <img src="https://i1.wp.com/flowingdata.com/wp-content/uploads/2009/11/Fox-News-pie-chart.png?fit=620%2C465&ssl=1" width="600">_____no_output_____## Pie charts that omit data are extra bad - A guy makes a misleading chart that goes viral What does this chart imply at first glance? You don't want your user to have to do a lot of work in order to be able to interpret your graph correctly. You want the first-glance conclusions to be the correct ones. <img src="https://pbs.twimg.com/media/DiaiTLHWsAYAEEX?format=jpg&name=medium" width='600'> <https://twitter.com/michaelbatnick/status/1019680856837849090?lang=en> - It gets picked up by overworked journalists (assuming incompetency before malice) <https://www.marketwatch.com/story/this-1-chart-puts-mega-techs-trillions-of-market-value-into-eye-popping-perspective-2018-07-18> - Even after the chart's implications have been refuted, it's hard to stop a bad (although compelling) visualization from being passed around. 
<https://www.linkedin.com/pulse/good-bad-pie-charts-karthik-shashidhar/> **["yea I understand a pie chart was probably not the best choice to present this data."](https://twitter.com/michaelbatnick/status/1037036440494985216)**_____no_output_____## Pie Charts that compare unrelated things are next-level extra bad <img src="http://www.painting-with-numbers.com/download/document/186/170403+Legalizing+Marijuana+Graph.jpg" width="600"> _____no_output_____## Be careful about how you use volume to represent quantities: radius vs diameter vs volume <img src="https://static1.squarespace.com/static/5bfc8dbab40b9d7dd9054f41/t/5c32d86e0ebbe80a25873249/1546836082961/5474039-25383714-thumbnail.jpg?format=1500w" width="600">_____no_output_____## Don't cherrypick timelines or specific subsets of your data: <img src="https://wattsupwiththat.com/wp-content/uploads/2019/02/Figure-1-1.png" width="600"> Look how specifically the writer has selected what years to show in the legend on the right side. <https://wattsupwiththat.com/2019/02/24/strong-arctic-sea-ice-growth-this-year/> Try the tool that was used to make the graphic for yourself <http://nsidc.org/arcticseaicenews/charctic-interactive-sea-ice-graph/> _____no_output_____## Use Relative units rather than Absolute Units <img src="https://imgs.xkcd.com/comics/heatmap_2x.png" width="600">_____no_output_____## Avoid 3D graphs unless having the extra dimension is effective Usually you can split 3D graphs into multiple 2D graphs. 3D graphs that are interactive can be very cool. (See Plotly and Bokeh) <img src="https://thumbor.forbes.com/thumbor/1280x868/https%3A%2F%2Fblogs-images.forbes.com%2Fthumbnails%2Fblog_1855%2Fpt_1855_811_o.jpg%3Ft%3D1339592470" width="600">_____no_output_____## Don't go against typical conventions <img src="http://www.callingbullshit.org/twittercards/tools_misleading_axes.png" width="600">_____no_output_____# Tips for choosing an appropriate visualization:_____no_output_____## Use Appropriate "Visual Vocabulary" [Visual Vocabulary - Vega Edition](http://ft.com/vocabulary)_____no_output_____## What are the properties of your data? - Is your primary variable of interest continuous or discrete? - Is it in wide or long (tidy) format? - Does your visualization involve multiple variables? - How many dimensions do you need to include on your plot? Can you express the main idea of your visualization in a single sentence? How hard does your visualization make the user work in order to draw the intended conclusion?_____no_output_____## Which Visualization tool is most appropriate? 
[Choosing a Python Visualization Tool flowchart](http://pbpython.com/python-vis-flowchart.html)_____no_output_____## Simple Web Scraper with IMDb_____no_output_____ <code> from requests import get url = 'https://www.imdb.com/title/tt6105098/ratings?ref_=tt_ov_rt' response = get(url) print(response.text[:500]) <!DOCTYPE html> <html xmlns:og="http://ogp.me/ns#" xmlns:fb="http://www.facebook.com/2008/fbml"> <head> <meta charset="utf-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="apple-itunes-app" content="app-id=342792525, app-argument=imdb:///title/tt6105098?src=mdot"> <script type="text/javascript">var IMDbTimer={starttime: new Date().getTime(),pt:'java'};</script> <script> if (typeof uet == 'function') { from bs4 import BeautifulSoup html_soup = BeautifulSoup(response.text, 'html.parser') type(html_soup)_____no_output_____vote_container = html_soup.find_all('div', class_ ='leftAligned') vote_container[1:11]_____no_output_____votes = [containers.text for containers in vote_container][1:11] votes_____no_output_____urls = ['https://www.imdb.com/title/tt6105098/ratings?ref_=tt_ov_rt', # Lion King (2019) 'https://www.imdb.com/title/tt0110357/ratings?ref_=tt_ov_rt', # Lion King (1994) 'https://www.imdb.com/title/tt6139732/ratings?ref_=tt_ov_rt', # Aladdin (2019) 'https://www.imdb.com/title/tt0103639/ratings?ref_=tt_ov_rt'] # Aladdin (1992) w/ Robin Williams votes_list = [] for url in urls: # Get raw HTML response response = get(url) # Convert to BS Object html_soup = BeautifulSoup(response.text, 'html.parser') # Find vote containers and extract star ratings vote_containers = html_soup.find_all('div', class_ = 'leftAligned') votes = [containers.text for containers in vote_containers][1:11] # Append to initial list votes_list.append(votes) print(votes_list)[['10,342', '6,524', '11,859', '12,549', '6,731', '3,098', '1,435', '988', '752', '2,540'], ['254,453', '219,091', '213,498', '100,708', '33,076', '13,241', '5,345', '3,010', '1,968', '4,949'], ['14,758', '11,724', '22,615', '20,195', '8,567', '3,317', '1,496', '910', '659', '2,382'], ['58,902', '61,215', '111,111', '61,554', '20,591', '7,056', '2,594', '1,197', '690', '1,248']] import pandas as pd movies = ['The Lion King (2019)', 'The Lion King (1994)', 'Aladdin (2019)', 'Aladdin (1992)'] df = pd.DataFrame(votes_list) df = df.T df.columns = movies df = df.apply(lambda x: x.str.replace(',','')) df['Star Rating'] = range(1,11)[::-1] df_____no_output_____# Convert df into tidy-format df_tidy = df.melt(id_vars='Star Rating') df_tidy = df_tidy.rename(columns={'variable': 'Movie', 'value': 'Number of Votes'}) df_tidy['Number of Votes'] = pd.to_numeric(df_tidy['Number of Votes']) df_tidy_____no_output_____df_tidy['Vote Percent'] = df_tidy.groupby('Movie')['Number of Votes'].apply(lambda x: x / x.sum() * 100) df_tidy.head()_____no_output_____df_tidy.info()<class 'pandas.core.frame.DataFrame'> RangeIndex: 40 entries, 0 to 39 Data columns (total 4 columns): Star Rating 40 non-null int64 Movie 40 non-null object Number of Votes 40 non-null int64 Vote Percent 40 non-null float64 dtypes: float64(1), int64(2), object(1) memory usage: 1.3+ KB import seaborn as sns sns.catplot(x='Star Rating', y='Vote Percent', col='Movie', col_wrap=2, height=6, kind='bar', data=df_tidy);_____no_output_____ </code> # Making Explanatory Visualizations with Seaborn_____no_output_____Today we will reproduce this [example by FiveThirtyEight:](https://fivethirtyeight.com/features/al-gores-new-movie-exposes-the-big-flaw-in-online-movie-ratings/) 
_____no_output_____ <code> from IPython.display import display, Image url = 'https://fivethirtyeight.com/wp-content/uploads/2017/09/mehtahickey-inconvenient-0830-1.png' example = Image(url=url, width=400) display(example)_____no_output_____ </code> Using this data: https://github.com/fivethirtyeight/data/tree/master/inconvenient-sequel_____no_output_____Links - [Strong Titles Are The Biggest Bang for Your Buck](http://stephanieevergreen.com/strong-titles/) - [Remove to improve (the data-ink ratio)](https://www.darkhorseanalytics.com/blog/data-looks-better-naked) - [How to Generate FiveThirtyEight Graphs in Python](https://www.dataquest.io/blog/making-538-plots/)_____no_output_____## Make prototypes This helps us understand the problem_____no_output_____ <code> %matplotlib inline import matplotlib.pyplot as plt import numpy as np import pandas as pd plt.style.use('fivethirtyeight') fake = pd.Series([38, 3, 2, 1, 2, 4, 6, 5, 5, 33], index=range(1,11)) fake.plot.bar(color='C1', width=0.9);_____no_output_____fake2 = pd.Series( [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 5, 5, 5, 6, 6, 6, 6, 7, 7, 7, 7, 7, 8, 8, 8, 8, 9, 9, 9, 9, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10]) fake2.value_counts().sort_index().plot.bar(color='C1', width=0.9);_____no_output_____ </code> ## Annotate with text_____no_output_____ <code> display(example)_____no_output_____import matplotlib.pyplot as plt plt.style.use('fivethirtyeight') fig, ax = plt.subplots(facecolor='white') fake.plot.bar(color='C1', width=0.9) plt.text(x=-1.5, y=50, fontsize=16, fontweight='bold', s = "'An Inconvenient Sequel: Truth to Power' is divisive") plt.text(x=-1.5, y=46, fontsize=12, s = "IMDb ratings for the film as of Aug. 29") plt.yticks([0, 10, 20, 30, 40]) plt.xlabel('Rating', fontsize=10, fontweight='bold') plt.ylabel('Percent of Total Votes', fontsize=10, fontweight='bold');_____no_output_____ </code> ## Reproduce with real data_____no_output_____ <code> df = pd.read_csv('https://raw.githubusercontent.com/fivethirtyeight/data/master/inconvenient-sequel/ratings.csv') print(df.shape) df.head()(80053, 27) df.category.value_counts()_____no_output_____df.dtypes_____no_output_____df['timestamp'] = pd.to_datetime(df['timestamp']) df.timestamp.describe()_____no_output_____df_imdb = df[df.category == 'IMDb users'] df_imdb.shape_____no_output_____final = df_imdb.tail(1) final_____no_output_____#columns = ['{}_pct'.format(i) for i in range(1,11)] columns = [f'{i}_pct' for i in range(1,11)] columns_____no_output_____data = final[columns] data = data.T data.index = range(1,11) data_____no_output_____plt.style.use('fivethirtyeight') data.plot.bar(color='C1', width=0.9, legend=False) plt.text(x=-1.5, y=50, fontsize=16, fontweight='bold', s = "'An Inconvenient Sequel: Truth to Power' is divisive") plt.text(x=-1.5, y=46, fontsize=12, s = "IMDb ratings for the film as of Aug. 
29") plt.yticks([0, 10, 20, 30, 40]) plt.xlabel('Rating', fontsize=10, fontweight='bold') plt.ylabel('Percent of Total Votes', fontsize=10, fontweight='bold');_____no_output_____ </code> ## Quick Introduction to Altair Plotting Package_____no_output_____ <code> import altair as alt from vega_datasets import data source = data.cars() source.head()_____no_output_____brush = alt.selection(type='interval', resolve='global') base = alt.Chart(source).mark_point().encode( y='Miles_per_Gallon', color=alt.condition(brush, 'Origin', alt.ColorValue('gray')), tooltip=['Name','Origin','Weight_in_lbs'], ).add_selection( brush ).properties( width=350, height=350 ) base.encode(x='Horsepower') | base.encode(x='Acceleration') | base.encode(x='Weight_in_lbs')_____no_output_____ </code> # ASSIGNMENT Replicate the lesson code. I recommend that you [do not copy-paste](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit). # STRETCH OPTIONS #### Reproduce another example from [FiveThirtyEight's shared data repository](https://data.fivethirtyeight.com/). For example: - [thanksgiving-2015](https://fivethirtyeight.com/features/heres-what-your-part-of-america-eats-on-thanksgiving/) (try the [`altair`](https://altair-viz.github.io/gallery/index.html#maps) library) - [candy-power-ranking](https://fivethirtyeight.com/features/the-ultimate-halloween-candy-power-ranking/) (try the [`statsmodels`](https://www.statsmodels.org/stable/index.html) library) - or another example of your choice! #### Make more charts! Choose a chart you want to make, from [Visual Vocabulary - Vega Edition](http://ft.com/vocabulary). Find the chart in an example gallery of a Python data visualization library: - [Seaborn](http://seaborn.pydata.org/examples/index.html) - [Altair](https://altair-viz.github.io/gallery/index.html) - [Matplotlib](https://matplotlib.org/gallery.html) - [Pandas](https://pandas.pydata.org/pandas-docs/stable/visualization.html) Reproduce the chart. [Optionally, try the "Ben Franklin Method."](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) If you want, experiment and make changes. Take notes. Consider sharing your work with your cohort! _____no_output_____
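For the "Make more charts!" stretch option, one possible warm-up is sketched below. The choice of Seaborn's bundled `tips` example dataset and of a scatter plot is an assumption here; any chart type from the galleries linked above can be substituted.

```python
import seaborn as sns
import matplotlib.pyplot as plt

# Small example dataset that ships with seaborn
tips = sns.load_dataset('tips')

# A simple scatter plot relating total bill to tip, split by time of day
sns.scatterplot(x='total_bill', y='tip', hue='time', data=tips)
plt.title('Tip vs. total bill')
plt.show()
```_____no_output_____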
{ "repository": "bruno-janota/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling", "path": "module3-make-explanatory-visualizations/LS_DS_123_Make_Explanatory_Visualizations.ipynb", "matched_keywords": [ "STAR" ], "stars": 1, "size": 287765, "hexsha": "48221a7e8f479fa68138d94996bae134f3f2cdc9", "max_line_length": 96855, "avg_line_length": 108.7958412098, "alphanum_fraction": 0.6266328428 }
# Notebook from katakasioma/import Path: Python/Jupyter_notebooks_solved/SwC_python_session-2-2.ipynb # Python session - 2.2 ## Functions and modules_____no_output_____## Functions Functions are reusable blocks of code that you can name and execute any number of times from different parts of your script(s). This reuse is known as "calling" the function. Functions are important building blocks of a software. There are several built-in functions of Python, which can be called anywhere (and any number of times) in your current program. You have been using built-in functions already, for example, `len()`, `range()`, `sorted()`, `max()`, `min()`, `sum()` etc._____no_output_____#### Structure of writing a function: - `def` (keyword) + function name (you choose) + `()`. - newline with 4 spaces or a tab + block of code # Note: Codes at the 0 position are always read - Call your function using its name_____no_output_____ <code> ## Non parametric function # Define a function that prints a sum of number1 and number2 defined inside the function def get_sum(): print(4 + 5) get_sum()9 # Parametric function # Define a function that prints a sum of number1 and number2 provided by the user # Hint: get_sum_param(number1, number2) def get_sum_param(number_1, number_2): print(number_1 + number_2) get_sum_param(6, 8)14 # Returning values # Define a function that 'returns' a sum of number1 and number2 provided by the user # Hint: print(get_sum_param(number1, number2)) def get_sum_param_return(number_1, number_2): return number_1 + number_2 my_sum = get_sum_param_return(4,9) print(my_sum)13 def give_number_and_string(number, string): return number * 10, string * 5 new_number, new_string = give_number_and_string(2, "z")_____no_output_____# Local Vs. global variable # Define a function that returns a sum of number1 and number2 to a variable # and print it after calling the function # Hint: returned_value = get_sum_param(number1, number2) _____no_output_____ </code> ### Exercises: write old codes into a function_____no_output_____ <code> # Optional exercise # Let’s take one of our older codes and write them in function _____no_output_____ </code> ### Using Modules One of the great things about Python is the free availability of a _huge_ number of modules that can be imported into your code and used. Modules are developed with the aim of solving some particular problem or providing particular, often domain-specific, capabilities. Like functions, which are usable parts of a program, packages (also known as libraries) are reusable programs with several modules. In order to import a module, it must first be installed and available on your system. We will cover this briefly later in the course. A large number of modules are already available for import in the standard distribution of Python: this is known as the standard library. If you installed the Anaconda distribution of Python, you have even more modules already installed - mostly aimed at data science. 
Importing a module is easy: - Import (keyword) + package name, for example: - import os # contains functions for interacting with the operating system - import sys # Contains utilities to process command line arguments More at: https://pypi.python.org/pypi_____no_output_____ <code> import os os.getcwd() os.mkdir("new_dir_name") help(os) # manual page created from the module's docstrings_____no_output_____import sys print(sys.argv)_____no_output_____ </code> ### Using loops to iterate through files in a directory_____no_output_____ <code> # define a function that lists all the files in the folder called demo_folder import os def read_each_filename(pathname): ... pathname = 'demo_folder' # name of path with multiple files read_each_filename(pathname)_____no_output_____# define a function that reads and prints each lines of each file in the folder called demo_folder import os def read_each_line_of_each_file(pathname): # name of path with multiple files ... pathname = 'demo_folder' # name of path with multiple files read_each_line_of_each_file(pathname) # Hints: # Options for opening files # option-1: with open("{}/{}".format(pathname, filename)) as in_fh: # option-2: with open('%s/%s' % (pathname, filename)) as in_fh: # option-3: with open(pathname + '/' + filename) as in_fh: # option-4: with open(os.path.join(pathname, filename)) as in_fh:_____no_output_____# Exercise: Go through each filename in the directory 'demo_folder' # open those files that end with only '.csv' or only with '.fasta._____no_output_____# Optional exercises (We will cover this in the session - 3) # 1. Extract the length of fasta sequence for kinases from the file 'fasta_human_kinase.fasta' # 2. Extract the UniProt for kinases from the file 'human_kinase.csv'_____no_output_____ </code> #### Examples of importing basic modules._____no_output_____ <code> # import numpy # array_one = numpy.array([1,2,3,4,5,6]) # print(array_one)_____no_output_____# import pandas # data = pandas.read_table() # data.plot()_____no_output_____ </code> #### Aside: Namespaces Python uses namespaces a lot, to ensure appropriate separation of functions, attributes, methdos etc between modules and objects. When you import an entire module, the functions and classes available within that module are loaded in under the modules namespace - `pandas` in the example above. It is possible to customise the namespace at the point of import, allowing you to e.g. shorten/abbreviate the module name to save some typing:_____no_output_____ <code> # import numpy as np # array_two = np.array([10, 11, 12, 13, 14]) # print(array_two)_____no_output_____# import pandas as pd # data = pd.read_table() # data.plot()_____no_output_____ </code> Also, as in the examples above, if you need only a single function from a module, you can import that directly into your main namespace (where you don't need to specify the module before the name of the function):_____no_output_____ <code> # from numpy import array # array_three = array([1, 1, 2, 3, 5, 8]) # print(array_three)_____no_output_____# from pandas import read_table # data = read_table() # data.plot()_____no_output_____ </code> #### Conventions - You should perform all of your imports at the beginning of your program. 
This ensures that - users can easily identify the dependencies of a program, and - that any lacking dependencies (causing fatal `ImportError` exceptions) are caught early in execution - the shortenings of `numpy` to `np` and `pandas` to `pd` are very common, and there are others too - watch out for this when e.g. reading docs and guides/SO answers online._____no_output_____### Exercises - Importing_____no_output_____ <code> # --- numpy # series_a = numpy.array([5, 5, 5, 5, 5]) # series_b = ---.array([1, 2, 3, 4, 5]) # series_c = series_a - series_b # print(series_c)_____no_output_____# import pandas --- pd # data = pd.read_table()_____no_output_____ </code> #### Aside: Your Own Modules Whenever you write some python code and save it as a script, with the `.py` file extension, you are creating your own module. If you define functions within that module, you can load them into other scripts and sessions (a short sketch follows the library list below)._____no_output_____### Some Interesting Module Libraries to Investigate - os - sys - shutil - random - collections - math - argparse - time - datetime - numpy - scipy - matplotlib - pandas - scikit-learn - requests - biopython - openpyxl_____no_output_____
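As promised above, here is a minimal sketch of creating and using your own module (the module name `my_tools` and its contents are just placeholders for illustration):_____no_output_____
<code>
# A minimal sketch: write a tiny module to disk, then import it.
# "my_tools" is a placeholder name; any valid module name works.
with open("my_tools.py", "w") as out_fh:
    out_fh.write("def greet(name):\n    return 'Hello, ' + name\n")

# Importing works because the notebook's working directory is on sys.path by default.
import my_tools
print(my_tools.greet("world"))   # Hello, world
</code>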
{ "repository": "katakasioma/import", "path": "Python/Jupyter_notebooks_solved/SwC_python_session-2-2.ipynb", "matched_keywords": [ "BioPython" ], "stars": null, "size": 12743, "hexsha": "4822a46f1a9282d2305705be39470a58fb4c1c50", "max_line_length": 304, "avg_line_length": 25.9531568228, "alphanum_fraction": 0.5685474378 }
# Notebook from MuhammadMiqdadKhan/Solution-of-IBM-s-Gloabal-Quantum-Challenge-2020 Path: Challenge4_CircuitDecomposition solution.ipynb # Exercise 4: Circuit Decomposition Wow! If you managed to solve the first three exercises, congratulations! The fourth problem is supposed to puzzle even the quantum experts among you, so don’t worry if you cannot solve it. If you can, hats off to you! You may recall from your quantum mechanics course that quantum theory is unitary. Therefore, the evolution of any (closed) system can be described by a unitary. But given an arbitrary unitary, can you actually implement it on your quantum computer? **"A set of quantum gates is said to be universal if any unitary transformation of the quantum data can be efficiently approximated arbitrarily well as a sequence of gates in the set."** (https://qiskit.org/textbook/ch-algorithms/defining-quantum-circuits.html) Every gate you run on the IBM Quantum Experience is transpiled into single qubit rotations and CNOT (CX) gates. We know that these constitute a universal gate set, which implies that any unitary can be implemented using only these gates. However, in general it is not easy to find a good decomposition for an arbitrary unitary. Your task is to find such a decomposition. You are given the following unitary:_____no_output_____ <code> from may4_challenge.ex4 import get_unitary U = get_unitary() from may4_challenge.ex4 import get_unitary from qiskit import QuantumCircuit, transpile, execute, BasicAer, extensions from qiskit.visualization import * from qiskit import QuantumCircuit from may4_challenge.ex4 import check_circuit, submit_circuit import numpy as np import sklearn as sk import scipy as scipy from scipy import linalg from qiskit.visualization import plot_state_city from qiskit.compiler import transpile from qiskit import Aer,execute,QuantumRegister from math import pi from qiskit import transpile import scipy import numpy as np_____no_output_____ </code> #### What circuit would make such a complicated unitary? Is there some symmetry, or is it random? We just updated Qiskit with the introduction of a quantum circuit library (https://github.com/Qiskit/qiskit-terra/tree/master/qiskit/circuit/library). This library gives users access to a rich set of well-studied circuit families, instances of which can be used as benchmarks (quantum volume), as building blocks in building more complex circuits (adders), or as tools to explore quantum computational advantage over classical computation (instantaneous quantum polynomial complexity circuits)._____no_output_____ <code> from qiskit import QuantumCircuit from may4_challenge.ex4 import check_circuit, submit_circuit_____no_output_____ </code> **Using only single qubit rotations and CNOT gates, find a quantum circuit that approximates that unitary $U$ by a unitary $V$ up to an error $\varepsilon = 0.01$, such that $\lVert U - V\rVert_2 \leq \varepsilon$ !** Note that the norm we are using here is the spectral norm, $\qquad \lVert A \rVert_2 = \max_{\lVert \psi \rVert_2= 1} \lVert A \psi \rVert$. This can be seen as the largest scaling factor that the matrix $A$ has on any initial (normalized) state $\psi$. One can show that this norm corresponds to the largest singular value of $A$, i.e., the square root of the largest eigenvalue of the matrix $A^\dagger A$, where $A^{\dagger}$ denotes the conjugate transpose of $A$. **When you submit a circuit, we remove the global phase of the corresponding unitary $V$ before comparing it with $U$ using the spectral norm. 
For example, if you submit a circuit that generates $V = \text{e}^{i\theta}U$, we remove the global phase $\text{e}^{i\theta}$ from $V$ before computing the norm, and you will have a successful submission. As a result, you do not have to worry about matching the desired unitary, $U$, up to a global phase.** As the single-qubit gates have a much higher fidelity than the two-qubit gates, we will look at the number of CNOT-gates, $n_{cx}$, and the number of u3-gates, $n_{u3}$, to determine the cost of your decomposition as $$ \qquad \text{cost} = 10 \cdot n_{cx} + n_{u3} $$ Try to optimize the cost of your decomposition. **Note that you will need to ensure that your circuit is composed only of $u3$ and $cx$ gates. The exercise is considered correctly solved if your cost is smaller than 1600.** --- For useful tips to complete this exercise as well as pointers for communicating with other participants and asking questions, please take a look at the following [repository](https://github.com/qiskit-community/may4_challenge_exercises). You will also find a copy of these exercises, so feel free to edit and experiment with these notebooks. ---_____no_output_____ <code> #U = get_unitary() #pi = 3.141 #print(U) #print("U has shape", U.shape) #H = scipy.linalg.hadamard(16)/4 #U = np.dot(H, U) #U = np.dot(U, H) #qc = QuantumCircuit(4) #print(pi) #qc.u3(pi/2,0,pi,0) #qc.u3(pi/2,0,pi,1) #qc.u3(pi/2,0,pi,2) #qc.u3(pi/2,0,pi,3) #qc.isometry(U,[0,1,2,3],[]) #qc.u3(pi/2,0,pi,0) #qc.u3(pi/2,0,pi,1) #qc.u3(pi/2,0,pi,2) #qc.u3(pi/2,0,pi,3) #qc = transpile(qc, basis_gates = ['u3', 'cx'], seed_transpiler=0, optimization_level=2) #print('gates = ', qc.count_ops()) #print('depth = ', qc.depth()_____no_output_____from scipy.linalg import hadamard H = hadamard(16, dtype=complex)/4 U = np.dot(H, U) U = np.dot(U, H) qc = QuantumCircuit(4) qc.u3(pi/2,0,pi,range(4)) qc.isometry(U,[0,1,2,3],[]) qc.u3(pi/2,0,pi,range(4)) qc = transpile(qc, basis_gates = ['u3', 'cx'], seed_transpiler=5, optimization_level=2)_____no_output_____##### check your quantum circuit by running the next line check_circuit(qc)Circuit stats: ||U-V||_2 = 6.633845386970182e-15 (U is the reference unitary, V is yours, and the global phase has been removed from both of them). Cost is 104 Great! Your circuit meets all the constrains. Your score is 104. The lower, the better! Feel free to submit your answer and remember you can re-submit a new circuit at any time! </code> You can check whether your circuit is valid before submitting it with `check_circuit(qc)`. Once you have a valid solution, please submit it by running the following cell (delete the `#` before `submit_circuit`). You can re-submit at any time. _____no_output_____ <code> from may4_challenge.ex4 import submit_circuit submit_circuit(qc)_____no_output_____ </code>
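As a closing sketch (`qc` and `np` are defined in the cells above; the remaining names are introduced here): the `u3(pi/2, 0, pi)` layers in the solution are exactly Hadamard gates, which undo the `H·U·H` conjugation applied to the matrix before `qc.isometry()`, and the challenge's cost function can be recomputed directly from `qc.count_ops()`._____no_output_____
<code>
# Check that u3(pi/2, 0, pi) is the Hadamard gate, using the standard u3 matrix
# u3(th, ph, la) = [[cos(th/2),            -e^{i la} sin(th/2)],
#                   [e^{i ph} sin(th/2),    e^{i (ph+la)} cos(th/2)]]
theta, phi, lam = np.pi/2, 0.0, np.pi
u3_mat = np.array([[np.cos(theta/2),                 -np.exp(1j*lam)*np.sin(theta/2)],
                   [np.exp(1j*phi)*np.sin(theta/2),   np.exp(1j*(phi+lam))*np.cos(theta/2)]])
H1 = np.array([[1, 1], [1, -1]])/np.sqrt(2)
print(np.allclose(u3_mat, H1))   # True

# Recompute cost = 10*n_cx + n_u3 from the transpiled circuit; this should
# reproduce the value reported by check_circuit above (104).
ops = qc.count_ops()
print(10*ops.get('cx', 0) + ops.get('u3', 0))
</code>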
{ "repository": "MuhammadMiqdadKhan/Solution-of-IBM-s-Gloabal-Quantum-Challenge-2020", "path": "Challenge4_CircuitDecomposition solution.ipynb", "matched_keywords": [ "evolution" ], "stars": 17, "size": 9394, "hexsha": "48238357a3537e21fc609789294cf3c74f8d5218", "max_line_length": 541, "avg_line_length": 40.3175965665, "alphanum_fraction": 0.6102831595 }
# Notebook from UB-Mannheim/NFDI Path: docs/docs/parsing/02_parsing_GEPRIS_search.ipynb # Parsing GEPRIS for the list of funded NFDI projects with GEPRIS IDs and descriptions Check out the the GEPRIS user interface for advanced search: https://gepris.dfg.de/gepris/OCTOPUS?task=doSearchExtended&context=projekt&keywords_criterion=NFDI&nurProjekteMitAB=false&findButton=Finden&person=&location=&fachlicheZuordnung=%23&pemu=32&peu=%23&zk_transferprojekt=false&teilprojekte=false&teilprojekte=true&bewilligungsStatus=&beginOfFunding=&gefoerdertIn=&oldContinentId=%23&continentId=%23&oldSubContinentId=%23%23&subContinentId=%23%23&oldCountryId=%23%23%23&countryKey=%23%23%23&einrichtungsart=-1_____no_output_____## Getting HTML via requests We use [requests](https://docs.python-requests.org) library to get HTML of that page into `text` variable and print first 36 characters of it._____no_output_____ <code> import requests GEPRIS_URL = "https://gepris.dfg.de/gepris/OCTOPUS" params = {'keywords_criterion': '', 'nurProjekteMitAB': 'false', 'findButton': 'Finden', 'task': 'doSearchExtended', 'pemu': 32, 'context': 'projekt', 'language': 'en', 'hitsPerPage': 50, 'index': 0} r = requests.get(GEPRIS_URL, params=params) text = r.text print(text[0:36])<?xml version="1.0" encoding="utf-8" </code> ## Parsing search results from HTML via BeautifulSoup We use the [BeautifulSoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) library. The number of pages for search results is_____no_output_____ <code> from bs4 import BeautifulSoup try: pages = int(soup.find('span', id="result-info").find('strong').text.split()[0]) except: pages = 1 print(pages)1 </code> All search results are found via_____no_output_____ <code> soup = BeautifulSoup(text, 'html.parser') results = soup.find_all("div", class_="results") print(results)[<div class="results"> <h2><a href="/gepris/projekt/441914366">GHGA – German Human Genome-Phenome Archive</a></h2> <span id="icons"><a href="/gepris/projekt/441914366?displayMode=print&amp;findButton=Finden&amp;hitsPerPage=50&amp;index=0&amp;keywords_criterion=&amp;language=en&amp;nurProjekteMitAB=false&amp;pemu=32" rel="nofollow" target="_blank" title="Open print view"><img alt="Print View" src="/gepris/images/iconPrint.gif"/></a></span></div>, <div class="results"> <h2><a href="/gepris/projekt/441926934">NFDI4Cat – NFDI for Catalysis-Related Sciences</a></h2> <span id="icons"><a href="/gepris/projekt/441926934?displayMode=print&amp;findButton=Finden&amp;hitsPerPage=50&amp;index=0&amp;keywords_criterion=&amp;language=en&amp;nurProjekteMitAB=false&amp;pemu=32" rel="nofollow" target="_blank" title="Open print view"><img alt="Print View" src="/gepris/images/iconPrint.gif"/></a></span></div>, <div class="results"> <h2><a href="/gepris/projekt/441958017">NFDI4Culture – Consortium for research data on material and immaterial cultural heritage</a></h2> <span id="icons"><a href="/gepris/projekt/441958017?displayMode=print&amp;findButton=Finden&amp;hitsPerPage=50&amp;index=0&amp;keywords_criterion=&amp;language=en&amp;nurProjekteMitAB=false&amp;pemu=32" rel="nofollow" target="_blank" title="Open print view"><img alt="Print View" src="/gepris/images/iconPrint.gif"/></a></span></div>, <div class="results"> <h2><a href="/gepris/projekt/441958208">NFDI4Chem – Chemistry Consortium in the NFDI</a></h2> <span id="icons"><a href="/gepris/projekt/441958208?displayMode=print&amp;findButton=Finden&amp;hitsPerPage=50&amp;index=0&amp;keywords_criterion=&amp;language=en&amp;nurProjekteMitAB=false&amp;pemu=32" 
rel="nofollow" target="_blank" title="Open print view"><img alt="Print View" src="/gepris/images/iconPrint.gif"/></a></span></div>, <div class="results"> <h2><a href="/gepris/projekt/442032008">NFDI4BioDiversity – Biodiversity, Ecology &amp; Environmental Data</a></h2> <span id="icons"><a href="/gepris/projekt/442032008?displayMode=print&amp;findButton=Finden&amp;hitsPerPage=50&amp;index=0&amp;keywords_criterion=&amp;language=en&amp;nurProjekteMitAB=false&amp;pemu=32" rel="nofollow" target="_blank" title="Open print view"><img alt="Print View" src="/gepris/images/iconPrint.gif"/></a></span></div>, <div class="results"> <h2><a href="/gepris/projekt/442077441">DataPLANT – Data in PLANT research</a></h2> <span id="icons"><a href="/gepris/projekt/442077441?displayMode=print&amp;findButton=Finden&amp;hitsPerPage=50&amp;index=0&amp;keywords_criterion=&amp;language=en&amp;nurProjekteMitAB=false&amp;pemu=32" rel="nofollow" target="_blank" title="Open print view"><img alt="Print View" src="/gepris/images/iconPrint.gif"/></a></span></div>, <div class="results"> <h2><a href="/gepris/projekt/442146713">NFDI4Ing – National Research Data Infrastructure for Engineering Services</a></h2> <span id="icons"><a href="/gepris/projekt/442146713?displayMode=print&amp;findButton=Finden&amp;hitsPerPage=50&amp;index=0&amp;keywords_criterion=&amp;language=en&amp;nurProjekteMitAB=false&amp;pemu=32" rel="nofollow" target="_blank" title="Open print view"><img alt="Print View" src="/gepris/images/iconPrint.gif"/></a></span></div>, <div class="results"> <h2><a href="/gepris/projekt/442326535">NFDI4Health – National Research Data Infrastructure for Personal Health Data</a></h2> <span id="icons"><a href="/gepris/projekt/442326535?displayMode=print&amp;findButton=Finden&amp;hitsPerPage=50&amp;index=0&amp;keywords_criterion=&amp;language=en&amp;nurProjekteMitAB=false&amp;pemu=32" rel="nofollow" target="_blank" title="Open print view"><img alt="Print View" src="/gepris/images/iconPrint.gif"/></a></span></div>, <div class="results"> <h2><a href="/gepris/projekt/442494171">KonsortSWD – Consortium for the Social, Behavioural, Educational, and Economic Sciences</a></h2> <span id="icons"><a href="/gepris/projekt/442494171?displayMode=print&amp;findButton=Finden&amp;hitsPerPage=50&amp;index=0&amp;keywords_criterion=&amp;language=en&amp;nurProjekteMitAB=false&amp;pemu=32" rel="nofollow" target="_blank" title="Open print view"><img alt="Print View" src="/gepris/images/iconPrint.gif"/></a></span></div>, <div class="results"> <h2><a href="/gepris/projekt/460033370">Text+</a></h2> <span id="icons"><a href="/gepris/projekt/460033370?displayMode=print&amp;findButton=Finden&amp;hitsPerPage=50&amp;index=0&amp;keywords_criterion=&amp;language=en&amp;nurProjekteMitAB=false&amp;pemu=32" rel="nofollow" target="_blank" title="Open print view"><img alt="Print View" src="/gepris/images/iconPrint.gif"/></a></span></div>, <div class="results"> <h2><a href="/gepris/projekt/460036893">NFDI4Earth - NFDI Consortium Earth System Sciences</a></h2> <span id="icons"><a href="/gepris/projekt/460036893?displayMode=print&amp;findButton=Finden&amp;hitsPerPage=50&amp;index=0&amp;keywords_criterion=&amp;language=en&amp;nurProjekteMitAB=false&amp;pemu=32" rel="nofollow" target="_blank" title="Open print view"><img alt="Print View" src="/gepris/images/iconPrint.gif"/></a></span></div>, <div class="results"> <h2><a href="/gepris/projekt/460037581">BERD@NFDI - NFDI for Business, Economic and Related Data</a></h2> <span id="icons"><a 
href="/gepris/projekt/460037581?displayMode=print&amp;findButton=Finden&amp;hitsPerPage=50&amp;index=0&amp;keywords_criterion=&amp;language=en&amp;nurProjekteMitAB=false&amp;pemu=32" rel="nofollow" target="_blank" title="Open print view"><img alt="Print View" src="/gepris/images/iconPrint.gif"/></a></span></div>, <div class="results"> <h2><a href="/gepris/projekt/460129525">NFDI4Microbiota - National Research Data Infrastructure for Microbiota Research</a></h2> <span id="icons"><a href="/gepris/projekt/460129525?displayMode=print&amp;findButton=Finden&amp;hitsPerPage=50&amp;index=0&amp;keywords_criterion=&amp;language=en&amp;nurProjekteMitAB=false&amp;pemu=32" rel="nofollow" target="_blank" title="Open print view"><img alt="Print View" src="/gepris/images/iconPrint.gif"/></a></span></div>, <div class="results"> <h2><a href="/gepris/projekt/460135501">MaRDI - Mathematical Research Data Initiative</a></h2> <span id="icons"><a href="/gepris/projekt/460135501?displayMode=print&amp;findButton=Finden&amp;hitsPerPage=50&amp;index=0&amp;keywords_criterion=&amp;language=en&amp;nurProjekteMitAB=false&amp;pemu=32" rel="nofollow" target="_blank" title="Open print view"><img alt="Print View" src="/gepris/images/iconPrint.gif"/></a></span></div>, <div class="results"> <h2><a href="/gepris/projekt/460197019">FAIRmat – FAIR Data Infrastructure for Condensed-Matter Physics and the Chemical Physics of Solids</a></h2> <span id="icons"><a href="/gepris/projekt/460197019?displayMode=print&amp;findButton=Finden&amp;hitsPerPage=50&amp;index=0&amp;keywords_criterion=&amp;language=en&amp;nurProjekteMitAB=false&amp;pemu=32" rel="nofollow" target="_blank" title="Open print view"><img alt="Print View" src="/gepris/images/iconPrint.gif"/></a></span></div>, <div class="results"> <h2><a href="/gepris/projekt/460234259">NFDI4DS - NFDI for Data Science and Artificial Intelligence</a></h2> <span id="icons"><a href="/gepris/projekt/460234259?displayMode=print&amp;findButton=Finden&amp;hitsPerPage=50&amp;index=0&amp;keywords_criterion=&amp;language=en&amp;nurProjekteMitAB=false&amp;pemu=32" rel="nofollow" target="_blank" title="Open print view"><img alt="Print View" src="/gepris/images/iconPrint.gif"/></a></span></div>, <div class="results"> <h2><a href="/gepris/projekt/460247524">NFDI-MatWerk - National Research Data Infrastructure for Materials Science &amp; Engineering</a></h2> <span id="icons"><a href="/gepris/projekt/460247524?displayMode=print&amp;findButton=Finden&amp;hitsPerPage=50&amp;index=0&amp;keywords_criterion=&amp;language=en&amp;nurProjekteMitAB=false&amp;pemu=32" rel="nofollow" target="_blank" title="Open print view"><img alt="Print View" src="/gepris/images/iconPrint.gif"/></a></span></div>, <div class="results"> <h2><a href="/gepris/projekt/460248186">PUNCH4NFDI - Particles, Universe, NuClei and Hadrons for the NFDI</a></h2> <span id="icons"><a href="/gepris/projekt/460248186?displayMode=print&amp;findButton=Finden&amp;hitsPerPage=50&amp;index=0&amp;keywords_criterion=&amp;language=en&amp;nurProjekteMitAB=false&amp;pemu=32" rel="nofollow" target="_blank" title="Open print view"><img alt="Print View" src="/gepris/images/iconPrint.gif"/></a></span></div>, <div class="results"> <h2><a href="/gepris/projekt/460248799">DAPHNE4NFDI - DAta from PHoton and Neutron Experiments for NFDI</a></h2> <span id="icons"><a href="/gepris/projekt/460248799?displayMode=print&amp;findButton=Finden&amp;hitsPerPage=50&amp;index=0&amp;keywords_criterion=&amp;language=en&amp;nurProjekteMitAB=false&amp;pemu=32" rel="nofollow" 
target="_blank" title="Open print view"><img alt="Print View" src="/gepris/images/iconPrint.gif"/></a></span></div>] </code> Let's process a bit those results. _____no_output_____ <code> consortia = [] for result in results: a = result.find('a') t = a.get_text().replace(' – ', ' - ') try: [title, description] = t.split(' - ') except: [title, description] = [t, ''] consortia.append(["https://gepris.dfg.de" + a.get('href'), title, description]) print(consortia)[['https://gepris.dfg.de/gepris/projekt/441914366', 'GHGA', 'German Human Genome-Phenome Archive'], ['https://gepris.dfg.de/gepris/projekt/441926934', 'NFDI4Cat', 'NFDI for Catalysis-Related Sciences'], ['https://gepris.dfg.de/gepris/projekt/441958017', 'NFDI4Culture', 'Consortium for research data on material and immaterial cultural heritage'], ['https://gepris.dfg.de/gepris/projekt/441958208', 'NFDI4Chem', 'Chemistry Consortium in the NFDI'], ['https://gepris.dfg.de/gepris/projekt/442032008', 'NFDI4BioDiversity', 'Biodiversity, Ecology & Environmental Data'], ['https://gepris.dfg.de/gepris/projekt/442077441', 'DataPLANT', 'Data in PLANT research'], ['https://gepris.dfg.de/gepris/projekt/442146713', 'NFDI4Ing', 'National Research Data Infrastructure for Engineering Services'], ['https://gepris.dfg.de/gepris/projekt/442326535', 'NFDI4Health', 'National Research Data Infrastructure for Personal Health Data'], ['https://gepris.dfg.de/gepris/projekt/442494171', 'KonsortSWD', 'Consortium for the Social, Behavioural, Educational, and Economic Sciences'], ['https://gepris.dfg.de/gepris/projekt/460033370', 'Text+', ''], ['https://gepris.dfg.de/gepris/projekt/460036893', 'NFDI4Earth', 'NFDI Consortium Earth System Sciences'], ['https://gepris.dfg.de/gepris/projekt/460037581', 'BERD@NFDI', 'NFDI for Business, Economic and Related Data'], ['https://gepris.dfg.de/gepris/projekt/460129525', 'NFDI4Microbiota', 'National Research Data Infrastructure for Microbiota Research'], ['https://gepris.dfg.de/gepris/projekt/460135501', 'MaRDI', 'Mathematical Research Data Initiative'], ['https://gepris.dfg.de/gepris/projekt/460197019', 'FAIRmat', 'FAIR Data Infrastructure for Condensed-Matter Physics and the Chemical Physics of Solids'], ['https://gepris.dfg.de/gepris/projekt/460234259', 'NFDI4DS', 'NFDI for Data Science and Artificial Intelligence'], ['https://gepris.dfg.de/gepris/projekt/460247524', 'NFDI-MatWerk', 'National Research Data Infrastructure for Materials Science & Engineering'], ['https://gepris.dfg.de/gepris/projekt/460248186', 'PUNCH4NFDI', 'Particles, Universe, NuClei and Hadrons for the NFDI'], ['https://gepris.dfg.de/gepris/projekt/460248799', 'DAPHNE4NFDI', 'DAta from PHoton and Neutron Experiments for NFDI']] </code> Finally, we create a pandas-dataframe_____no_output_____ <code> import pandas as pd nfdi = pd.DataFrame(consortia, columns=['GEPRIS', 'Title', 'Description']) nfdi_____no_output_____ </code> Let's save the dataframe to CSV-file._____no_output_____ <code> nfdi.to_csv("../../../data/GEPRIS_NFDI_all.csv", index=False, encoding='utf-8')_____no_output_____ </code>
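One loose end worth sketching: the `pages` value computed earlier is only meaningful once `soup` has been built from the response, and the request above only fetches the first page of results (`index=0`). A rough sketch of collecting every page before processing, assuming GEPRIS pages its results in steps of `hitsPerPage` as the search UI suggests:_____no_output_____
<code>
# Sketch: loop over all result pages by shifting the 'index' offset.
all_results = []
for page in range(pages):
    params['index'] = page * params['hitsPerPage']
    page_text = requests.get(GEPRIS_URL, params=params).text
    page_soup = BeautifulSoup(page_text, 'html.parser')
    all_results.extend(page_soup.find_all("div", class_="results"))
print(len(all_results))
</code>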
{ "repository": "UB-Mannheim/NFDI", "path": "docs/docs/parsing/02_parsing_GEPRIS_search.ipynb", "matched_keywords": [ "ecology" ], "stars": 1, "size": 25338, "hexsha": "48239adae33706dc59b14f369ad625e31d920b1a", "max_line_length": 2256, "avg_line_length": 62.1029411765, "alphanum_fraction": 0.5970479122 }
# Notebook from weichen-yan/nrpytutorial Path: in_progress/Tutorial-GiRaFFE_NRPy-Stilde-flux.ipynb <script async src="https://www.googletagmanager.com/gtag/js?id=UA-59152712-8"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); </script> # The Riemann Solution on $\tilde{S}_i$ using HLLE ### Author: Patrick Nelson This notebook documents the function from the original `GiRaFFE` that calculates the flux for $\tilde{S}_i$ according to the method of Harten, Lax, von Leer, and Einfeldt (HLLE), assuming that we have calculated the values of the velocity and magnetic field on the cell faces according to the piecewise-parabolic method (PPM) of [Colella and Woodward (1984)](https://crd.lbl.gov/assets/pubs_presos/AMCS/ANAG/A141984.pdf), modified for the case of GRFFE. **Notebook Status:** <font color=green><b> Validated </b></font> **Validation Notes:** This code has been validated to round-off level agreement with the corresponding code in the original `GiRaFFE` ### NRPy+ Source Code for this module: * [GiRaFFE_NRPy/Stilde_flux.py](../../edit/in_progress/GiRaFFE_NRPy/Stilde_flux.py) The differential equations that `GiRaFFE` evolves are written in conservation form, and thus have two different terms that contribute to the time evolution of some quantity: the flux term and the source term. The PPM method is what the original `GiRaFFE` uses to handle the flux term; hopefully, using this instead of finite-differencing will fix some of the problems we've been having with `GiRaFFE_NRPy`. In GRFFE, the evolution equation for the Poynting flux $\tilde{S}_i$ is given as $$ \boxed{\partial_t \tilde{S}_i + \underbrace{ \partial_j \left( \alpha \sqrt{\gamma} T^j_{{\rm EM} i} \right)}_{\rm Flux\ term} = \underbrace{\frac{1}{2} \alpha \sqrt{\gamma} T^{\mu \nu}_{\rm EM} \partial_i g_{\mu \nu}}_{\rm Source\ term}.} $$ We can then see that, if we rewrite this, the right-hand side (RHS) describing the time evolution $\partial_t \tilde{S}_i$ consists of two terms: the flux term and the source term. The flux term in particular can be tricky, as it may be discontinuous due to shocks or other sharp features. This presents difficulties when we take finite-difference derivatives of that term, leading to the Gibbs phenomenon. So, we implement a different algorithm to take the derivative. The flux term itself is, as written above, $\alpha \sqrt{\gamma} T^i_{{\rm EM} j} = \alpha \sqrt{\gamma} g_{j \mu} T^{\mu i}_{\rm EM}$, where $T^{\mu \nu}_{\rm EM} = b^2 u^\mu u^\nu + \frac{1}{2} b^2 g^{\mu \nu} - b^\mu b^\nu$; the following functions will compute this value so that we can easily take its derivative later. Having reconstructed the values of $v^i_{(n)}$ and $B^i$ on the cell faces, we will now compute the value of the flux of $\tilde{S}_i$ on each face. For each component of $\tilde{S}_i$ in each direction, we compute the flux as $$ F^{\rm HLL} = \frac{c_{\rm min} f_{\rm R} + c_{\rm max} f_{\rm L} - c_{\rm min} c_{\rm max} (U_{\rm R}-U_{\rm L})}{c_{\rm min} + c_{\rm max}}, $$ where $$ f = \alpha \sqrt{\gamma} T^j_{{\rm EM} i} $$ and $$ U = \tilde{S}_j. $$ Here, $i$ is direction in which we are computing the flux, and $j$ is the component of the momentum we are computing it for. Note that these two quantities are computed on both the left and right sides of the cell face. 
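As a quick sanity check on this prescription: if the reconstruction happens to return identical states on the two sides of the face, so that $f_{\rm R} = f_{\rm L} = f$ and $U_{\rm R} = U_{\rm L}$, the HLLE flux collapses to $$ F^{\rm HLL} = \frac{c_{\rm min} f + c_{\rm max} f - 0}{c_{\rm min} + c_{\rm max}} = f, $$ i.e., in smooth regions we simply recover the physical flux, while the $c_{\rm min} c_{\rm max} \left(U_{\rm R}-U_{\rm L}\right)$ term switches on only across jumps, where it supplies the numerical dissipation that keeps the scheme stable.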
We will be able to draw heavily on the [GRFFE module](../../edit/GRFFE/equations.py) ([Tutorial](../Tutorial-GRFFE_Equations-Cartesian.ipynb)) and the [GRHD module](../../edit/GRHD/equations.py) ([Tutorial](../Tutorial-GRHD_Equations-Cartesian.ipynb)) to compute $u^0$, $u^i$, and $b^\mu$, as well as the index-lowered forms of those vectors. Critically, these quantities depend on the Valencia 3-velocity $v^i_{(n)}$ and magnetic field $B^i$. We will not be using the normal gridfunctions for these, but rather the ones that we have previously calculated on the left and right sides of the cell faces using the [Piecewise Parabolic Method](Tutorial-GiRaFFE_NRPy_Ccode_library-PPM.ipynb). The speeds $c_\min$ and $c_\max$ are the characteristic speeds at which waves can travel through the plasma. In GRFFE, the expressions defining them reduce to functions of only the metric quantities. $c_\min$ is the negative of the minimum amongst the speeds $c_-$ and $0$ and $c_\max$ is the maximum amongst the speeds $c_+$ and $0$. The speeds $c_\pm = \frac{-b \pm \sqrt{b^2-4ac}}{2a}$ must be calculated on both the left and right faces, where $$a = 1/\alpha^2,$$ $$b = 2 \beta^i / \alpha^2$$ and $$c = -g^{ii} + (\beta^i)^2/\alpha^2.$$ Another point to consider is that since we are working on cell faces, not at the cell center, we can't use the normal metric values that we store. We will instead use the value of the metric interpolated onto the cell face, which we will assume has been previously done in this tutorial. The algorithm for finite-volume methods in general is as follows: 1. The Reconstruction Step - Piecewise Parabolic Method 1. Within each cell, fit to a function that conserves the volume in that cell using information from the neighboring cells * For PPM, we will naturally use parabolas 1. Use that fit to define the state at the left and right interface of each cell 1. Apply a slope limiter to mitigate Gibbs phenomenon 1. Interpolate the value of the metric gridfunctions on the cell faces 1. **Solving the Riemann Problem - Harten, Lax, von Leer, and Einfeldt (HLLE) (This notebook, $\tilde{S}_i$ only)** 1. **Use the left and right reconstructed states to calculate the unique state at the boundary** 1. Use the unique state to estimate the derivative in the cell_____no_output_____<a id='toc'></a> # Table of Contents $$\label{toc}$$ This notebook is organized as follows 1. [Step 1](#prelim): Preliminaries 1. [Step 2](#s_i_flux): The $\tilde{S}_i$ function 1. [Step 2.a](#hydro_speed): GRFFE characteristic wave speeds 1. [Step 2.b](#fluxes): Compute the HLLE fluxes 1. [Step 3](#code_validation): Code Validation against `GiRaFFE_NRPy.Stilde_flux` NRPy+ Module 1. [Step 4](#derive_speed): Complete Derivation of the Wave Speeds 1. [Step 5](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file_____no_output_____<a id='prelim'></a> # Step 1: Preliminaries \[Back to [top](#toc)\] $$\label{prelim}$$ This first block of code just sets up a subdirectory within `GiRaFFE_standalone_Ccodes/` to which we will write the C code. We will also import the core NRPy+ functionality and register the needed gridfunctions. Doing so will let NRPy+ figure out where to read and write data from/to.
_____no_output_____ <code> # Step 0: Add NRPy's directory to the path # https://stackoverflow.com/questions/16780014/import-file-from-parent-directory import os,sys nrpy_dir_path = os.path.join("..") if nrpy_dir_path not in sys.path: sys.path.append(nrpy_dir_path) from outputC import * # NRPy+: Core C code output module import finite_difference as fin # NRPy+: Finite difference C code generation module import NRPy_param_funcs as par # NRPy+: Parameter interface import grid as gri # NRPy+: Functions having to do with numerical grids import loop as lp # NRPy+: Generate C code loops import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support import reference_metric as rfm # NRPy+: Reference metric support import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface import shutil, os, sys # Standard Python modules for multiplatform OS-level functions thismodule = "GiRaFFE_NRPy-Stilde-flux" _____no_output_____ </code> <a id='s_i_flux'></a> # Step 2: The $\tilde{S}_i$ function \[Back to [top](#toc)\] $$\label{s_i_flux}$$ _____no_output_____<a id='hydro_speed'></a> ## Step 2.a: GRFFE characteristic wave speeds \[Back to [top](#toc)\] $$\label{hydro_speed}$$ Next, we will find the speeds at which the hydrodynamics waves propagate. We start from the speed of light (since FFE deals with very diffuse plasmas), which is $c=1.0$ in our chosen units. We then find the speeds $c_+$ and $c_-$ on each face with the function `find_cp_cm`; then, we find minimum and maximum speeds possible from among those. Below is the source code for `find_cp_cm`, edited to work with the NRPy+ version of GiRaFFE. One edit we need to make in particular is to the term `psim4*gupii` in the definition of `c`; that was written assuming the use of the conformal metric $\tilde{g}^{ii}$. Since we are not using that here, and are instead using the ADM metric, we should not multiply by $\psi^{-4}$. ```c static inline void find_cp_cm(REAL &cplus,REAL &cminus,const REAL v02,const REAL u0, const REAL vi,const REAL lapse,const REAL shifti, const REAL gammadet,const REAL gupii) { const REAL u0_SQUARED=u0*u0; const REAL ONE_OVER_LAPSE_SQUARED = 1.0/(lapse*lapse); // sqrtgamma = psi6 -> psim4 = gammadet^(-1.0/3.0) const REAL psim4 = pow(gammadet,-1.0/3.0); //Find cplus, cminus: const REAL a = u0_SQUARED * (1.0-v02) + v02*ONE_OVER_LAPSE_SQUARED; const REAL b = 2.0* ( shifti*ONE_OVER_LAPSE_SQUARED * v02 - u0_SQUARED * vi * (1.0-v02) ); const REAL c = u0_SQUARED*vi*vi * (1.0-v02) - v02 * ( gupii - shifti*shifti*ONE_OVER_LAPSE_SQUARED); REAL detm = b*b - 4.0*a*c; //ORIGINAL LINE OF CODE: //if(detm < 0.0) detm = 0.0; //New line of code (without the if() statement) has the same effect: detm = sqrt(0.5*(detm + fabs(detm))); /* Based on very nice suggestion from Roland Haas */ cplus = 0.5*(detm-b)/a; cminus = -0.5*(detm+b)/a; if (cplus < cminus) { const REAL cp = cminus; cminus = cplus; cplus = cp; } } ``` Comments documenting this have been excised for brevity, but are reproduced in $\LaTeX$ [below](#derive_speed). We could use this code directly, but there's substantial improvement we can make by changing the code into a NRPyfied form. Note the `if` statement; NRPy+ does not know how to handle these, so we must eliminate it if we want to leverage NRPy+'s full power. (Calls to `fabs()` are also cheaper than `if` statements.) 
This can be done if we rewrite this, taking inspiration from the other eliminated `if` statement documented in the above code block: ```c cp = 0.5*(detm-b)/a; cm = -0.5*(detm+b)/a; cplus = 0.5*(cp+cm+fabs(cp-cm)); cminus = 0.5*(cp+cm-fabs(cp-cm)); ``` This can be simplified further, by substituting `cp` and `cm` into the below equations and eliminating terms as appropriate. First note that `cp+cm = -b/a` and that `cp-cm = detm/a`. Thus, ```c cplus = 0.5*(-b/a + fabs(detm/a)); cminus = 0.5*(-b/a - fabs(detm/a)); ``` This fulfills the original purpose of the `if` statement in the original code because we have guaranteed that $c_+ \geq c_-$. This leaves us with an expression that can be much more easily NRPyfied. So, we will rewrite the following in NRPy+, making only minimal changes to be proper Python. However, it turns out that we can make this even simpler. In GRFFE, $v_0^2$ is guaranteed to be exactly one. In GRMHD, this speed was calculated as $$v_{0}^{2} = v_{\rm A}^{2} + c_{\rm s}^{2}\left(1-v_{\rm A}^{2}\right),$$ where the Alfv&eacute;n speed $v_{\rm A}^{2}$ $$v_{\rm A}^{2} = \frac{b^{2}}{\rho_{b}h + b^{2}}.$$ So, we can see that when the density $\rho_b$ goes to zero, $v_{0}^{2} = v_{\rm A}^{2} = 1$. Then \begin{align} a &= (u^0)^2 (1-v_0^2) + v_0^2/\alpha^2 \\ &= 1/\alpha^2 \\ b &= 2 \left(\beta^i v_0^2 / \alpha^2 - (u^0)^2 v^i (1-v_0^2)\right) \\ &= 2 \beta^i / \alpha^2 \\ c &= (u^0)^2 (v^i)^2 (1-v_0^2) - v_0^2 \left(g^{ii} - (\beta^i)^2/\alpha^2\right) \\ &= g^{ii} - (\beta^i)^2/\alpha^2, \end{align} are simplifications that should save us some time; we can see that $a \geq 0$ is guaranteed. Note that we also force `detm` to be positive. Thus, `detm/a` is guaranteed to be positive itself, rendering the calls to `nrpyAbs()` superfluous. Furthermore, we eliminate any dependence on the Valencia 3-velocity and the time compoenent of the four-velocity, $u^0$. This leaves us free to solve the quadratic in the familiar way: $$c_\pm = \frac{-b \pm \sqrt{b^2-4ac}}{2a}$$._____no_output_____ <code> # We'll write this as a function so that we can calculate the expressions on-demand for any choice of i def find_cp_cm(lapse,shifti,gupii): # Inputs: u0,vi,lapse,shift,gammadet,gupii # Outputs: cplus,cminus # a = 1/(alpha^2) a = 1/(lapse*lapse) # b = 2 beta^i / alpha^2 b = 2 * shifti /(lapse*lapse) # c = -g^{ii} + (beta^i)^2 / alpha^2 c = - gupii + shifti*shifti/(lapse*lapse) # Now, we are free to solve the quadratic equation as usual. We take care to avoid passing a # negative value to the sqrt function. detm = b*b - 4*a*c detm = sp.sqrt(sp.Rational(1,2)*(detm + nrpyAbs(detm))) global cplus,cminus cplus = sp.Rational(1,2)*(-b/a + detm/a) cminus = sp.Rational(1,2)*(-b/a - detm/a) _____no_output_____ </code> We will now write a function in NRPy+ similar to the one used in the old `GiRaFFE`, allowing us to generate the expressions with less need to copy-and-paste code; the key difference is that this one will be in Python, and generate optimized C code integrated into the rest of the operations. Notice that since we eliminated the dependence on velocities, none of the input quantities are different on either side of the face. So, this function won't really do much besides guarantee that `cmax` and `cmin` are positive and negative, respectively, but we'll leave the machinery here since it is likely to be a useful guide to somebody who wants to something similar. 
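(A brief aside to make the above concrete: with the coefficients as implemented in `find_cp_cm()`, $a = 1/\alpha^2$, $b = 2\beta^i/\alpha^2$, and $c = -g^{ii} + (\beta^i)^2/\alpha^2$, the discriminant simplifies to $b^2 - 4ac = 4 g^{ii}/\alpha^2$, so the roots are simply $c_\pm = -\beta^i \pm \alpha\sqrt{g^{ii}}$: purely metric quantities, which is another way of seeing why no velocity information enters the face-value calculation.)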
We use the same technique as above to replace the `if` statements inherent to the `MAX()` and `MIN()` functions._____no_output_____ <code> # We'll write this as a function, and call it within HLLE_solver, below. def find_cmax_cmin(flux_dirn,gamma_faceDD,beta_faceU,alpha_face): # Inputs: flux direction flux_dirn, Inverse metric gamma_faceUU, shift beta_faceU, # lapse alpha_face, metric determinant gammadet_face # Outputs: maximum and minimum characteristic speeds cmax and cmin # First, we need to find the characteristic speeds on each face gamma_faceUU,unusedgammaDET = ixp.generic_matrix_inverter3x3(gamma_faceDD) find_cp_cm(alpha_face,beta_faceU[flux_dirn],gamma_faceUU[flux_dirn][flux_dirn]) cpr = cplus cmr = cminus find_cp_cm(alpha_face,beta_faceU[flux_dirn],gamma_faceUU[flux_dirn][flux_dirn]) cpl = cplus cml = cminus # The following algorithms have been verified with random floats: global cmax,cmin # Now, we need to set cmax to the larger of cpr,cpl, and 0 cmax = sp.Rational(1,2)*(cpr+cpl+nrpyAbs(cpr-cpl)) cmax = sp.Rational(1,2)*(cmax+nrpyAbs(cmax)) # And then, set cmin to the smaller of cmr,cml, and 0 cmin = sp.Rational(1,2)*(cmr+cml-nrpyAbs(cmr-cml)) cmin = -sp.Rational(1,2)*(cmin-nrpyAbs(cmin)) _____no_output_____ </code> <a id='fluxes'></a> ## Step 2.b: Compute the HLLE fluxes \[Back to [top](#toc)\] $$\label{fluxes}$$ Finally, we can compute the flux in each direction. This momentum flux in the $m$ direction is defined as $\alpha \sqrt{\gamma} T^m_{\ \ j}$, based on the input `flux_dirn`. We have already defined $\alpha \sqrt{\gamma}$, so all we need to do is calculate $T^m_{\ \ j}$, where $T^{\mu \nu}_{\rm EM} = b^2 u^\mu u^\nu + \frac{1}{2} b^2 g^{\mu \nu} - b^\mu b^\nu$. In doing this index-lowering operation, recall that $g^{\mu \nu} g_{\nu \alpha} = \delta^\mu_\alpha$. We will do so in accordance with the method published by [Harten, Lax, and von Leer](https://epubs.siam.org/doi/pdf/10.1137/1025002) and [Einfeldt](https://epubs.siam.org/doi/10.1137/0725021) (hereafter HLLE) to solve the Riemann problem. So, we define $f(u) = T^m_{\ \ j}$ on each face as $$ f = \alpha \sqrt{\gamma} \left( (\rho+b^2)(u^0 v^m) u_j + (P+\frac{1}{2}b^2) \delta^m_j - b^m b_j \right); $$ Because $\rho = P = 0$ in GRFFE and $u^0 v^m = u^m$ in general (since $v^m$ is the drift velocity here), this simplifies to $$ f = \alpha \sqrt{\gamma} \left( b^2 u^m u_j + \frac{1}{2}b^2 \delta^m_j - b^m b_j \right). $$ We use $j$ to correspond to the component of the flux we are calculating; that is, $j=0$ corresponds to $x$, and so forth (however, remember that in a NRPy+ 3-vector, the numbers will still run from 0 to 2). $\delta^i_j$ is the standard Kronecker delta. We also define `U_{\rm R}` and `U_{\rm L}`: $$ U = \alpha \sqrt{\gamma} \left( (\rho+b^2) u^0 u_j - b^0 b_j \right), $$ which, in GRFFE, simplifies to $$ U = \alpha \sqrt{\gamma} \left( b^2 u^0 u_j - b^0 b_j \right). $$ In NRPy+, we'll let the GRHD and GRFFE modules handle these. and combine based on eq. 3.15 in the HLLE paper, $$ F^{\rm HLL} = \frac{c_{\rm min} f_{\rm R} + c_{\rm max} f_{\rm L} - c_{\rm min} c_{\rm max} (U_{\rm R}-U_{\rm L})}{c_{\rm min} + c_{\rm max}}, $$ We'll write the HLLE step as a function so that we can loop over `flux_dirn` and `mom_comp` and write each version needed as we need it._____no_output_____ <code> # We'll rewrite this assuming that we've passed the entire reconstructed # gridfunctions. 
You could also do this with only one point, but then you'd # need to declare everything as a Cparam in NRPy+ import GRHD.equations as GRHD import GRFFE.equations as GRFFE def calculate_GRFFE_Tmunu_and_contractions(flux_dirn, mom_comp, gammaDD,betaU,alpha,ValenciavU,BU,sqrt4pi): GRHD.compute_sqrtgammaDET(gammaDD) GRHD.u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha, betaU, gammaDD, ValenciavU) GRFFE.compute_smallb4U_with_driftvU_for_FFE(gammaDD, betaU, alpha, GRHD.u4U_ito_ValenciavU, BU, sqrt4pi) GRFFE.compute_smallbsquared(gammaDD, betaU, alpha, GRFFE.smallb4_with_driftv_for_FFE_U) GRFFE.compute_TEM4UU(gammaDD, betaU, alpha, GRFFE.smallb4_with_driftv_for_FFE_U, GRFFE.smallbsquared, GRHD.u4U_ito_ValenciavU) GRFFE.compute_TEM4UD(gammaDD, betaU, alpha, GRFFE.TEM4UU) # Compute conservative variables in terms of primitive variables GRHD.compute_S_tildeD(alpha, GRHD.sqrtgammaDET, GRFFE.TEM4UD) global U,F # Flux F = alpha*sqrt{gamma}*T^i_j F = alpha*GRHD.sqrtgammaDET*GRFFE.TEM4UD[flux_dirn+1][mom_comp+1] # U = alpha*sqrt{gamma}*T^0_j = Stilde_j U = GRHD.S_tildeD[mom_comp] def HLLE_solver(cmax, cmin, Fr, Fl, Ur, Ul): # This solves the Riemann problem for the mom_comp component of the momentum # flux StildeD in the flux_dirn direction. # st_j_flux = (c_\min f_R + c_\max f_L - c_\min c_\max ( st_j_r - st_j_l )) / (c_\min + c_\max) return (cmin*Fr + cmax*Fl - cmin*cmax*(Ur-Ul) )/(cmax + cmin) _____no_output_____ </code> Finally, we write the function that computes the actual flux. We take the parameter `flux_dirn` as input, so we can eventually create one C file for each flux direction. In each file, we will include the math to calculate each momentum-flux component `mom_comp` in that direction by looping over `mom_comp`. We have written the function `HLLE_solver()` so that we can easily compute the flux as specified by those two indices._____no_output_____ <code> def calculate_Stilde_flux(flux_dirn,inputs_provided=True,alpha_face=None,gamma_faceDD=None,beta_faceU=None,\ Valenciav_rU=None,B_rU=None,Valenciav_lU=None,B_lU=None,sqrt4pi=None): find_cmax_cmin(flux_dirn,gamma_faceDD,beta_faceU,alpha_face) global Stilde_fluxD Stilde_fluxD = ixp.zerorank3() for mom_comp in range(3): calculate_GRFFE_Tmunu_and_contractions(flux_dirn, mom_comp, gamma_faceDD,beta_faceU,alpha_face,\ Valenciav_rU,B_rU,sqrt4pi) Fr = F Ur = U calculate_GRFFE_Tmunu_and_contractions(flux_dirn, mom_comp, gamma_faceDD,beta_faceU,alpha_face,\ Valenciav_lU,B_lU,sqrt4pi) Fl = F Ul = U Stilde_fluxD[mom_comp] = HLLE_solver(cmax, cmin, Fr, Fl, Ur, Ul) _____no_output_____ </code> <a id='code_validation'></a> # Step 3: Code Validation against `GiRaFFE_NRPy.Stilde_flux` NRPy+ Module \[Back to [top](#toc)\] $$\label{code_validation}$$ Here, as a code validation check, we verify agreement in the SymPy expressions for the $\texttt{GiRaFFE}$ evolution equations and auxiliary quantities we intend to use between 1. this tutorial and 2. the NRPy+ [GiRaFFE_NRPy.Stilde_flux](../../edit/in_progress/GiRaFFE_NRPy/Stilde_flux.py) module. _____no_output_____ <code> all_passed=True def comp_func(expr1,expr2,basename,prefixname2="C2P_P2C."): if str(expr1-expr2)!="0": print(basename+" - "+prefixname2+basename+" = "+ str(expr1-expr2)) all_passed=False def gfnm(basename,idx1,idx2=None,idx3=None): if idx2==None: return basename+"["+str(idx1)+"]" if idx3==None: return basename+"["+str(idx1)+"]["+str(idx2)+"]" return basename+"["+str(idx1)+"]["+str(idx2)+"]["+str(idx3)+"]" # These are the standard gridfunctions we've used before. 
#ValenciavU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","ValenciavU",DIM=3) #gammaDD = ixp.register_gridfunctions_for_single_rank2("AUXEVOL","gammaDD","sym01") #betaU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","betaU") #alpha = gri.register_gridfunctions("AUXEVOL",["alpha"]) #AD = ixp.register_gridfunctions_for_single_rank1("EVOL","AD",DIM=3) #BU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","BU",DIM=3) # We will pass values of the gridfunction on the cell faces into the function. This requires us # to declare them as C parameters in NRPy+. We will denote this with the _face infix/suffix. alpha_face = gri.register_gridfunctions("AUXEVOL","alpha_face") gamma_faceDD = ixp.register_gridfunctions_for_single_rank2("AUXEVOL","gamma_faceDD","sym01") beta_faceU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","beta_faceU") # We'll need some more gridfunctions, now, to represent the reconstructions of BU and ValenciavU # on the right and left faces Valenciav_rU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","Valenciav_rU",DIM=3) B_rU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","B_rU",DIM=3) Valenciav_lU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","Valenciav_lU",DIM=3) B_lU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","B_lU",DIM=3) sqrt4pi = sp.symbols('sqrt4pi',real=True) # ...and some more for the fluxes we calculate here. These three gridfunctions will each store # the momentum flux of one component of StildeD in one direction; we'll be able to reuse them # as we loop over each direction, reducing our memory costs. Stilde_fluxD = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","Stilde_fluxD",DIM=3) import GiRaFFE_NRPy.Stilde_flux as Sf for flux_dirn in range(3): expr_list = [] exprcheck_list = [] namecheck_list = [] print("Checking the flux in direction "+str(flux_dirn)) calculate_Stilde_flux(flux_dirn,True,alpha_face,gamma_faceDD,beta_faceU,\ Valenciav_rU,B_rU,Valenciav_lU,B_lU,sqrt4pi) Sf.calculate_Stilde_flux(flux_dirn,True,alpha_face,gamma_faceDD,beta_faceU,\ Valenciav_rU,B_rU,Valenciav_lU,B_lU,sqrt4pi) for mom_comp in range(3): namecheck_list.extend([gfnm("Stilde_fluxD",mom_comp)]) exprcheck_list.extend([Sf.Stilde_fluxD[mom_comp]]) expr_list.extend([Stilde_fluxD[mom_comp]]) for mom_comp in range(len(expr_list)): comp_func(expr_list[mom_comp],exprcheck_list[mom_comp],namecheck_list[mom_comp]) import sys if all_passed: print("ALL TESTS PASSED!") else: print("ERROR: AT LEAST ONE TEST DID NOT PASS") sys.exit(1) Checking the flux in direction 0 Checking the flux in direction 1 Checking the flux in direction 2 ALL TESTS PASSED! </code> <a id='derive_speed'></a> # Step 4: Complete Derivation of the Wave Speeds \[Back to [top](#toc)\] $$\label{derive_speed}$$ This computes phase speeds in the direction given by flux_dirn. Note that we replace the full dispersion relation with a simpler one, which overestimates the maximum speeds by a factor of ~2. See full discussion around Eqs. 49 and 50 in [Duez, et al.](http://arxiv.org/pdf/astro-ph/0503420.pdf). In summary, we solve the dispersion relation (in, e.g., the $x$-direction) with a wave vector of $k_\mu = (-\omega,k_x,0,0)$. 
So, we solve the approximate dispersion relation $\omega_{\rm cm}^2 = [v_A^2 + c_s^2 (1-v_A^2)]k_{\rm cm}^2$ for the wave speed $\omega/k_x$, where the sound speed $c_s = \sqrt{\Gamma P/(h \rho_0)}$, the Alfv&eacute;n speed $v_A = 1$ (in GRFFE), $\omega_{\rm cm} = -k_\mu k^\mu$ is the frequency in the comoving frame, $k_{\rm cm}^2 = K_\mu K^\mu$ is the wavenumber squared in the comoving frame, and $K_\mu = (g_{\mu\nu} + u_\mu u_\nu)k^\nu$ is the part of the wave vector normal to the four-velocity $u^\mu$. See below for a complete derivation. What follows is a complete derivation of the quadratic we solve. \begin{align} w_{\rm cm} &= (-k_0 u^0 - k_x u^x) \\ k_{\rm cm}^2 &= K_{\mu} K^{\mu}, \\ K_{\mu} K^{\mu} &= (g_{\mu a} + u_{\mu} u_a) k^a g^{\mu b} [ (g_{c b} + u_c u_b) k^c ] \\ \rightarrow g^{\mu b} (g_{c b} + u_{c} u_{b}) k^c &= (\delta^{\mu}_c + u_c u^{\mu} ) k^c \\ &= (g_{\mu a} + u_{\mu} u_a) k^a (\delta^{\mu}_c + u_c u^{\mu} ) k^c \\ &=[(g_{\mu a} + u_{\mu} u_a) \delta^{\mu}_c + (g_{\mu a} + u_{\mu} u_a) u_c u^{\mu} ] k^c k^a \\ &=[(g_{c a} + u_c u_a) + (u_c u_a - u_a u_c] k^c k^a \\ &=(g_{c a} + u_c u_a) k^c k^a \\ &= k_a k^a + u^c u^a k_c k_a \\ k^a = g^{\mu a} k_{\mu} &= g^{0 a} k_0 + g^{x a} k_x \\ k_a k^a &= k_0 g^{0 0} k_0 + k_x k_0 g^{0 x} + g^{x 0} k_0 k_x + g^{x x} k_x k_x \\ &= g^{00} (k_0)^2 + 2 g^{x0} k_0 k_x + g^{xx} (k_x)^2 \\ u^c u^a k_c k_a &= (u^0 k_0 + u^x k_x) (u^0 k_0 + u^x k_x) = (u^0 k_0)^2 + 2 u^x k_x u^0 k_0 + (u^x k_x)^2 \\ (k_0 u0)^2 + 2 k_x u^x k_0 u^0 + (k_x u^x)^2 &= v_0^2 [ (u^0 k_0)^2 + 2 u^x k_x u^0 k_0 + (u^x k_x)^2 + g^{00} (k_0)^2 + 2 g^{x0} k_0 k_x + g^{xx} (k_x)^2] \\ (1-v_0^2) (u^0 k_0 + u^x k_x)^2 &= v_0^2 (g^{00} (k_0)^2 + 2 g^{x0} k_0 k_x + g^{xx} (k_x)^2) \\ (1-v_0^2) (u^0 k_0/k_x + u^x)^2 &= v_0^2 (g^{00} (k_0/k_x)^2 + 2 g^{x0} k_0/k_x + g^{xx}) \\ (1-v_0^2) (u^0 X + u^x)^2 &= v_0^2 (g^{00} X^2 + 2 g^{x0} X + g^{xx}) \\ (1-v_0^2) ((u^0)^2 X^2 + 2 u^x (u^0) X + (u^x)^2) &= v_0^2 (g^{00} X^2 + 2 g^{x0} X + g^{xx}) \\ &X^2 ( (1-v_0^2) (u^0)^2 - v_0^2 g^{00}) + X (2 u^x u^0 (1-v_0^2) - 2 v_0^2 g^{x0}) + (1-v_0^2) (u^x)^2 - v_0^2 g^{xx} \\ a &= (1-v_0^2) (u^0)^2 - v_0^2 g^{00} = (1-v_0^2) (u^0)^2 + v_0^2/\alpha^2 \leftarrow {\rm VERIFIED} \\ b &= 2 u^x u^0 (1-v_0^2) - 2 v_0^2 \beta^x/\alpha^2 \leftarrow {\rm VERIFIED,\ } X\rightarrow -X, {\rm because\ } X = -w/k_1, {\rm \ and\ we\ are\ solving\ for} -X. \\ c &= (1-v_0^2) (u^x)^2 - v_0^2 (g^{xx}\psi^{-4} - (\beta^x/\alpha)^2) \leftarrow {\rm VERIFIED} \\ v_0^2 &= v_A^2 + c_s^2 (1 - v_A^2) \\ \end{align} _____no_output_____<a id='latex_pdf_output'></a> # Step 5: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\] $$\label{latex_pdf_output}$$ The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. 
After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename [Tutorial-GiRaFFE_NRPy-Stilde-flux.pdf](Tutorial-GiRaFFE_NRPy-Stilde-flux.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)_____no_output_____ <code> !jupyter nbconvert --to latex --template latex_nrpy_style.tplx --log-level='WARN' Tutorial-GiRaFFE_NRPy-Stilde-flux.ipynb !pdflatex -interaction=batchmode Tutorial-GiRaFFE_NRPy-Stilde-flux.tex !pdflatex -interaction=batchmode Tutorial-GiRaFFE_NRPy-Stilde-flux.tex !pdflatex -interaction=batchmode Tutorial-GiRaFFE_NRPy-Stilde-flux.tex !rm -f Tut*.out Tut*.aux Tut*.logThis is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex) restricted \write18 enabled. entering extended mode This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex) restricted \write18 enabled. entering extended mode This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex) restricted \write18 enabled. entering extended mode </code>
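As a final numerical spot check of the Step 2.a wave speeds (a standalone sketch that repeats the same algebra as `find_cp_cm()` but with SymPy's `Abs`, so the result evaluates numerically here): flat spacetime, with $\alpha=1$, $\beta^i=0$, and $g^{ii}=1$, should give $c_\pm = \pm 1$, i.e., the speed of light._____no_output_____
<code>
import sympy as sp

def cp_cm_check(lapse, shifti, gupii):
    # Same coefficients as find_cp_cm() above
    a = 1/(lapse*lapse)
    b = 2*shifti/(lapse*lapse)
    c = -gupii + shifti*shifti/(lapse*lapse)
    detm = b*b - 4*a*c
    detm = sp.sqrt(sp.Rational(1,2)*(detm + sp.Abs(detm)))
    return sp.Rational(1,2)*(-b/a + detm/a), sp.Rational(1,2)*(-b/a - detm/a)

# Flat spacetime: alpha = 1, beta^i = 0, g^{ii} = 1  ->  c_pm = +/- 1
print(cp_cm_check(sp.Integer(1), sp.Integer(0), sp.Integer(1)))   # (1, -1)
</code>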
{ "repository": "weichen-yan/nrpytutorial", "path": "in_progress/Tutorial-GiRaFFE_NRPy-Stilde-flux.ipynb", "matched_keywords": [ "evolution" ], "stars": null, "size": 34790, "hexsha": "4824157da3a4a23011002e22b712c7d791829ced", "max_line_length": 1001, "avg_line_length": 58.2747068677, "alphanum_fraction": 0.6019833285 }
# Notebook from shreyaraghavendra/BoltBio Path: code/preprocessing/GE_getTable.ipynb # Create a table with TCGA data_____no_output_____ <code> # import libraries import os import sys import pandas as pd import numpy as np import regex as re from matplotlib import pyplot as plt import time PATH_TO_DATA = '/Users/kushan/BoltBio/ge_data'_____no_output_____ </code> Set *working_dir* to the directory where you downloaded files in *data*_____no_output_____ <code> dirs = os.listdir(PATH_TO_DATA)_____no_output_____PATH = os.getcwd() PATH_TO_UTILS = '/Users/kushan/BoltBio/code/utils/GE'_____no_output_____len(dirs)_____no_output_____ </code> Prepare a list of genes that satisfied filters described by *Dey et al.* [Visualizing the structure of RNA-seq expression data using grade of membership models](https://journals.plos.org/plosgenetics/article?id=10.1371/journal.pgen.1006599)_____no_output_____We will store data in df DataFrame with genes as *index* and samples as *columns*_____no_output_____ <code> # filter only files with FPKM data def getFilenameFromDir(directory): if ".DS_Store" in directory: return None for element in os.listdir(directory): if re.match("[a-zA-Z0-9]{8}-[a-zA-Z0-9]{4}-[a-zA-Z0-9]{4}-[a-zA-Z0-9\-]{4}-[a-zA-Z0-9\-]{12}[\.FPKM]{5}.txt[\.gz]{0,3}",element): cfile = element print(element) return cfile raise BaseException("Not found %s"%os.listdir(directory))_____no_output_____ </code> Create the dataframe, this may take a long time_____no_output_____ <code> # set the maximum number of samples to insert in the dataset maxacceptables = 150000 # count the number of added samples df = pd.DataFrame() added = len(df.columns) # iterate c(urrent)directory in downloaded directories for i,cdirectory in enumerate(dirs): # manifest is not a data file if re.match("manifest\.txt",cdirectory): print("SKIPPING %s "%cdirectory) continue # Icon and DS_Store are MacOS files if "Icon" in cdirectory: print("SKIPPING %s "%cdirectory) continue if ".DS_Store" in cdirectory: print("SKIPPING %s "%cdirectory) continue # current file name cfile = getFilenameFromDir(f"{PATH_TO_DATA}/%s"%cdirectory) # sample dataframe cdf = pd.read_csv((f"{PATH_TO_DATA}/%s/%s"%(cdirectory,cfile)), sep='\t', header=None) cdf.columns = ["gene", cfile[:]] # get only first 15 characters of gene name cdf['gene'] = [gene[:15] for gene in cdf['gene']] if i == 0: df['gene'] = cdf['gene'] df.set_index('gene',inplace=True) # set genes as index cdf.set_index('gene',inplace=True) # number of samples added so far old_L = len(df.columns) #insert new sample df.insert(0,cdf.keys()[0][:],cdf.values) # if something went wrong and data was not added raise exception if len(df.columns) != old_L+1: print(*sys.exc_info()) raise(Exception("Not able to add: %s"%cfile)) # break if added more than acceptables if added >= maxacceptables: break print(added, i)314b3b08-27e7-4936-a7d9-2dce4e4d3db7.FPKM.txt.gz ca7c56ab-9248-4c27-8992-8f73746d8d9b.FPKM.txt.gz 21e82bf2-237f-4f8f-86d0-aaf2ea5c4729.FPKM.txt.gz b0a26c8d-9863-4352-8abe-dde66bdb8e55.FPKM.txt.gz 69ad4bfb-b94a-4b72-986b-d6a73febd362.FPKM.txt.gz 4ba3508e-981b-46e4-a575-4f1d72015a7c.FPKM.txt.gz 2ce7edaf-0a05-444a-ba19-a26ee1b74513.FPKM.txt.gz 3dd9e081-d183-49b4-8d67-afa513496f21.FPKM.txt.gz 4dc43045-bf76-47df-a7d1-1cdfeed6b471.FPKM.txt.gz d81394e3-a55e-4b85-b13d-e3fa1806c800.FPKM.txt.gz 04c8f4a3-77c8-4085-ae46-ae66ab486c08.FPKM.txt.gz c03b9756-2611-404f-ba89-79e7143381bb.FPKM.txt.gz 1a0ba473-dd46-4a74-af37-784643492999.FPKM.txt.gz 72091aa8-e5a3-4468-8604-e5fbfefe5971.FPKM.txt.gz 
[cell output trimmed: the loop printed each of the ~594 FPKM file names as it was added, along with "SKIPPING manifest.txt" for the non-data manifest file]
0 594 print(("genes:%d\tsamples:%d"%(len(df.index),len(df.columns))))genes:60483 samples:594 </code> Save data to a .csv file_____no_output_____ <code> # drop genes all empty and round (to reduce storage space) df.dropna(how='all', axis=0).round(decimals=2).to_csv(f"{PATH_TO_UTILS}/mainTable_all.csv", index=True)_____no_output_____ </code>
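A note on the table-building loop above: `added` is set from `len(df.columns)` once, before the loop, and never updated inside it, so the `maxacceptables` cap can never trigger (which is why the final `print(added, i)` reports `0 594`). Repeated `df.insert` calls on a growing DataFrame are also slow for hundreds of samples. The snippet below is only an illustrative alternative, not part of the original notebook: it collects each sample as a Series and joins everything with a single `pd.concat`. It assumes the same directory layout and reuses `dirs`, `PATH_TO_DATA`, `maxacceptables` and the `getFilenameFromDir` helper defined above; the names `series_list` and `df_alt` are introduced here purely for illustration.

```python
# Illustrative alternative (assumed, not the original code): read each sample into a
# Series keyed by gene, then build the genes x samples table with one pd.concat call.
series_list = []
for cdirectory in dirs:
    # skip the manifest and macOS metadata entries, as the original loop does
    if cdirectory in ("manifest.txt", ".DS_Store") or "Icon" in cdirectory:
        continue
    cfile = getFilenameFromDir(f"{PATH_TO_DATA}/{cdirectory}")
    cdf = pd.read_csv(f"{PATH_TO_DATA}/{cdirectory}/{cfile}", sep='\t',
                      header=None, names=["gene", cfile])
    cdf["gene"] = cdf["gene"].str[:15]        # keep only the 15-character Ensembl ID
    series_list.append(cdf.set_index("gene")[cfile])
    if len(series_list) >= maxacceptables:    # the cap now actually counts added samples
        break

df_alt = pd.concat(series_list, axis=1)       # aligns on the gene index, one join in total
print("genes:%d\tsamples:%d" % (len(df_alt.index), len(df_alt.columns)))
```

Because `pd.concat` aligns on the gene index, this variant also tolerates files whose genes are not listed in exactly the same order.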
{ "repository": "shreyaraghavendra/BoltBio", "path": "code/preprocessing/GE_getTable.ipynb", "matched_keywords": [ "RNA-seq" ], "stars": null, "size": 41758, "hexsha": "4824ed66fa8db24b7dd458920de87eed27f7799d", "max_line_length": 246, "avg_line_length": 49.830548926, "alphanum_fraction": 0.6723981034 }
# Notebook from philuttley/basic_linux_and_coding Path: 6_astropy.ipynb ![astropy_banner.jpg](media/astropy_banner.jpg)_____no_output_____## Sharing code is healthy for the community and the science it produces - a community-developed core library for professional astronomical research - combines many functionalities from a variety of astronomy packages and languages - Goals: Usability, Interoperability and Collaboration between packages ### Affiliated Packages - astronomy related python packages - adhere to interface standards of astropy but are not (yet) part of the astropy core package - currently 40 affiliated packages covering many areas of astronomical research ### Since astropy Version 3.0.0 the package only supports Python 3! ### Is now included by default in Anaconda_____no_output_____## Queries, Coordinate Systems, Time and Units ### Observation planning with astropy_____no_output_____### Observability of Fomalhaut from the VLT: initial setup_____no_output_____ <code> import numpy as np import matplotlib.pyplot as plt from astropy.visualization import astropy_mpl_style plt.style.use(astropy_mpl_style) import astropy.units as u from astropy.time import Time from astropy.coordinates import SkyCoord, EarthLocation, AltAz # Lets observe the star Fomalhaut with the ESO VLT - 8m Telescope in Chile # Load the position of Fomalhaut from the Simbad database fomalhaut = SkyCoord.from_name('Fomalhaut') print(fomalhaut) # Load the position of the Observatory. Physical units should be assigned via the # units function paranal = EarthLocation(lat=-24.62*u.deg, lon=-70.40*u.deg, height=2635*u.m) print(paranal) # The coordinates are stored as geocentric (position relative to # earth centre-of-mass) as a default_____no_output_____ </code> ### Calculate Fomalhaut and solar altitude/azimuth_____no_output_____ <code> # We want to observe Fomalhaut next week. We will determine the position # in the sky as seen from Paranal in a 24 hour window centred on local midnight Oct 15. 
# (Oct 15 *starts* at 00:00:00) midnight = Time('2019-10-15 00:00:00') delta_midnight = np.linspace(-12, 12, 1000)*u.hour times_Oct14_to_15 = midnight + delta_midnight frame_Oct14_to_15 = AltAz(obstime=times_Oct14_to_15, location=paranal) # Now we transform the Fomalhaut object to the Altitute/Azimuth coordinate system fomalhaut_altazs_Oct14_to_15 = fomalhaut.transform_to(frame_Oct14_to_15) #print(fomalhaut_altazs_Oct14_to_15) _____no_output_____# We also check the position of the sun in the sky over the same time range from astropy.coordinates import get_sun sunaltazs_Oct14_to_15 = get_sun(times_Oct14_to_15).transform_to(frame_Oct14_to_15)_____no_output_____ </code> ### Determining the night-time observability of Fomalhaut_____no_output_____ <code> # Plot the sun altitude plt.plot(delta_midnight, sunaltazs_Oct14_to_15.alt, color='r', label='Sun') # Plot Fomalhaut's alt/az - use a colour map to represent azimuth plt.scatter(delta_midnight, fomalhaut_altazs_Oct14_to_15.alt, c=fomalhaut_altazs_Oct14_to_15.az, label='Fomalhaut', lw=0, s=8, cmap='viridis') # Now plot the range when the sun is below the horizon, and at least 18 degrees below # the horizon - this shows the range of twilight (-0 to -18 deg) and night (< -18 deg) plt.fill_between(delta_midnight.to('hr').value, 0, 90, sunaltazs_Oct14_to_15.alt < -0*u.deg, color='0.7', zorder=0) plt.fill_between(delta_midnight.to('hr').value, 0, 90, sunaltazs_Oct14_to_15.alt < -18*u.deg, color='0.4', zorder=0) plt.colorbar().set_label('Azimuth [deg]') plt.legend(loc='upper left') plt.xlim(-12, 12) plt.xticks(np.arange(13)*2 -12) plt.ylim(0, 90) plt.xlabel('Hours from UT Midnight') plt.ylabel('Altitude [deg]') plt.show()_____no_output_____ </code> ## Fitting models to data ### Health warning: This is a very basic intro - to do this properly and not make catastrophic errors in your interpretation and/or analysis, you need to do a statistics course! (e.g. for much more detail, see Statistical Methods course in Block 3)_____no_output_____### Simple 1d model_____no_output_____ <code> import numpy as np import matplotlib.pyplot as plt from astropy.modeling import models, fitting # Generate data with some random noise x = np.linspace(-5., 5., 200) y = 3 * np.exp(-0.5 * (x - 1.3)**2 / 0.8**2) # Gaussian function y += np.random.normal(0., 0.2, x.shape) # add Normally distributed scatter plt.plot(x, y, 'ko') plt.show()_____no_output_____# Fit the data using a Gaussian g_init = models.Gaussian1D(amplitude=1., mean=0, stddev=1.) fit_g = fitting.LevMarLSQFitter() # Initialises fitting method as Levenberg-Marquardt # least squares fitting g = fit_g(g_init, x, y) # Fits the data, note that fit_g is effectively an alias # for the L-M fitting function, so must use the same arguments in addition to any # pre-specified in the initial assignment of the function to fit_g above # (none are included in this case) plt.plot(x, y, 'ko') plt.plot(x, g(x), "r-") plt.show() _____no_output_____ </code> ### 2d model_____no_output_____ <code> import warnings import numpy as np import matplotlib.pyplot as plt from astropy.modeling import models, fitting # Generate fake data np.random.seed(0) y, x = np.mgrid[:128, :128] z = 2. * x ** 2 - 0.5 * x ** 2 + 1.5 * x * y - 1. 
# Polynomial function z += np.random.normal(0., 0.1, z.shape) * 50000._____no_output_____# Fit the data using astropy.modeling p_init = models.Polynomial2D(degree=2) fit_p = fitting.LevMarLSQFitter() with warnings.catch_warnings(): # Ignore model linearity warning from the fitter warnings.simplefilter('ignore') p = fit_p(p_init, x, y, z) _____no_output_____# Plot the data with the best-fit model plt.figure(figsize=(8, 2.5)) plt.subplot(1, 3, 1) plt.imshow(z, origin='lower', interpolation='nearest', vmin=-1e4, vmax=5e4) plt.title("Data") plt.subplot(1, 3, 2) plt.imshow(p(x, y), origin='lower', interpolation='nearest', vmin=-1e4, vmax=5e4) plt.title("Model") # We can also plot data - model to look for any systematic deviations of the data # from the model that might suggest we need a better model or choice of parameters plt.subplot(1, 3, 3) plt.imshow(z - p(x, y), origin='lower', interpolation='nearest', vmin=-1e4, vmax=5e4) plt.title("Residual") plt.show()_____no_output_____ </code> ## Some affiliated packages_____no_output_____### astroquery_____no_output_____ <code> import numpy as np import matplotlib.pyplot as plt from astropy.modeling import models, fitting from astroquery.vizier import Vizier # Cepheids Period-Luminosity data from Bhardwaj et al. 2017 catalog = Vizier.get_catalogs('J/A+A/605/A100') print(catalog) period = np.array(catalog[0]['Period']) log_period = np.log10(period) k_mag = np.array(catalog[0]['__Ksmag_']) k_mag_err = np.array(catalog[0]['e__Ksmag_']) plt.errorbar(log_period, k_mag, k_mag_err, fmt='k.') plt.xlabel(r'$\log_{10}$(Period [days])') plt.ylabel('Ks') plt.show()_____no_output_____# Lets now fit a simple model to the data model = models.Linear1D() fitter = fitting.LinearLSQFitter() best_fit = fitter(model, log_period, k_mag, weights=1.0/k_mag_err**2) plt.errorbar(log_period,k_mag,k_mag_err,fmt='k.') plt.plot(log_period, best_fit(log_period), color='g', linewidth=3) plt.xlabel(r'$\log_{10}$(Period [days])') plt.ylabel('Ks') plt.show()_____no_output_____ </code> ### photutils_____no_output_____ <code> from photutils import datasets from photutils import aperture_photometry from photutils import SkyCircularAperture from astropy import units as u from astropy.coordinates import SkyCoord import astropy.io.fits as fits hdu = fits.open("./data/spitzer.fits") catalog = datasets.load_spitzer_catalog() plt.imshow(hdu[0].data, vmin=-1, vmax=30, origin="lower", cmap=plt.cm.gist_heat) plt.show() _____no_output_____positions = SkyCoord(catalog['l'], catalog['b'], frame='galactic') apertures = SkyCircularAperture(positions, r=4.8 * u.arcsec) phot_table = aperture_photometry(hdu[0], apertures) # conversion to flux per pixel with pixel scale of 1.2 arcsec/pixel factor = (1.2 * u.arcsec) ** 2 / u.pixel converted_aperture_sum = (phot_table['aperture_sum'] * factor).to(u.mJy / u.pixel) # loading the catalog measurements fluxes_catalog = catalog['f4_5'] import matplotlib.pyplot as plt plt.scatter(fluxes_catalog, converted_aperture_sum.value) plt.xlabel('Spitzer catalog PSF-fit fluxes ') plt.ylabel('Aperture photometry fluxes') plt.show()_____no_output_____ </code> ### Fitting circumstellar disks with photutils ![ellipse_fitting_photutils.jpg](media/ellipse_fitting_photutils.jpg)_____no_output_____![python.png](media/python.png) Image Credit: XKCD_____no_output_____
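As a small add-on to the Fomalhaut observability example earlier in this notebook (a sketch only, reusing `delta_midnight`, `fomalhaut_altazs_Oct14_to_15` and `sunaltazs_Oct14_to_15` defined above): `AltAz` coordinates expose `secz`, the secant of the zenith angle, which is a common approximation of airmass, so the most favourable observation time during astronomical night can be picked out directly. The 10-degree altitude floor and the names `night`, `up`, `airmass` and `best` are choices made here, not part of the original notebook.

```python
# Sketch: time of minimum airmass for Fomalhaut during astronomical night, reusing
# objects defined in the observability example above (assumptions noted in the text).
import numpy as np
import astropy.units as u

night = sunaltazs_Oct14_to_15.alt < -18 * u.deg      # Sun more than 18 deg below horizon
up = fomalhaut_altazs_Oct14_to_15.alt > 10 * u.deg   # avoid sec(z) blowing up near the horizon
airmass = np.array(fomalhaut_altazs_Oct14_to_15.secz, dtype=float)
airmass[~(night & up)] = np.inf                      # exclude daytime / low-altitude samples

best = int(np.argmin(airmass))
print("Best time: %+.2f h from midnight, altitude %.1f deg, airmass %.2f"
      % (delta_midnight[best].to_value(u.hour),
         fomalhaut_altazs_Oct14_to_15.alt[best].to_value(u.deg),
         airmass[best]))
```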
{ "repository": "philuttley/basic_linux_and_coding", "path": "6_astropy.ipynb", "matched_keywords": [ "STAR" ], "stars": null, "size": 14608, "hexsha": "4825243ae24b73ea689672c10ba6cf7e5f8f731a", "max_line_length": 187, "avg_line_length": 27.3046728972, "alphanum_fraction": 0.5700985761 }
# Notebook from mojito9542/gpt-2 Path: GPT-2.ipynb <a href="https://colab.research.google.com/github/mojito9542/gpt-2/blob/master/GPT-2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>_____no_output_____Initializing the notebook _____no_output_____ <code> !git clone https://github.com/openai/gpt-2 Cloning into 'gpt-2'... remote: Enumerating objects: 230, done. remote: Total 230 (delta 0), reused 0 (delta 0), pack-reused 230 Receiving objects: 100% (230/230), 4.38 MiB | 6.68 MiB/s, done. Resolving deltas: 100% (121/121), done. lsgpt-2/ sample_data/ cd gpt-2/content/gpt-2 !pip install -r requirements.txtCollecting fire>=0.1.3 [?25l Downloading https://files.pythonhosted.org/packages/d9/69/faeaae8687f4de0f5973694d02e9d6c3eb827636a009157352d98de1129e/fire-0.2.1.tar.gz (76kB)  |████████████████████████████████| 81kB 6.1MB/s [?25hCollecting regex==2017.4.5 [?25l Downloading https://files.pythonhosted.org/packages/36/62/c0c0d762ffd4ffaf39f372eb8561b8d491a11ace5a7884610424a8b40f95/regex-2017.04.05.tar.gz (601kB)  |████████████████████████████████| 604kB 26.7MB/s [?25hRequirement already satisfied: requests==2.21.0 in /usr/local/lib/python3.6/dist-packages (from -r requirements.txt (line 3)) (2.21.0) Collecting tqdm==4.31.1 [?25l Downloading https://files.pythonhosted.org/packages/6c/4b/c38b5144cf167c4f52288517436ccafefe9dc01b8d1c190e18a6b154cd4a/tqdm-4.31.1-py2.py3-none-any.whl (48kB)  |████████████████████████████████| 51kB 8.6MB/s [?25hRequirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from fire>=0.1.3->-r requirements.txt (line 1)) (1.12.0) Requirement already satisfied: termcolor in /usr/local/lib/python3.6/dist-packages (from fire>=0.1.3->-r requirements.txt (line 1)) (1.1.0) Requirement already satisfied: urllib3<1.25,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests==2.21.0->-r requirements.txt (line 3)) (1.24.3) Requirement already satisfied: idna<2.9,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests==2.21.0->-r requirements.txt (line 3)) (2.8) Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests==2.21.0->-r requirements.txt (line 3)) (2019.11.28) Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests==2.21.0->-r requirements.txt (line 3)) (3.0.4) Building wheels for collected packages: fire, regex Building wheel for fire (setup.py) ... [?25l[?25hdone Created wheel for fire: filename=fire-0.2.1-py2.py3-none-any.whl size=103528 sha256=b7dc771b5fcaf60a108edf848d41c7b00f4fec3b52e71694d3149086596725aa Stored in directory: /root/.cache/pip/wheels/31/9c/c0/07b6dc7faf1844bb4688f46b569efe6cafaa2179c95db821da Building wheel for regex (setup.py) ... 
[?25l[?25hdone Created wheel for regex: filename=regex-2017.4.5-cp36-cp36m-linux_x86_64.whl size=533184 sha256=c5e0f39695b3978979c1a842e58822ea71bcd3771585200333445b448e42be96 Stored in directory: /root/.cache/pip/wheels/75/07/38/3c16b529d50cb4e0cd3dbc7b75cece8a09c132692c74450b01 Successfully built fire regex Installing collected packages: fire, regex, tqdm Found existing installation: regex 2019.12.20 Uninstalling regex-2019.12.20: Successfully uninstalled regex-2019.12.20 Found existing installation: tqdm 4.28.1 Uninstalling tqdm-4.28.1: Successfully uninstalled tqdm-4.28.1 Successfully installed fire-0.2.1 regex-2017.4.5 tqdm-4.31.1 cat DEVELOPERS.md# Installation Git clone this repository, and `cd` into directory for remaining commands ``` git clone https://github.com/openai/gpt-2.git && cd gpt-2 ``` Then, follow instructions for either native or Docker installation. ## Native Installation All steps can optionally be done in a virtual environment using tools such as `virtualenv` or `conda`. Install tensorflow 1.12 (with GPU support, if you have a GPU and want everything to run faster) ``` pip3 install tensorflow==1.12.0 ``` or ``` pip3 install tensorflow-gpu==1.12.0 ``` Install other python packages: ``` pip3 install -r requirements.txt ``` Download the model data ``` python3 download_model.py 124M python3 download_model.py 355M python3 download_model.py 774M python3 download_model.py 1558M ``` ## Docker Installation Build the Dockerfile and tag the created image as `gpt-2`: ``` docker build --tag gpt-2 -f Dockerfile.gpu . # or Dockerfile.cpu ``` Start an interactive bash session from the `gpt-2` docker image. You can opt to use the `--runtime=nvidia` flag if you have access to a NVIDIA GPU and a valid install of [nvidia-docker 2.0](https://github.com/nvidia/nvidia-docker/wiki/Installation-(version-2.0)). ``` docker run --runtime=nvidia -it gpt-2 bash ``` # Running | WARNING: Samples are unfiltered and may contain offensive content. | | --- | Some of the examples below may include Unicode text characters. Set the environment variable: ``` export PYTHONIOENCODING=UTF-8 ``` to override the standard stream settings in UTF-8 mode. 
## Unconditional sample generation To generate unconditional samples from the small model: ``` python3 src/generate_unconditional_samples.py | tee /tmp/samples ``` There are various flags for controlling the samples: ``` python3 src/generate_unconditional_samples.py --top_k 40 --temperature 0.7 | tee /tmp/samples ``` To check flag descriptions, use: ``` python3 src/generate_unconditional_samples.py -- --help ``` ## Conditional sample generation To give the model custom prompts, you can use: ``` python3 src/interactive_conditional_samples.py --top_k 40 ``` To check flag descriptions, use: ``` python3 src/interactive_conditional_samples.py -- --help ``` !python3 download_model.py 124M !python3 download_model.py 355M !python3 download_model.py 774M !python3 download_model.py 1558M Fetching checkpoint: 0%| | 0.00/77.0 [00:00<?, ?it/s] Fetching checkpoint: 1.00kit [00:00, 1.30Mit/s] Fetching encoder.json: 0%| | 0.00/1.04M [00:00<?, ?it/s] Fetching encoder.json: 1.04Mit [00:00, 52.0Mit/s] Fetching hparams.json: 1.00kit [00:00, 1.29Mit/s] Fetching model.ckpt.data-00000-of-00001: 498Mit [00:05, 88.7Mit/s] Fetching model.ckpt.index: 6.00kit [00:00, 5.92Mit/s] Fetching model.ckpt.meta: 472kit [00:00, 54.6Mit/s] Fetching vocab.bpe: 457kit [00:00, 51.7Mit/s] Fetching checkpoint: 1.00kit [00:00, 1.06Mit/s] Fetching encoder.json: 1.04Mit [00:00, 55.9Mit/s] Fetching hparams.json: 1.00kit [00:00, 1.21Mit/s] Fetching model.ckpt.data-00000-of-00001: 1.42Git [00:25, 55.1Mit/s] Fetching model.ckpt.index: 11.0kit [00:00, 8.88Mit/s] Fetching model.ckpt.meta: 927kit [00:00, 54.7Mit/s] Fetching vocab.bpe: 457kit [00:00, 41.3Mit/s] Fetching checkpoint: 1.00kit [00:00, 1.25Mit/s] Fetching encoder.json: 1.04Mit [00:00, 70.7Mit/s] Fetching hparams.json: 1.00kit [00:00, 1.17Mit/s] Fetching model.ckpt.data-00000-of-00001: 3.10Git [00:55, 55.6Mit/s] Fetching model.ckpt.index: 16.0kit [00:00, 10.2Mit/s] Fetching model.ckpt.meta: 1.38Mit [00:00, 72.9Mit/s] Fetching vocab.bpe: 457kit [00:00, 66.3Mit/s] Fetching checkpoint: 1.00kit [00:00, 1.10Mit/s] Fetching encoder.json: 1.04Mit [00:00, 50.8Mit/s] Fetching hparams.json: 1.00kit [00:00, 1.12Mit/s] Fetching model.ckpt.data-00000-of-00001: 6.23Git [02:34, 40.3Mit/s] Fetching model.ckpt.index: 21.0kit [00:00, 16.6Mit/s] Fetching model.ckpt.meta: 1.84Mit [00:00, 63.3Mit/s] Fetching vocab.bpe: 457kit [00:00, 51.3Mit/s] ls CONTRIBUTORS.md Dockerfile.gpu LICENSE README.md DEVELOPERS.md domains.txt model_card.md requirements.txt Dockerfile.cpu download_model.py models/ src/ !python3 src/interactive_conditional_samples.py --model_name "1558M"WARNING:tensorflow:From src/interactive_conditional_samples.py:57: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead. 
[TensorFlow GPU start-up log trimmed: CUDA/cuDNN libraries loaded, Tesla P4 (compute capability 6.1) detected and added as visible GPU device 0, XLA services initialised]
2020-02-20 06:23:01.738863: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7123 MB memory) -> physical GPU (device: 0, name: Tesla P4, pci bus id: 0000:00:04.0, compute capability: 6.1) WARNING:tensorflow:From src/interactive_conditional_samples.py:58: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead. WARNING:tensorflow:From src/interactive_conditional_samples.py:60: The name tf.set_random_seed is deprecated. Please use tf.compat.v1.set_random_seed instead. WARNING:tensorflow:From /content/gpt-2/src/sample.py:51: The name tf.AUTO_REUSE is deprecated. Please use tf.compat.v1.AUTO_REUSE instead. WARNING:tensorflow:From /content/gpt-2/src/model.py:148: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead. WARNING:tensorflow:From /content/gpt-2/src/model.py:152: The name tf.get_variable is deprecated. Please use tf.compat.v1.get_variable instead. WARNING:tensorflow:From /content/gpt-2/src/model.py:36: The name tf.rsqrt is deprecated. Please use tf.math.rsqrt instead. WARNING:tensorflow:From /content/gpt-2/src/sample.py:64: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.cast` instead. WARNING:tensorflow:From /content/gpt-2/src/sample.py:39: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.where in 2.0, which has the same broadcast rule as np.where WARNING:tensorflow:From /content/gpt-2/src/sample.py:67: multinomial (from tensorflow.python.ops.random_ops) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.random.categorical` instead. WARNING:tensorflow:From src/interactive_conditional_samples.py:68: The name tf.train.Saver is deprecated. Please use tf.compat.v1.train.Saver instead. Hi! My name is Mohit Agarwal. I love eating desserts. Model prompt >>> 2020-02-20 06:26:19.798124: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10 I dont know What to do with my life anymore. So i decided to become a rockstar. For that I quit my job as it was neccesary to dedicate my undivided attention to it. After that I practiced every day. But i never succeded throwing rocks to star however hard i try. ======================================== SAMPLE 1 ========================================  Hundreds of my genetics administer variants called duration polymorphisms in the gene encoding the sweet taste receptor, known as T1R2. Each of the polymorphisms, R128A and R151T, while thought to affect whether a person likes, or not likes, sweets, can be subdivided into five different categories: Sweet taste liking Traumatic taste liking Sweetness Acceptance Sweetness aversion Flavour perception Traumatic (stinging, burning), balmy and salivating taste dysphoria Sweetness appetite Predator-prey stress reproductive One elusive panelist was transfixed by a type of eating disorder displayed by a deformed pit bull puppy she rescued. This disorder displays an extreme fear of teat, and as the dog (grown male, at the time) became overweight it developed a chronic and painless skin injury and severe vulval dermatomyositis. This was truly bizarre: painful, itchy, nightmarish.(12) Approaching a drinking fountain saw a youngster having an aggressive attack against his mother and female companion. 
She remembered that the youngster had sought protection from something in a commune two days earlier. Closer examination revealed that the youngster, after achieving a space boundary, banged the door of the toilet with chicken bones relished his favourite Frisbee. Even the cheerful endlessly imaging Plaseau expects to see here now. Too much sweets can induce similar bowel upsets.  Marvellous moment with the pool, maybe a joint? Bake well, chill, stay normal. D Still fat, though. One striking thing about the panel of the German study, their normal appearance had been either scarring or not so normal, because his wrist lacked the usual tricep growth. It was just a parallel in evolution to the case of disease.  Clearly this person was different.  Then this person had served with Germany in Morocco in January-March 1989 thanks to an award call with long lead time.     Even his foreign experience, whilst acquired some week later than had thought, was a success, his wife understood. He became part of a cadre of young cooks in several black families. After a telephone conversation with the mother, the husband requested that we not contact him. On the contrary, we should communicate with us are we loved him. Most curiously, he could see all, but he had a very prominent left eye, as did many members of his family. He was slowed by highway speed direction but not by emergency vehicles. Rightward. There was ================================================================================ Model prompt >>> hi ======================================== SAMPLE 1 ======================================== I got my hate and start burning my fingers from throwing. It was not even enough it was too much trouble. later I discovered how to run like acow.I had a brief goal to be like simon paul soundman. (helped him a lot in his career) I dont know what to do with my life anymore. So i decided to become a rockstar. For that I quit my job as it was neccesary to dedicate my undivided attention to it. After that I practiced every day.But i never succeded throwing rocks to star however hard i try. I got my hate and start burning my fingers from throwing. It was not even enough it was too much trouble. later Music and performers: Pariah X × Dark Comet — "Hook" ( _uic3 ) × Scott Molinakis × Marilyn Manson × Max Styler × Guilty Simpson × Jeff Garnett × FantastiC × Josh Myers × A$AP Rocky ―― Ani Mitsuji, Hell's Angel To do this screen capturing thing, you have to follow from left to right… That's pretty hard to do with only one of your fingers!! In fact, I can only do it when I'm convinced I never need to use my other finger like this. ugh. Music and performers: Eddie Montgomery × Aurephan — "Dash" ( c_vsp ) × ONS (@syqbookdaddy) × Carrick Shields × Lady Chablis × LaSkar × Kaytranada ―― God, less hand movements… Save your wishful thinking for when you're 7 inches tall. Music and performers: The Fugees × Lisa Loeb ―― [Sleepwalking in the Presidential Affairs Zone] Note that things definitely are not as bad as they seem right before an emergency! Haha. Thanks my savior Peter Van Houten for accompanying me in these studies. Will smugly gloat to plot points they concealed from discussions in Teankara. Music and performers: Stephen Lang × Itagaki Makoto L.M. × Rising Force × Serena Fuji ( Remi ), Michikazu Kusano × Hiroyoshi Tenzan × Vince-kun ×<|endoftext|>Are you concerned about when your local pothole will become a highway? 
This is a vehicle pothole that worries many residents, as it proved fatal for a child four years ago, and ================================================================================ Model prompt >>> ======================================== SAMPLE 1 ======================================== , Aru Akamno, Naohisa Ohata, Eiichiro Sasano, Masahiko Date, Takeharu Gotou, Riki Iwai, Koji Igarashi, Noshiro Tano, Tomooaki Inoue, Takaaki Kuroda, Hiroko Fujimori, Hiroko Watachi, Baka Shin, Kazuya Ogawa, Saburo cover illustrations Kenshito Sugawara Uilin Pang, the first human captured by Whisper is arrested by Whisper, and sent to the Therian Research Facility where he was implanted with a Onyx Core core within his heart that he can now program his Servant with.[11] Unbeknownst to him, this Onyx Core has been given to the Impure King by the Will of the Velvet Darkling, who wishes to somehow magnify the power of his minions. The Impure King reaches out to Plant, who petrifies most of the humans and CET was "killed", unable to produce Resources, vehicles enough to abduct more men, or sufficient size and material to support more men. The Man Who Became Zheng He observes the destruction of the people of the gas station, discovers who all of the impurities are, and escapes before he can act. This is going to be the catalyst to the curse of the Crown in Christian theology. With the armor lining and guts of Valentinian 4 removed, and the human heart exposed, Zheng He's side is blown open and he now has access to whatever the main body can grasp and control. Going through the supply lines of the executiveia of the continent, Zheng He arrives in the slums of Prague, observing the chaos. He is confronted and stopped by Géza, the bodyguard of Łukas Seniö. Géza and the Impure King are told by Santana and Hämsterviel to surrender, as they cannot fight their way out of a Salvatore vinifera net covered cardboard box while Big Mugga copulates with her. But their plan to make a show of victory and capture a bunch of people for worth of their St. Andrew's Cross has been busted due to Kazuma, Paris, and Guella joining the battle, and the attack directly causes an earthquake which wipes out the entire street below. 
Zheng He infiltrates Seniostra's Tower Module Maktabi, destroys its central basis, and emerging, is confronted by Suzaku Kururugi R ================================================================================ Model prompt >>> Traceback (most recent call last): File "/usr/lib/python3.6/contextlib.py", line 99, in __exit__ self.gen.throw(type, value, traceback) File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py", line 5480, in get_controller yield g File "src/interactive_conditional_samples.py", line 73, in interact_model raw_text = input("Model prompt >>> ") KeyboardInterrupt During handling of the above exception, another exception occurred: Traceback (most recent call last): File "src/interactive_conditional_samples.py", line 91, in <module> fire.Fire(interact_model) File "/usr/local/lib/python3.6/dist-packages/fire/core.py", line 138, in Fire component_trace = _Fire(component, args, parsed_flag_args, context, name) File "/usr/local/lib/python3.6/dist-packages/fire/core.py", line 471, in _Fire target=component.__name__) File "/usr/local/lib/python3.6/dist-packages/fire/core.py", line 675, in _CallAndUpdateTrace component = fn(*varargs, **kwargs) File "src/interactive_conditional_samples.py", line 88, in interact_model print("=" * 80) File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1633, in __exit__ close_thread.start() File "/usr/lib/python3.6/threading.py", line 851, in start self._started.wait() File "/usr/lib/python3.6/threading.py", line 551, in wait signaled = self._cond.wait(timeout) File "/usr/lib/python3.6/threading.py", line 295, in wait waiter.acquire() KeyboardInterrupt !python3 src/interactive_conditional_samples.py --helpINFO: Showing help with the command 'interactive_conditional_samples.py -- --help'. NAME interactive_conditional_samples.py - Interactively run the model :model_name=124M : String, which model to use :seed=None : Integer seed for random number generators, fix seed to reproduce results :nsamples=1 : Number of samples to return total :batch_size=1 : Number of batches (only affects speed/memory). Must divide nsamples. :length=None : Number of tokens in generated text, if None (default), is determined by model hyperparameters :temperature=1 : Float value controlling randomness in boltzmann distribution. Lower temperature results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive. Higher temperature results in more random completions. :top_k=0 : Integer value controlling diversity. 1 means only 1 word is considered for each step (token), resulting in deterministic completions, while 40 means 40 words are considered at each step. 0 (default) is a special setting meaning no restrictions. 40 generally is a good value. :models_dir : path to parent folder containing model subfolders (i.e. contains the <model_name> folder) SYNOPSIS interactive_conditional_samples.py <flags> DESCRIPTION Interactively run the model :model_name=124M : String, which model to use :seed=None : Integer seed for random number generators, fix seed to reproduce results :nsamples=1 : Number of samples to return total :batch_size=1 : Number of batches (only affects speed/memory). Must divide nsamples. :length=None : Number of tokens in generated text, if None (default), is determined by model hyperparameters :temperature=1 : Float value controlling randomness in boltzmann distribution. Lower temperature results in less random completions. 
As the temperature approaches zero, the model will become deterministic and repetitive. Higher temperature results in more random completions. :top_k=0 : Integer value controlling diversity. 1 means only 1 word is considered for each step (token), resulting in deterministic completions, while 40 means 40 words are considered at each step. 0 (default) is a special setting meaning no restrictions. 40 generally is a good value. :models_dir : path to parent folder containing model subfolders (i.e. contains the <model_name> folder) FLAGS --model_name=MODEL_NAME --seed=SEED --nsamples=NSAMPLES --batch_size=BATCH_SIZE --length=LENGTH --temperature=TEMPERATURE --top_k=TOP_K --top_p=TOP_P --models_dir=MODELS_DIR !python3 src/interactive_conditional_samples.py --model_name "774M" --nsamples=15 --length=50 WARNING:tensorflow:From src/interactive_conditional_samples.py:57: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead. 2020-02-20 09:02:46.291928: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1 2020-02-20 09:02:46.313434: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-02-20 09:02:46.313825: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: name: Tesla P4 major: 6 minor: 1 memoryClockRate(GHz): 1.1135 pciBusID: 0000:00:04.0 2020-02-20 09:02:46.314133: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2020-02-20 09:02:46.315746: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10 2020-02-20 09:02:46.317338: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10 2020-02-20 09:02:46.317665: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10 2020-02-20 09:02:46.319148: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10 2020-02-20 09:02:46.320686: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10 2020-02-20 09:02:46.323862: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 2020-02-20 09:02:46.323995: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-02-20 09:02:46.324411: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-02-20 09:02:46.324760: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0 2020-02-20 09:02:46.325132: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX512F 2020-02-20 09:02:46.329525: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2000165000 Hz 2020-02-20 09:02:46.329737: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x3247100 initialized for platform Host (this does not guarantee that XLA will be used). 
Devices: 2020-02-20 09:02:46.329784: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version 2020-02-20 09:02:46.409155: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-02-20 09:02:46.409719: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x32472c0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices: 2020-02-20 09:02:46.409752: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Tesla P4, Compute Capability 6.1 2020-02-20 09:02:46.409938: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-02-20 09:02:46.410349: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: name: Tesla P4 major: 6 minor: 1 memoryClockRate(GHz): 1.1135 pciBusID: 0000:00:04.0 2020-02-20 09:02:46.410442: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2020-02-20 09:02:46.410473: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10 2020-02-20 09:02:46.410500: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10 2020-02-20 09:02:46.410530: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10 2020-02-20 09:02:46.410552: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10 2020-02-20 09:02:46.410580: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10 2020-02-20 09:02:46.410609: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 2020-02-20 09:02:46.410691: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-02-20 09:02:46.411149: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-02-20 09:02:46.411516: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0 2020-02-20 09:02:46.411604: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2020-02-20 09:02:46.412741: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix: 2020-02-20 09:02:46.412770: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165] 0 2020-02-20 09:02:46.412782: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0: N 2020-02-20 09:02:46.412892: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-02-20 09:02:46.413351: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, 
so returning NUMA node zero 2020-02-20 09:02:46.413743: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0. 2020-02-20 09:02:46.413788: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7123 MB memory) -> physical GPU (device: 0, name: Tesla P4, pci bus id: 0000:00:04.0, compute capability: 6.1) WARNING:tensorflow:From src/interactive_conditional_samples.py:58: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead. WARNING:tensorflow:From src/interactive_conditional_samples.py:60: The name tf.set_random_seed is deprecated. Please use tf.compat.v1.set_random_seed instead. WARNING:tensorflow:From /content/gpt-2/src/sample.py:51: The name tf.AUTO_REUSE is deprecated. Please use tf.compat.v1.AUTO_REUSE instead. WARNING:tensorflow:From /content/gpt-2/src/model.py:148: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead. WARNING:tensorflow:From /content/gpt-2/src/model.py:152: The name tf.get_variable is deprecated. Please use tf.compat.v1.get_variable instead. WARNING:tensorflow:From /content/gpt-2/src/model.py:36: The name tf.rsqrt is deprecated. Please use tf.math.rsqrt instead. WARNING:tensorflow:From /content/gpt-2/src/sample.py:64: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.cast` instead. WARNING:tensorflow:From /content/gpt-2/src/sample.py:39: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.where in 2.0, which has the same broadcast rule as np.where WARNING:tensorflow:From /content/gpt-2/src/sample.py:67: multinomial (from tensorflow.python.ops.random_ops) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.random.categorical` instead. WARNING:tensorflow:From src/interactive_conditional_samples.py:68: The name tf.train.Saver is deprecated. Please use tf.compat.v1.train.Saver instead. Model prompt >>> There once was a speedy Hare who bragged about how fast he could run. Tired of hearing him boast, the Tortoise challenged him to a race. All the animals in the forest gathered to watch. The Hare ran down the road for a while and then paused to rest. He looked back at the tortoise and cried out, "How do you expect to win this race when you are walking along at your slow, slow pace?" The Hare stretched himself out alongside the road and fell asleep, thinking, "There is plenty of time to relax." The Tortoise walked and walked, never ever stopping until he came to the finish line. The animals who were watching cheered so loudly for Tortoise that they woke up the Hare. The Hare stretched, yawned and began to run again, but it was too late. Tortoise had already crossed the finish line. TL;DR 2020-02-20 09:04:07.516815: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10 ======================================== SAMPLE 1 ======================================== : Hare claims he can cry out, "How are you supposed to run when you are full of energy?" Saving the Dragon -> Q&A -> Summary WorkshopActivity -> Restage Build Crawl 1.8 ======================================== SAMPLE 2 ======================================== ! 
I bet Tortoise would beat the Hare at that race!<|endoftext|>A husband and wife who were sitting on their sofa in a hotel in the United Arab Emirates pounced on a leopard two and a half years ago before it attacked them. ======================================== SAMPLE 3 ======================================== This word is pretty non-functional in Super Mario World. the epitomizes the stream of thought that can occur at the mere sight of anything (by having the verb with no modifier, for example.) as if... x-y phonetically recognized ======================================== SAMPLE 4 ======================================== : All the animals are tired of amusing the turtle while he runs around the forest. Tortoise cried and ran into the jaws of the tortoise. Hare replied sarcastically saying, "It is the tortoise who is slow!" (for those who ======================================== SAMPLE 5 ======================================== : An adult alligator shall set his slow running speed to the max and finish in a few seconds. Also, female tortoises have the same official berry harvest as male tortoises. Compound sentences have complicated meanings. In China, ======================================== SAMPLE 6 ======================================== : In fairytales, the slow, slow train wins the race. A Short Update (2014-7-27) I realized that mentioning the Kamisato Faction of Glory – a faction that seems to involve a new, ======================================== SAMPLE 7 ======================================== : You couldn't turn on the time limiter on bad devices. Now let me ask a question: Why does this happen on the PS Vita? But as for the pinball, when on the wrong panel, the ball can ======================================== SAMPLE 8 ======================================== - Hare came out faster than tortoise and thus won. To sum up: Tortoise ran to win and Hare got upset that he couldn't win. Grew up eating chewy cereal banana with jelly spawning inside her mouth Eggsprites ======================================== SAMPLE 9 ======================================== - Hare challenged tortoise to race and tortoise ran down road for insec.<|endoftext|>Defensive lineman Zach Banner is averaging 15 tackles per game during preseason - second best in the NFL. (Photo: Jerry Lai, USA TODAY Sports) ======================================== SAMPLE 10 ======================================== : Hare was too fast for the Tortoise, he had to slow down to run down the road.<|endoftext|>One Night Stand Eran Kadlec plays a go-getter with the Invisible Cavaliers, wakes up crying at his understudy ======================================== SAMPLE 11 ======================================== : • country dog stopped at fairy realm. • hind walks into fairy realm • massive consort activates secret of infinity • elephant and deer jump on tortoise RAW Paste Data TPDDelta Scala by cserrano http://stepan ======================================== SAMPLE 12 ======================================== http://imgur.com/a/9mHmn<|endoftext|>CREDIT: Everett Collection We first met Brad Bush on his cabaret shoe tour in the mid-'90s. Back then, he was not the polished soul that he is ======================================== SAMPLE 13 ======================================== Because it is possible to create a game in 3D without really touching the hardware. Episode 11 (??? - 60% unlocked) The next episode was not revealed until 5 years later. 
The narration didn't explain the Gordian knot ======================================== SAMPLE 14 ======================================== : Motor madness, Lane the intent cause break in time, getting first in the race because the rest weren't there. Page 7 Edit The Ace Combat 5th Generation (AC5G) is a turn-based, free- ======================================== SAMPLE 15 ======================================== : This reminds me that vegetarians (and others) are sometimes called gullible and fall for propaganda hype . They also get their opinions and goals from friends, without thinking. Confusing ? Consider this…This makes a much more descriptive recap: A ================================================================================ Model prompt >>> The Tortoise walked and walked, never ever stopping until he came to the finish line. The animals who were watching cheered so loudly for Tortoise that they woke up the Hare. The Hare stretched, yawned and began to run again, but it was too late. Tortoise had already crossed the finish line. So in conclusion ======================================== SAMPLE 1 ========================================  the Tortoise won.<|endoftext|>The Seventh Third Special Lecture Marcelo Mamenhofer UNIVERSITY OF AUSTRIA Centre for European Economic Research (CEER) Reforming the heavily ======================================== SAMPLE 2 ========================================  Tortoise was the winner. He made the winning song with his definition of as well as his beauty all flawless. It was impossible for him to miss THE lead. The Hare paid no attention to any present and simply led the final 30! ======================================== SAMPLE 3 ========================================  - beware the "too much respect" Tortoise. The Convoy (fact or fiction) Susie and Sara, just off school days, sit in the back of their car. In the evening, as busy as they both are ======================================== SAMPLE 4 ========================================  The Tortoise was a winner, Tomorrow Sickness will be a home run for all of the animals shown. It is a very simple exercise, the previous series of pictures clearly show how. The Tortoise hit the finish line with ======================================== SAMPLE 5 ======================================== this is thought experiment of flowering plant euphoria MiaSoon Shares: When the helpful British scientists Jane Where and John Hicks called a scientific conference on sunny garden plants and introduced them to the conference clerk. "...hence time flew past and the conference was in ======================================== SAMPLE 6 ======================================== : New Zealand, if you are raising animals this will be your bugaboo.<|endoftext|>For architects, the concept of a 'built environment' was crystallised in a famous lecture by Peter Saville which focussed on the concepts of 'form' ======================================== SAMPLE 7 ======================================== Left: The investigation The next major investigation of that night was carried out by the local public advocate Felix van Leeuwenhoek.  In the past, such cases of alleged journalistic journalism were investigated through a numerous independent investigative procedures and the ======================================== SAMPLE 8 ======================================== what don't you want: horses in your face or extra dice on the Grand Chessboard All right! 
This is a very important guide that will help you, but it's not in any way god about football and using the game ======================================== SAMPLE 9 ======================================== "Tortoise crossed the finish line and reached the knot of both Tetris boards." So do the rabbit and the bear walk, each one at its own pace, then pee. Where did all the other animals lead? Why, whenever possible, ======================================== SAMPLE 10 ======================================== All jokes aside, this all is a bit hard to relate to. What is it that happens when a non-human stretches and yawns all day, while a non-human closest to danger, gets pumped full of adrenaline and wants every second ======================================== SAMPLE 11 ========================================  Tortoise ran it's 15,000 mile run in one continuous stretch. 43 hours in total. "So, George what if we're not being watched? Won't the Martians see us crawling through space again? :)" "We ======================================== SAMPLE 12 ======================================== My statement about the Frog and the Hare The Hare is a profound key of the human condition namely the situation in which perception simplifies to exclude objectivity. Not the Crab inside that Worm is the other key point of the human condition from which ======================================== SAMPLE 13 ======================================== the race for man and man's best friend, the Carcass, was ended by the late recovery of the Tortoise. Advertisements<|endoftext|>Tom Perez Sees 'Credibility Flipped' In Voice Vote On Transportation — Will He Continue ======================================== SAMPLE 14 ========================================  - "Harmony."<|endoftext|>Thank you, Thomas! Enjoying some fellow redditors today?? I have saved plans before. Anyways, after I went to breakfast along with Jennifer , General Manager here at Ryerson , said.... ======================================== SAMPLE 15 ======================================== the shuttle cried sendsay and adieu, and the dead animal to be buried fresh some place, rich in its own right. 7 Addenda. Henny-Penny's condition was excellent. We left her soon after dusk and as promised set ================================================================================ Model prompt >>> his is the story of a young girl who was given an unusual task to coach her illiterate grandma, how to read. She was stunned at first with this request, but her experience in teaching and training her grandmother, gave her a different perspective on life. It changed her completely for better forever. The grandmother of the little girl had a great pleasure in life and it was hearing the stories narrated by her granddaughter. The old woman regretted that she could not have formal education and read her favorite stories. So, she made a request to her granddaughter to educate her. The story is plotted and narrated on different issues which are mostly overlooked in today’s world. For instance, how do you react for scoring more marks in the exam than you actually deserve; what would you do if you are discriminated? The story captures many beautiful moments of life. The book doesn’t tell you or make you oblige to what is right and what is wrong. Rather, it makes you take independent decisions based on the life experiences. The book is more than a story, as it carries a motivational facet. 
======================================== SAMPLE 1 ======================================== The characters are experienced and both young and old. The narrator (A, was also my college classmate still playing cricket in college) greatly affected me when t will grab the audience with his story of his love. The story and the characters keep us secure ======================================== SAMPLE 2 ======================================== If you get faulty assessment from the teacher, then you need this book to recover the faith of starting with a strong system of learning. Has it even a point to try to be a doctor, an animal trainer or an accountant? No. But, ======================================== SAMPLE 3 ======================================== Learners can rest on the big picture of the book. The authors' mastery of line and writing together (90’) is nothing short of astonishing. A agonizing part shocking to boys64 years back is the fact that the author accurately ======================================== SAMPLE 4 ======================================== Let us see what happens next. The story has a summary/slide show (slideshare.se 〉) , giving glimpses/picture of various problems one encounters in the work and the solutions he/she comes up ======================================== SAMPLE 5 ======================================== Instead of condemning people for their capacity to learn’s best use: be unique , its preach a message of compassion!<|endoftext|>Rahim is the editor of Literal Translation's website and contributor to the 23/3/17 Literal Translation News ======================================== SAMPLE 6 ======================================== Book check investment from: Amazon | Indie Bound | Audible<|endoftext|>Cancer and Reproductive Harm- Cancer and Reproductive Harm- www.P65Warnings.ca.gov Crosman XM-L powerful ======================================== SAMPLE 7 ======================================== A beautiful book to be adopted by everyone. In all its form it is a real life story. The voice of the grandma is also inspiring because she makes you understand why being independent is so important. It is pretty captivating j & you may just ======================================== SAMPLE 8 ======================================== The author describes as unreal what happened to her grandmother after trying her unorthodox skills. And that it will continue to haunt her for the rest of her life. Dietysm record And here's the story: It was Moriya who had this ======================================== SAMPLE 9 ======================================== Public Voice believes that we must do more to adopt, promote and support technology as a healing medium. By showing the cases of people with disabilities, dyslexia is offered as an inspiration and likelihood to croak-out. We can show more ======================================== SAMPLE 10 ======================================== Sophisticated and suspenseful, this book inspires introspection and helps the reader grapple with a true dilemma in today's world. Posted by ataaya at 6:23 PM<|endoftext|>(Truthstream Media.com) Photo by Zach ======================================== SAMPLE 11 ======================================== It express-sos a person is indiligently living life and is daring to live hand in hand with his/her grandmother. It tells you to overcome your fear and make the right decision everytime. 
This book is for obsessive adults, ======================================== SAMPLE 12 ======================================== When you are gonna try something, it rings loud and true but when you are due for major changes, you might not be ready even after 2nd attempt. And the life lessons inside are powerful and sometimes cool. The little girl who used to ======================================== SAMPLE 13 ======================================== And this is exactly what makes this book such a great read. ** This is a must read, or any unpublished story. Although it is not yet translated into helping vers as many as possible. But with every update, now even people across India ======================================== SAMPLE 14 ======================================== It gives you the life experiences to follow up on unplanned thoughts and actions, so that you can be successful; in times of sorrow or out of fear. I would definitely recommend this book.<|endoftext|>Description Novak is a wallet that ======================================== SAMPLE 15 ======================================== It teaches $upport and commitment by introducing gaining respect and being loved and being seen. And there is nothing wrong with that…except its standardisation. With you making certain decisions based on the story, you are shown ================================================================================ Model prompt >>> Traceback (most recent call last): File "/usr/lib/python3.6/contextlib.py", line 99, in __exit__ self.gen.throw(type, value, traceback) File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py", line 5480, in get_controller yield g File "src/interactive_conditional_samples.py", line 73, in interact_model raw_text = input("Model prompt >>> ") KeyboardInterrupt During handling of the above exception, another exception occurred: Traceback (most recent call last): File "src/interactive_conditional_samples.py", line 91, in <module> fire.Fire(interact_model) File "/usr/local/lib/python3.6/dist-packages/fire/core.py", line 138, in Fire component_trace = _Fire(component, args, parsed_flag_args, context, name) File "/usr/local/lib/python3.6/dist-packages/fire/core.py", line 471, in _Fire target=component.__name__) File "/usr/local/lib/python3.6/dist-packages/fire/core.py", line 675, in _CallAndUpdateTrace component = fn(*varargs, **kwargs) File "src/interactive_conditional_samples.py", line 88, in interact_model print("=" * 80) File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1633, in __exit__ close_thread.start() File "/usr/lib/python3.6/threading.py", line 851, in start self._started.wait() File "/usr/lib/python3.6/threading.py", line 551, in wait signaled = self._cond.wait(timeout) File "/usr/lib/python3.6/threading.py", line 295, in wait waiter.acquire() KeyboardInterrupt lsCONTRIBUTORS.md Dockerfile.gpu LICENSE README.md DEVELOPERS.md domains.txt model_card.md requirements.txt Dockerfile.cpu download_model.py models/ src/ cd ../content lsgpt-2/ sample_data/ _____no_output_____ </code>
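The help text shown earlier for `interactive_conditional_samples.py` describes what the `--temperature` and `--top_k` flags control but not the mechanics behind them. The sketch below is a minimal NumPy illustration of that sampling idea, not the actual TensorFlow implementation in `src/sample.py`; the function name and the toy logits are made up for the example.
<code>
import numpy as np

def sample_next_token(logits, temperature=1.0, top_k=0):
    # Illustrative only: mirrors the idea behind --temperature and --top_k.
    logits = np.asarray(logits, dtype=np.float64)
    if top_k > 0:
        # Keep only the top_k highest-scoring tokens; mask out the rest.
        cutoff = np.sort(logits)[-top_k]
        logits = np.where(logits < cutoff, -np.inf, logits)
    # Low temperature -> sharper, more deterministic; high temperature -> more random.
    logits = logits / max(temperature, 1e-8)
    probs = np.exp(logits - logits.max())
    probs = probs / probs.sum()
    return int(np.random.choice(len(probs), p=probs))

# Toy vocabulary of 5 tokens
print(sample_next_token([2.0, 1.0, 0.5, 0.1, -1.0], temperature=0.7, top_k=2))
</code>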
{ "repository": "mojito9542/gpt-2", "path": "GPT-2.ipynb", "matched_keywords": [ "STAR", "evolution" ], "stars": null, "size": 74913, "hexsha": "4825b1e6b1d80fb3cd947426ff5bcee270bcbabc", "max_line_length": 1354, "avg_line_length": 76.2085452696, "alphanum_fraction": 0.5813410223 }
# Notebook from jjc2718/generic-expression-patterns Path: new_experiment/archive/debug.ipynb # Debug During a call with Casey and Jim, they noticed two unusual things in the generic_gene_summary table: * Not all values in the `num_simulated` column were equal to 25, which should be the case * There are some genes that do not have any DE statistics reported. This is the case using a template experiment that is included in the recount2 training compendium, so it's not just an issue using an external template experiment. These two issues were NOT observed in the other generic_gene_summary tables (training compendium = Powers et al. or Pseudomonas datasets). See [example summary tables](https://docs.google.com/spreadsheets/d/1aqSPTLd5bXjYOBoAjG7bM8jpy5ynnCpG0oBC2zGzozc/edit?usp=sharing) Given that this is only observed in the recount2 training dataset, we suspect that this is an issue with mapping genes from Ensembl IDs (raw data) to HGNC IDs (needed to compare against the validation dataset)_____no_output_____ <code> %load_ext autoreload %load_ext rpy2.ipython %autoreload 2 import os import re import pandas as pd from ponyo import utils from generic_expression_patterns_modules import process, new_experiment_process_____no_output_____base_dir = os.path.abspath(os.path.join(os.getcwd(), "../")) # Read in config variables config_filename = os.path.abspath( os.path.join(base_dir, "configs", "config_human_general.tsv") ) params = utils.read_config(config_filename)_____no_output_____# Load params # Output files of recount2 template experiment data raw_template_recount2_filename = params['raw_template_filename'] mapped_template_recount2_filename = os.path.join( base_dir, "human_general_analysis", params['processed_template_filename'] ) # Local directory to store intermediate files local_dir = params['local_dir'] # ID for template experiment # This ID will be used to label new simulated experiments project_id = params['project_id']_____no_output_____base_dir = os.path.abspath(os.path.join(os.getcwd(), "../")) # Read in config variables config_filename = os.path.abspath( os.path.join(base_dir, "configs", "config_new_experiment.tsv") ) params = utils.read_config(config_filename)_____no_output_____# Load params # Output files of non-recount2 template experiment data raw_template_nonrecount2_filename = params['raw_template_filename'] mapped_template_nonrecount2_filename = params['processed_template_filename']_____no_output_____
ENSG00000169717 --> ACTRT2 * ENSG00000184895 --> SRY_____no_output_____ <code> ensembl_ids = ["ENSG00000169717", "ENSG00000184895", "ENSG00000124232", "ENSG00000261713", "ENSG00000186818", "ENSG00000160882" ] ensembl_version_ids = raw_template_recount2.columns for sub in ensembl_ids: for x in ensembl_version_ids: if re.search(sub, x): print(x)ENSG00000169717.6 ENSG00000184895.7 ENSG00000124232.10 ENSG00000261713.6 ENSG00000186818.12 ENSG00000160882.11 raw_template_recount2[["ENSG00000169717.6", "ENSG00000184895.7"]].sum()_____no_output_____mapped_template_recount2[["ACTRT2","SRY"]].sum()_____no_output_____ </code> It looks like the reason for the missing values in the template experiment could be that these genes have all 0 counts_____no_output_____## Case 2: Genes that have all statistics present but fewer than 25 simulated experiments using the recount2 template. These genes also have only simulated statistics using the non-recount2 template experiment: * ENSG00000186818 --> LILRB4 * ENSG00000160882 --> CYP11B1_____no_output_____ <code> raw_template_nonrecount2[["LILRB4","CYP11B1"]].sum()_____no_output_____mapped_template_nonrecount2[["LILRB4","CYP11B1"]].sum()_____no_output_____ </code> So far, it seems that those genes that are missing template statistics have all 0 counts in the template experiment._____no_output_____ <code> raw_template_recount2[["ENSG00000186818.12", "ENSG00000160882.11"]].sum()_____no_output_____mapped_template_recount2[["LILRB4","CYP11B1"]].sum()_____no_output_____ </code> Overall, there isn't an obvious trend among these genes missing some number of simulated experiments, so let's try looking at the simulated experiments. At this point we suspect that the missing simulated experiments are those where genes have all 0 counts._____no_output_____ <code> # Get list of files simulated_dir = os.path.join( local_dir, "pseudo_experiment", ) simulated_filename_list = [] for file in os.listdir(simulated_dir): if (project_id in file) and ("simulated" in file) and ("encoded" not in file): simulated_filename_list.append(os.path.join(simulated_dir,file))_____no_output_____assert len(simulated_filename_list) == 25_____no_output_____# For each simulated experiment, check how many have all 0 counts for this gene # Is this number the same number of missing simulated experiments?
counter_LILRB4 = 0 counter_CYP11B1 = 0 for filename in simulated_filename_list: simulated_data = pd.read_csv(filename, sep="\t", index_col=0, header=0) if simulated_data["LILRB4"].sum() == 0: counter_LILRB4 += 1 if simulated_data["CYP11B1"].sum() == 0: counter_CYP11B1 += 1 # Verified LILRB4 to be missing 2 experiments (23 total experiments) # Verified CYP11B1 to be missing 8 experiments (17 total experiments) # Can look this up in the Google sheet print(counter_LILRB4, counter_CYP11B1)2 8 </code> ## Case 3: Genes that do not have a p-value using non-recount2 template experiment: * ENSG00000124232 --> RBPJL * ENSG00000261713 --> SSTR5-AS1_____no_output_____Following the theme observed in Cases 1 and 2, we suspect that these "missing" p-values actually indicate a p-value of 0_____no_output_____ <code> # Look up these values in the DE stats output files DE_stats_dir = os.path.join( local_dir, "DE_stats" ) template_nonrecount2_DE_stats_filename = os.path.join( DE_stats_dir, "DE_stats_template_data_cis-par-KU1919_real.txt" ) template_nonrecount2_DE_stats = pd.read_csv( template_nonrecount2_DE_stats_filename, sep="\t", index_col=0, header=0 ) template_nonrecount2_DE_stats.loc["RBPJL"]_____no_output_____template_nonrecount2_DE_stats.loc["SSTR5.AS1"]_____no_output_____ </code> According to this [link](https://bioconductor.org/packages/release/bioc/vignettes/DESeq2/inst/doc/DESeq2.html#pvaluesNA), genes with NA adjusted p-values indicate those genes that have been automatically filtered by DESeq2 as very likely not to be significant. Let's test calling DESeq2 with this filtering turned off_____no_output_____ <code> transposed_template_filename = "/home/alexandra/Documents/Data/Generic_expression_patterns/Costello_BladderCancer_ResistantCells_Counts_12-8-20_transposed.txt" new_experiment_process.transpose_save(raw_template_nonrecount2_filename, transposed_template_filename)_____no_output_____project_id = "cis-par-KU1919" local_dir = "/home/alexandra/Documents/Data/Generic_expression_patterns/" mapped_compendium_filename = "/home/alexandra/Documents/Data/Generic_expression_patterns/mapped_recount2_compendium.tsv"_____no_output_____# Check that the feature space matches between template experiment and VAE model. # (i.e. ensure genes in template and VAE model are the same).
mapped_template_experiment = new_experiment_process.compare_match_features( transposed_template_filename, mapped_compendium_filename ) mapped_template_filename = transposed_template_filename(72, 58528) (49651, 17755) # Load metadata file with processing information sample_id_metadata_filename = os.path.join( "data", "metadata", f"{project_id}_process_samples.tsv" ) # Read in metadata metadata = pd.read_csv(sample_id_metadata_filename, sep='\t', header=0, index_col=0) # Get samples to be dropped sample_ids_to_drop = list(metadata[metadata["processing"] == "drop"].index)_____no_output_____# Modify template experiment process.subset_samples_template( mapped_template_filename, sample_ids_to_drop, )_____no_output_____process.recast_int_template(mapped_template_filename)_____no_output_____# Load metadata file with grouping assignments for samples metadata_filename = os.path.join( "data", "metadata", f"{project_id}_groups.tsv" ) # Check whether ordering of sample ids is consistent between gene expression data and metadata process.compare_and_reorder_samples(mapped_template_filename, metadata_filename)sample ids are ordered correctly %%R -i metadata_filename -i project_id -i mapped_template_filename -i local_dir -i base_dir library("limma") library("DESeq2") # Manually change DESeq2 call with additional parameter get_DE_stats_DESeq <- function(metadata_file, experiment_id, expression_file, data_type, local_dir, run) { # This function performs DE analysis using DESeq. # Expression data in expression_file are grouped based on metadata_file # # Arguments # --------- # metadata_file: str # File containing mapping between sample id and group # # experiment_id: str # Experiment id used to label saved output filee # # expression_file: str # File containing gene expression data # # data_type: str # Either 'template' or 'simulated' to label saved output file # # local_dir: str # Directory to save output files to # # run: str # Used as identifier for different simulated experiments expression_data <- t(as.matrix(read.csv(expression_file, sep="\t", header=TRUE, row.names=1))) metadata <- as.matrix(read.csv(metadata_file, sep="\t", header=TRUE, row.names=1)) print("Checking sample ordering...") print(all.equal(colnames(expression_data), rownames(metadata))) group <- interaction(metadata[,1]) mm <- model.matrix(~0 + group) #print(head(expression_data)) ddset <- DESeqDataSetFromMatrix(expression_data, colData=metadata, design = ~group) deseq_object <- DESeq(ddset) deseq_results <- results(deseq_object, independentFiltering=FALSE) deseq_results_df <- as.data.frame(deseq_results) # Save summary statistics of DEGs if (data_type == "template") { out_file = paste(local_dir, "DE_stats/DE_stats_template_data_", experiment_id,"_", run, ".txt", sep="") } else if (data_type == "simulated") { out_file = paste(local_dir, "DE_stats/DE_stats_simulated_data_", experiment_id,"_", run, ".txt", sep="") } write.table(deseq_results_df, file = out_file, row.names = T, sep = "\t", quote = F) } # File created: "<local_dir>/DE_stats/DE_stats_template_data_SRP012656_real.txt" get_DE_stats_DESeq(metadata_filename, project_id, mapped_template_filename, "template", local_dir, "real_without_filtering")/home/alexandra/anaconda3/envs/generic_expression/lib/python3.7/site-packages/rpy2/rinterface/__init__.py:146: RRuntimeWarning: estimating size factors warnings.warn(x, RRuntimeWarning) /home/alexandra/anaconda3/envs/generic_expression/lib/python3.7/site-packages/rpy2/rinterface/__init__.py:146: RRuntimeWarning: estimating dispersions 
warnings.warn(x, RRuntimeWarning) /home/alexandra/anaconda3/envs/generic_expression/lib/python3.7/site-packages/rpy2/rinterface/__init__.py:146: RRuntimeWarning: gene-wise dispersion estimates warnings.warn(x, RRuntimeWarning) /home/alexandra/anaconda3/envs/generic_expression/lib/python3.7/site-packages/rpy2/rinterface/__init__.py:146: RRuntimeWarning: mean-dispersion relationship warnings.warn(x, RRuntimeWarning) /home/alexandra/anaconda3/envs/generic_expression/lib/python3.7/site-packages/rpy2/rinterface/__init__.py:146: RRuntimeWarning: final dispersion estimates warnings.warn(x, RRuntimeWarning) /home/alexandra/anaconda3/envs/generic_expression/lib/python3.7/site-packages/rpy2/rinterface/__init__.py:146: RRuntimeWarning: fitting model and testing warnings.warn(x, RRuntimeWarning) template_nonrecount2_nofilter_DE_stats_filename = os.path.join( DE_stats_dir, "DE_stats_template_data_cis-par-KU1919_real_without_filtering.txt" ) template_nonrecount2_nofilter_DE_stats = pd.read_csv( template_nonrecount2_nofilter_DE_stats_filename, sep="\t", index_col=0, header=0 ) template_nonrecount2_nofilter_DE_stats.loc["RBPJL"]_____no_output_____template_nonrecount2_nofilter_DE_stats.loc["SSTR5.AS1"]_____no_output_____ </code> **Takeaways:** * Case 1: genes with only simulated statistics occur because the template experiment has all 0 counts for those genes * Case 2: for genes with fewer than 25 simulated experiments, some simulated experiments have all 0 counts for those genes * Case 3 (only found using non-recount2 template): genes with a missing p-value in the template experiment have NaN output from DESeq2, which indicates those genes that have been automatically filtered by DESeq2 as very likely not to be significant: https://bioconductor.org/packages/release/bioc/vignettes/DESeq2/inst/doc/DESeq2.html#pvaluesNA **Proposed solution:** * Cases 1 and 2: Remove genes with 0 counts across all samples. I will also add an option to additionally remove genes that have mean counts < a user-specified threshold (for Jim's case). This should get rid of those rows with missing values in template statistic columns. These genes will be missing in the validation against Crow et al., so I'll need to ignore these genes in the rank comparison. Any gene removed from the simulated experiments will get a lower `number of simulated experiments` reported, so I will need to document this for the user. * Case 3: (option 1) I can change the parameter setting to turn off this autofiltering and it will perform all tests and report them. (option 2) Replace padj values = "NA" with "Filtered by DESeq2" and document that this means that DESeq2 pre-filtered these genes as likely not being significant to help increase detection power. For now I am using option 1. DESeq2 documentation states: *filter out those tests from the procedure that have no, or little chance of showing significant evidence, without even looking at their test statistic. Typically, this results in increased detection power at the same experiment-wide type I error, as measured in terms of the false discovery rate.* *For weakly expressed genes, we have no chance of seeing differential expression, because the low read counts suffer from so high Poisson noise that any biological effect is drowned in the uncertainties from the read counting*_____no_output_____
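As a rough illustration of the proposed fix for Cases 1 and 2, the snippet below sketches how genes with all-zero counts (and, optionally, a mean count below some threshold) could be dropped from a samples-by-genes count table before running the pipeline. The helper name and the threshold argument are hypothetical and are not existing functions in `generic_expression_patterns_modules`; the sketch assumes samples are rows and genes are columns, as in the template and simulated tables above.
<code>
def filter_low_count_genes(counts_df, min_mean_count=0.0):
    # Hypothetical helper: counts_df is a pandas DataFrame of samples x genes.
    # Drop genes with 0 counts across all samples (Cases 1 and 2).
    kept = counts_df.loc[:, counts_df.sum(axis=0) > 0]
    # Optionally also drop weakly expressed genes (mean count below threshold).
    if min_mean_count > 0:
        kept = kept.loc[:, kept.mean(axis=0) >= min_mean_count]
    return kept

# e.g. filtered = filter_low_count_genes(raw_template_nonrecount2, min_mean_count=1.0)
</code>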
{ "repository": "jjc2718/generic-expression-patterns", "path": "new_experiment/archive/debug.ipynb", "matched_keywords": [ "limma", "DESeq2" ], "stars": 8, "size": 59698, "hexsha": "48275a8766cd9de2a51a769a76b27ef62d780af9", "max_line_length": 556, "avg_line_length": 33.6516347238, "alphanum_fraction": 0.4395289624 }
# Notebook from arm61/pylj Path: examples/molecular_dynamics/intro_to_molecular_dynamics.ipynb <code> import warnings import matplotlib.pyplot as plt import numpy as np from pylj import md, util, sample, forcefields warnings.filterwarnings('ignore')_____no_output_____ </code> # Atomistic simulation The use of computers in chemistry is becoming more common as computers are increasing in power and the algorithms are becoming more accurate and efficient. Computer simulation of atomic and molecular species takes two "flavours": - **Quantum calculations**: where approximations are made to the Schrödinger equation to allow for its calculation for a multi-electron system (more information about this can be found in Atkins & Friedman's *Molecular Quantum Mechanics*), - **Classical simulations**: where model interactions between atoms are developed to allow chemical systems to be simulated, This exercise will focus on the latter. ## Classical simulation One of the most popular models used to simulate interparticle interactions is known as the **Lennard-Jones** function. This models the London dispersion interactions between atoms. Hopefully the London dispersion interactions are familiar as consisting of: - **van der Waals attraction**: the attraction between atoms that occurs as a result of the formation of instantenous dipole formation, - **Pauli's exclusion principle**: the repulsion which stops atoms overlapping as no two electrons can have the same quantum state. This means that the Lennard-Jones function is attractive when particles are close enough to induce dipole formation but **very** repulsive when the particles are too close. The Lennard-Jones function has the following form, $$ E(r) = \frac{A}{r^{12}} - \frac{B}{r^6}, $$ where $E(r)$ is the potential energy at a given distance $r$, while $A$ and $B$ are parameters specific to the nature of the given interaction. In the cell below, write a function to calculate the energy of an interaction between two argon atoms for given range of distances using the Lennard-Jones function as the model._____no_output_____ <code> def lennard_jones(A, B, r): return ◽◽◽_____no_output_____ </code> Then use this function to plot the potential energy from $r=3$ Å to $r=8$ Å, and discuss the shape and sign of this function in terms of the attractive and repulsive regimes, when the values of ε and σ are as follows. | A/$\times10^{-134}$Jm$^{12}$ | B/$\times10^{-78}$Jm$^6$ | |:------:|:------:| | 1.36 | 9.27 |_____no_output_____ <code> r = np.linspace( ◽◽◽, ◽◽◽, ◽◽◽ ) E = ◽◽◽ plt.plot( ◽◽◽, ◽◽◽ ) plt.xlabel( ◽◽◽ ) plt.ylabel( ◽◽◽ ) plt.show()_____no_output_____ </code> We are going to use the software pylj. This is a Python library, which means it is a collection of functions to enable the simulation of molecular dynamics. An example of a function in pylj is the `pairwise.lennard_jones_energy`, which can be used similar to the `lennard_jones` function that you have defined above._____no_output_____ <code> r = np.linspace( 3e-10, 8e-10, 100 ) E = forcefields.lennard_jones(r, [1.36e-134, 9.27e-78]) %matplotlib widget plt.plot( r, E ) plt.xlabel( '$r/m$' ) plt.ylabel( '$E/J$' ) plt.show()_____no_output_____ </code> In addition to the functions associated directly with the molecular dynamics simulations, pylj is also useful for the visualisation of simulations (an example of a visualiation environment in pylj is shown below). <img src="fig1.png" width="500px"> *Figure 1. 
The Interactions sampling class in pylj.* In this exercise, we will use pylj to help to build a molecular dynamics simulation where the particles interact via a Lennard-Jones potential. ## Molecular dynamics Molecular dynamics or MD is the process of using Newton's equations of motion to probe the dynamical nature of a given system, the system that we are interested in is the interaction of argon atoms through a Lennard-Jones potential. The algorithm that is used in molecular dynamics simulations is as follows: 1. initialise the system, 2. start the clock, 3. calculate the forces of each particle, 4. integrate Newton's equations of motion to step forward in time, 5. sample the system, 6. go to 3. This process continues for as long as the scientist is interested in. ### Initialisation Lets try and use pylj to initialise our system, to do this we use the function `md.initialise`. This function takes 4 inputs: - number of particles, - temperature of the simulation, - size of the simulation cell, - how the particles should be distributed. The first line is to set up the visualisation environment within the Jupyter notebook, and the third line then sets and plots the particular environment that you want (in this case the `JustCell` environment)._____no_output_____ <code> %matplotlib widget simulation = md.initialise(16, 300, 50, 'square') sample_system = sample.JustCell(simulation)_____no_output_____ </code> In the above example, a `square` distribution of particles is assigned, however, it is also possible to distribute the particles randomly through the cell (with the keyword `random`). Try this out in the cell above and comment below on why this might cause issues for the molecular dynamics simulation. _____no_output_____ Another important aspect of the initialisation process is to assign the initial velocities for each of the particles. This is achieved using the following equation, $$ \mathbf{v}_i = (n_i - \bar{n}) \sqrt{\frac{2T}{\sum_i^N n^2}}, $$ where $n_i$ is a series of numbers of length $N$, where $N$ is the number of particles, drawn from a uniform distribution between 0 and 1, $\bar{n}$ is the mean $n_i$, and $T$ is the initial temperature of the system. The range of values for the velocities can be found for the system above using the following command._____no_output_____ <code> print(simulation.particles['xvelocity']) # m/s print(simulation.particles['yvelocity']) # m/s_____no_output_____ </code> Having given each particle an initial position and velocity, the system is initialised. ### Calculate forces The next stage is to calculate the forces on each of the particles. The forces on each of the atoms can be force directly from the energy (which is given by the Lennard-Jones function discussed above), as the force is the negative of the first derivative of the energy with respect to the distance, $$ \mathbf{f} = -\frac{\partial E}{\partial r}. $$ The force of a given interaction can be found using the `forcefields.lennard(dr, constants, force=True)` function available in pylj, below plot the force for the interaction between two argon atoms as was completed for the energy above. _____no_output_____ <code> r = ◽◽◽ f = ◽◽◽ %matplotlib widget plt.plot( ◽◽◽, ◽◽◽ ) plt.xlabel( ◽◽◽ ) plt.ylabel( ◽◽◽ ) plt.show()_____no_output_____ </code> The knowledge of the forces is then converted to information about the acceleration on each particle using Newton's second law, $$ \mathbf{f} = m\mathbf{a}, $$ where $m$ is the mass of the particle. 
It is possible to calculate the forces on all of the particles within a system in pylj using the `compute_force` function, as shown below. _____no_output_____ <code> simulation.compute_force() print(simulation.particles['xacceleration']) # m/s2 print(simulation.particles['yacceleration']) # m/s2_____no_output_____ </code> ### Integration With knowledge of the particle positions, velocities and accelerations it is now possible to make use of Newton's equations of motion to move forward in time. This is achieved using an *integrator*, which integrates Newton's equations of motion. A [wide variety](https://en.wikipedia.org/wiki/Molecular_dynamics#Integrators) of MD integrators exist, but we will discuss one of the simplest, known as the Verlet integrator. This has the form, $$ \mathbf{x}_1 = \mathbf{x}_0 + \mathbf{v}_0\Delta t + \frac{1}{2}\mathbf{a}_0 \Delta t^2, $$ for the first step and, $$ \mathbf{x}_{n+1} = 2\mathbf{x}_n - \mathbf{x}_{n-1} + \mathbf{a}_0 \Delta t^2, $$ for subsequent steps. Below, define a function to perform the integration in either the *x* or *y* dimension using the Verlet integrator. _____no_output_____ <code> def verlet(position, previous_position, velocity, acceleration, timestep, i): if i == 0: ◽◽◽ else: ◽◽◽ _____no_output_____ </code> ### Sampling Following the integration, the next stage is to sample the system. Currently we are only interested in plotting the particles positions, this can be achieved using the `update` function (assuming that the sampling environment has been defined). _____no_output_____ <code> %matplotlib widget simulation = md.initialise(16, 300, 50, 'square') sample_system = sample.JustCell(simulation) simulation.compute_force() ◽◽◽ = verlet(◽◽◽, ◽◽◽, ◽◽◽, ◽◽◽, ◽◽◽, ◽◽◽) sample_system.update(simulation)_____no_output_____ </code> It might be hard to see the change, try making the system update on a loop (essentially carrying out step 6 of the algorithm) and see is the MD simulation is working (it is probably best to only update the plot occasionally, as this is the slowest part of the method, consider how this may be done)._____no_output_____ <code> # Cell for building MD simulation. _____no_output_____ </code>
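For reference, this is one possible way the `verlet` skeleton above could be completed, written directly from the two update rules quoted in the Integration section. Treat it as an illustrative sketch rather than the intended model answer; returning the pair (new position, current position) is an assumption about how the calling cell might keep track of the previous step.
<code>
def verlet(position, previous_position, velocity, acceleration, timestep, i):
    # position, velocity and acceleration are NumPy arrays for one dimension (x or y).
    if i == 0:
        # First step: x1 = x0 + v0*dt + 0.5*a0*dt^2
        new_position = position + velocity * timestep + 0.5 * acceleration * timestep ** 2
    else:
        # Subsequent steps: x_{n+1} = 2*x_n - x_{n-1} + a*dt^2
        new_position = 2 * position - previous_position + acceleration * timestep ** 2
    return new_position, position
</code>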
{ "repository": "arm61/pylj", "path": "examples/molecular_dynamics/intro_to_molecular_dynamics.ipynb", "matched_keywords": [ "molecular dynamics" ], "stars": 18, "size": 13367, "hexsha": "4827b9609a5b2f893f75acf8d32d6ad4e123defe", "max_line_length": 450, "avg_line_length": 36.9254143646, "alphanum_fraction": 0.6231016683 }
# Notebook from UPbook-innovations/nlu Path: examples/colab/component_examples/classifiers/sentiment_classification_movies.ipynb ![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/nlu/blob/master/examples/colab/component_examples/classifiers/sentiment_classification_movies.ipynb) # Sentiment Classification with NLU for Movies Based on IMDB dataset The Sentiment classifier model uses universal sentence embeddings and is trained with the classifierdl algorithm provided by Spark NLP. # 1. Install Java and NLU_____no_output_____ <code> import os ! apt-get update -qq > /dev/null # Install java ! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64" os.environ["PATH"] = os.environ["JAVA_HOME"] + "/bin:" + os.environ["PATH"] ! pip install nlu pyspark==2.4.7 > /dev/null _____no_output_____ </code> # 2. Load the NLU sentiment pipeline and predict on a sample string_____no_output_____ <code> import nlu sentiment_pipe = nlu.load('en.sentiment.imdb') sentiment_pipe.predict('The movie matrix was pretty cool ')analyze_sentimentdl_use_imdb download started this may take some time. Approx size to download 935.8 MB [OK!] </code> # 3. Define a list of String for predictions_____no_output_____ <code> movie_reviews = [ "I thought this was a wonderful way to spend time on a too hot summer weekend, sitting in the air conditioned theater and watching a light-hearted comedy. The plot is simplistic, but the dialogue is witty and the characters are likable (even the well bread suspected serial killer). While some may be disappointed when they realize this is not Match Point 2: Risk Addiction, I thought it was proof that Woody Allen is still fully in control of the style many of us have grown to love.<br /><br />This was the most I'd laughed at one of Woody's comedies in years (dare I say a decade?). While I've never been impressed with Scarlet Johanson, in this she managed to tone down her 'sexy' image and jumped right into a average, but spirited young woman.<br /><br />This may not be the crown jewel of his career, but it was wittier than 'Devil Wears Prada' and more interesting than 'Superman' a great comedy to go see with friends.", "Basically there's a family where a little boy (Jake) thinks there's a zombie in his closet & his parents are fighting all the time.<br /><br />This movie is slower than a soap opera... and suddenly, Jake decides to become Rambo and kill the zombie.<br /><br />OK, first of all when you're going to make a film you must Decide if its a thriller or a drama! As a drama the movie is watchable. Parents are divorcing & arguing like in real life. And then we have Jake with his closet which totally ruins all the film! I expected to see a BOOGEYMAN similar movie, and instead i watched a drama with some meaningless thriller spots.<br /><br />3 out of 10 just for the well playing parents & descent dialogs. As for the shots with Jake: just ignore them.", "Petter Mattei's 'Love in the Time of Money' is a visually stunning film to watch. Mr. Mattei offers us a vivid portrait about human relations. This is a movie that seems to be telling us what money, power and success do to people in the different situations we encounter. 
<br /><br />This being a variation on the Arthur Schnitzler's play about the same theme, the director transfers the action to the present time New York where all these different characters meet and connect. Each one is connected in one way, or another to the next person, but no one seems to know the previous point of contact. Stylishly, the film has a sophisticated luxurious look. We are taken to see how these people live and the world they live in their own habitat.<br /><br />The only thing one gets out of all these souls in the picture is the different stages of loneliness each one inhabits. A big city is not exactly the best place in which human relations find sincere fulfillment, as one discerns is the case with most of the people we encounter.<br /><br />The acting is good under Mr. Mattei's direction. Steve Buscemi, Rosario Dawson, Carol Kane, Michael Imperioli, Adrian Grenier, and the rest of the talented cast, make these characters come alive.<br /><br />We wish Mr. Mattei good luck and await anxiously for his next work.", "Probably my all-time favorite movie, a story of selflessness, sacrifice and dedication to a noble cause, but it's not preachy or boring. It just never gets old, despite my having seen it some 15 or more times in the last 25 years. Paul Lukas' performance brings tears to my eyes, and Bette Davis, in one of her very few truly sympathetic roles, is a delight. The kids are, as grandma says, more like 'dressed-up midgets' than children, but that only makes them more fun to watch. And the mother's slow awakening to what's happening in the world and under her own roof is believable and startling. If I had a dozen thumbs, they'd all be 'up' for this movie.", "I sure would like to see a resurrection of a up dated Seahunt series with the tech they have today it would bring back the kid excitement in me.I grew up on black and white TV and Seahunt with Gunsmoke were my hero's every week.You have my vote for a comeback of a new sea hunt.We need a change of pace in TV and this would work for a world of under water adventure.Oh by the way thank you for an outlet like this to view many viewpoints about TV and the many movies.So any ole way I believe I've got what I wanna say.Would be nice to read some more plus points about sea hunt.If my rhymes would be 10 lines would you let me submit,or leave me out to be in doubt and have me to quit,If this is so then I must go so lets do it.", "This show was an amazing, fresh & innovative idea in the 70's when it first aired. The first 7 or 8 years were brilliant, but things dropped off after that. By 1990, the show was not really funny anymore, and it's continued its decline further to the complete waste of time it is today.<br /><br />It's truly disgraceful how far this show has fallen. The writing is painfully bad, the performances are almost as bad - if not for the mildly entertaining respite of the guest-hosts, this show probably wouldn't still be on the air. I find it so hard to believe that the same creator that hand-selected the original cast also chose the band of hacks that followed. How can one recognize such brilliance and then see fit to replace it with such mediocrity? I felt I must give 2 stars out of respect for the original cast that made this show such a huge success. As it is now, the show is just awful. I can't believe it's still on the air.", "Encouraged by the positive comments about this film on here I was looking forward to watching this film. Bad mistake. 
I've seen 950+ films and this is truly one of the worst of them - it's awful in almost every way: editing, pacing, storyline, 'acting,' soundtrack (the film's only song - a lame country tune - is played no less than four times). The film looks cheap and nasty and is boring in the extreme. Rarely have I been so happy to see the end credits of a film. <br /><br />The only thing that prevents me giving this a 1-score is Harvey Keitel - while this is far from his best performance he at least seems to be making a bit of an effort. One for Keitel obsessives only.", "If you like original gut wrenching laughter you will like this movie. If you are young or old then you will love this movie, hell even my mom liked it.<br /><br />Great Camp!!!", "Phil the Alien is one of those quirky films where the humour is based around the oddness of everything rather than actual punchlines.<br /><br />At first it was very odd and pretty funny but as the movie progressed I didn't find the jokes or oddness funny anymore.<br /><br />Its a low budget film (thats never a problem in itself), there were some pretty interesting characters, but eventually I just lost interest.<br /><br />I imagine this film would appeal to a stoner who is currently partaking.<br /><br />For something similar but better try 'Brother from another planet'", "I saw this movie when I was about 12 when it came out. I recall the scariest scene was the big bird eating men dangling helplessly from parachutes right out of the air. The horror. The horror.<br /><br />As a young kid going to these cheesy B films on Saturday afternoons, I still was tired of the formula for these monster type movies that usually included the hero, a beautiful woman who might be the daughter of a professor and a happy resolution when the monster died in the end. I didn't care much for the romantic angle as a 12 year old and the predictable plots. I love them now for the unintentional humor.<br /><br />But, about a year or so later, I saw Psycho when it came out and I loved that the star, Janet Leigh, was bumped off early in the film. I sat up and took notice at that point. Since screenwriters are making up the story, make it up to be as scary as possible and not from a well-worn formula. There are no rules.", "So im not a big fan of Boll's work but then again not many are. I enjoyed his movie Postal (maybe im the only one). Boll apparently bought the rights to use Far Cry long ago even before the game itself was even finsished. <br /><br />People who have enjoyed killing mercs and infiltrating secret research labs located on a tropical island should be warned, that this is not Far Cry... This is something Mr Boll have schemed together along with his legion of schmucks.. Feeling loneley on the set Mr Boll invites three of his countrymen to play with. These players go by the names of Til Schweiger, Udo Kier and Ralf Moeller.<br /><br />Three names that actually have made them selfs pretty big in the movie biz. So the tale goes like this, Jack Carver played by Til Schweiger (yes Carver is German all hail the bratwurst eating dudes!!) However I find that Tils acting in this movie is pretty badass.. People have complained about how he's not really staying true to the whole Carver agenda but we only saw carver in a first person perspective so we don't really know what he looked like when he was kicking a**.. <br /><br />However, the storyline in this film is beyond demented. We see the evil mad scientist Dr. 
Krieger played by Udo Kier, making Genetically-Mutated-soldiers or GMS as they are called. Performing his top-secret research on an island that reminds me of 'SPOILER' Vancouver for some reason. Thats right no palm trees here. Instead we got some nice rich lumberjack-woods. We haven't even gone FAR before I started to CRY (mehehe) I cannot go on any more.. If you wanna stay true to Bolls shenanigans then go and see this movie you will not be disappointed it delivers the true Boll experience, meaning most of it will suck.<br /><br />There are some things worth mentioning that would imply that Boll did a good work on some areas of the film such as some nice boat and fighting scenes. Until the whole cromed/albino GMS squad enters the scene and everything just makes me laugh.. The movie Far Cry reeks of scheisse (that's poop for you simpletons) from a fa,r if you wanna take a wiff go ahead.. BTW Carver gets a very annoying sidekick who makes you wanna shoot him the first three minutes he's on screen.", ] _____no_output_____ </code> # 4. Predict for each element in the list of strings_____no_output_____ <code> sentiment_pipe.predict(movie_reviews) _____no_output_____ </code>
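Before doing anything downstream with these predictions it can help to look at what `predict()` actually returns — a minimal sketch, assuming the result behaves like a pandas DataFrame (the usual return type for NLU pipelines); no specific output column names are assumed here, since they differ between pipelines.

 <code>
# Run the pipeline once over the whole review list and inspect the structure of the result,
# rather than relying on any particular output column being present.
preds_df = sentiment_pipe.predict(movie_reviews)
print(preds_df.columns.tolist())  # see which sentence/sentiment columns this pipeline emits
print(preds_df.head(3))           # quick look at the first few rows
</code>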
{ "repository": "UPbook-innovations/nlu", "path": "examples/colab/component_examples/classifiers/sentiment_classification_movies.ipynb", "matched_keywords": [ "STAR" ], "stars": 1, "size": 18721, "hexsha": "48290277edbc850b461c7271e338fa78aaa13868", "max_line_length": 18721, "avg_line_length": 18721, "alphanum_fraction": 0.6707440842 }
# Notebook from BastianZim/openai-python Path: examples/embeddings/Classification.ipynb ## Classification using the embeddings In the classification task we predict one of the predefined categories given an input. We will predict the score based on the embedding of the review's text, where the algorithm is correct only if it guesses the exact number of stars. We split the dataset into a training and a testing set for all the following tasks, so we can realistically evaluate performance on unseen data. The dataset is created in the [Obtain_dataset Notebook](Obtain_dataset.ipynb). In the following example we're predicting the number of stars in a review, from 1 to 5._____no_output_____ <code> import pandas as pd import numpy as np from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import train_test_split from sklearn.metrics import classification_report, accuracy_score df = pd.read_csv('output/embedded_1k_reviews.csv') df['babbage_similarity'] = df.babbage_similarity.apply(eval).apply(np.array) X_train, X_test, y_train, y_test = train_test_split(list(df.babbage_similarity.values), df.Score, test_size = 0.2, random_state=42) clf = RandomForestClassifier(n_estimators=100) clf.fit(X_train, y_train) preds = clf.predict(X_test) probas = clf.predict_proba(X_test) report = classification_report(y_test, preds) print(report) precision recall f1-score support 1 0.82 0.67 0.74 21 2 0.50 0.50 0.50 6 3 1.00 0.46 0.63 13 4 0.75 0.35 0.48 17 5 0.88 1.00 0.93 143 accuracy 0.86 200 macro avg 0.79 0.60 0.66 200 weighted avg 0.86 0.86 0.84 200 </code> We can see that the model has learnt to distinguish between the categories decently. 5-star reviews show the best performance overall, and this is not too surprising, since they are the most common in the dataset._____no_output_____ <code> from utils import plot_multiclass_precision_recall plot_multiclass_precision_recall(probas, y_test, [1,2,3,4,5], clf)RandomForestClassifier() - Average precision score over all classes: 0.93 </code> Unsurprisingly 5-star and 1-star reviews seem to be easier to predict. Perhaps with more data, the nuances between 2-4 stars could be better predicted, but there's also probably more subjectivity in how people use the inbetween scores._____no_output_____
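The cell above imports `accuracy_score` but only prints the full classification report; as a small follow-up sketch (not part of the original notebook), the single overall accuracy figure can be computed on its own, assuming `y_test` and `preds` from that cell are still in scope.

 <code>
# Overall accuracy on the held-out 200-review test set; this should agree with the
# "accuracy 0.86" row of the classification report printed above.
acc = accuracy_score(y_test, preds)
print(f"Test accuracy: {acc:.2f}")
</code>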
{ "repository": "BastianZim/openai-python", "path": "examples/embeddings/Classification.ipynb", "matched_keywords": [ "STAR" ], "stars": null, "size": 81941, "hexsha": "482ba85910074a7d3b7c96a60d3d5f9022c31872", "max_line_length": 77750, "avg_line_length": 625.5038167939, "alphanum_fraction": 0.94499701 }
# Notebook from beniza/learn-python Path: experiments/.ipynb_checkpoints/InternetArchive-checkpoint.ipynb # Manipulating the items on the archive.org website The `archive.org` is arguably the largest community-contributed collection of `items` such as books, movies, audio, images and even code. The snippets in this notebook can be used for automating your interactions with the `archive.org` servers. To make these interactions smoother, they've released a Python library named `internetarchive` with several methods that let you interact with their servers seamlessly. Keep in mind that you need to configure your work system before you can start interacting with the servers, especially if you are planning to make changes to the items you've submitted on the `archive.org` website._____no_output_____## Configuring your system The first step is to configure the system for using the library. They have a tiny script, `ia`, that will help you configure and secure your account._____no_output_____TODO: Write the steps to use `ia`. (A rough sketch is given at the end of this section, after the upload example below.)_____no_output_____ <code> from internetarchive import upload, get_item, modify_metadata, search_items_____no_output_____ </code> ## Reading the Metadata of an existing item An item on archive.org represents a single entry in the server's catalogue. Each item has two parts: its actual file(s), as well as its description. The description of an item is known as the metadata. You can access all of an item's metadata via the `item` object. _____no_output_____ <code> # To get the details of an item, you'd need the identifier item = get_item('vilpattukaltest1980kssp') item.item_metadata['metadata']_____no_output_____ </code> ## Upload an item It's fairly easy to establish a connection for creating an item on archive.org and uploading files to it. Remember that we need to upload both the actual files and the metadata when we create a new item. > If the item already has a file with the same filename, the existing file within the item will be overwritten. _____no_output_____ <code> # metadata is submitted to archive.org as a dictionary. Remember that the subject element must be a semi-colon separated string md = dict(title='Title of the item', mediatype='movies', subject='test; magazines') _____no_output_____r = upload('ia-test-upload', files=['test.txt'], metadata=md)_____no_output_____r[0].status_code_____no_output_____ </code> **Success** This has created the above item on the archive.org site.
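Coming back to the `ia` configuration step that is still marked TODO above — a minimal sketch of how the credentials are usually set up, assuming the `ia` command-line script that ships with the `internetarchive` package and its Python-side `configure()` helper; the exact prompts, import path and signature should be verified against the installed version, and the e-mail address and password shown are placeholders, not real credentials.

 <code>
# Option 1 (terminal): run the bundled `ia` script once and answer its prompts;
# it stores your archive.org credentials in a local config file that later API calls reuse.
#     $ ia configure
#
# Option 2 (from Python): the library also exposes a configure() helper for the same purpose.
# The e-mail address and password here are placeholders, not real credentials.
from internetarchive import configure

configure('you@example.com', 'your-archive-org-password')
</code>

Either route should only need to be run once per machine; after that, the calls used in the rest of this notebook (`get_item`, `upload`, `modify_metadata`, `search_items`) should pick the stored credentials up automatically.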
[Title of the item](https://archive.org/details/ia-test-upload)_____no_output_____## Modify the metadata_____no_output_____ <code> for book in kssp: print(book['identifier'])1966octsasthraga0000kssp 1969decsasthrake0000kssp 1969novsasthrake0000kssp 1969sathrakerala0000kssp 1970augeureka0000kssp 1970deceureka0000kssp 1970febsasthrake0000kssp 1970jansasthrake0000kssp 1970marsasthrake0000kssp 1970maysasthrake0000kssp 1970noveureka0000kssp 1970sepeureka0000kssp 1970sepeureka0000kssp_v4f2 1971apreureka0000kssp 1971febeureka0000kssp 1971janeureka0000kssp 1971mareureka0000kssp 1971mayeureka0000kssp 1983arogyarekha0000kssp 1983mannan0000rajm 1983nammudearogy0000ekba 1983pradhamasusr0000jaya 1984marunnuvyvas0000ekba 1984urjamchandra0000kssp 1985arationalstu0000jami 1985raionalityst0000shis 1986vaidyuthipra0000kssp 1986vaithyuthipr0000kssp 1986vanamvellam0000kssp 1986vyavasayaval0000kssp 1986vyavasayaval0000unse 1987janakeeyaaro0000kssp 1988vaidyuthikan0000mppa 1988vaidyuthinir0000unse 1989aksharathiln0000kssp 1989arogyasurvey0000kssp 1989deseeyavanit0000kssp 1989parishathums0000kssp 1989parishathums0000kssp_z0l1 1989randayiraman0000kssp 1989rogaprathiro0000kssp 1989sthreekalude0000kssp 1989sthreekalums0000kssp 1989sthreekalums0000kssp_d4r6 1990jalajanyarog0000kssp 1990keratintevai0000kssp 1990vanithakalum0000kssp 1990vanithangalu0000kssp 1992healthpolicy0000kssp 1992kerathinuoru0000kssp 1992marunnukalud0000kssp 1993samuhyasamra0000kssp 1993sthreekalude0000para 1993vanithasasth0000kssp 1994elippaniyump0000kssp 1994urjarekha0000unse 1995arogyarangat0000kssp 1995bhashasamska0000kssp 1995bodhanamadhy0000kssp 1995janakeeyvidy0000kssp 1995keralavidyab0000kssp 1995parishathuma0000kssp 1995swashrayavid0000kssp 1995urjavivadamkssp adhikaramjanangalkkufirstedn1989kssp adhikaramjanangalkkusecondedn1989kssp aksharajathaganangal1989kssp aksharakalajatha1989kssp aksharam1989kssp aksharam1989mhrdgoi alamarayileswapnangal0000kssp ammaparayunnu1992kssp asthama0000kssp aushadhavilavard0000ekba ayesteeyes1989kssp badalbudjet1993kssp balavedidinam1991kssp balikalathilekku1985kssp balolasavajatha1986kssp balolsavam1987kssp bruno1985kssp cheruthumvaluthum1984kssp chithrangalktrvarma1979kssp cvraman1982kssp denkelnirdhesam1992kssp deshabhimana1993kssp dhooredhooredhoore1990prmadhavapanicker ekmdistliteracy1990kssp eurekakalikal1992kssp eurekamagazinevo0000kssp eurekamagazinevo0000kssp_b8f9 expresshway2004kssp ezhuthupusthakam1989kssp ganithamekmlit1989kssp gattumnayangalum1994kssp geethangal1980kssp gramavikasanarekha1984kssp hallydhumakethu1986kssp indiayerakshikkan1992kssp irupathonnamnutandu1986kssp isrocharacase1995kssp janakeeyasasthraprasthanam1992kssp janakeeyasuthranavivadam2003kssp janakiyasuthranam2002kssp janangal1989kssp jwala0000kssp kalarppuvasthukkal0000kssp kalikkamaravindgupta1990kssp kalikoottam1992kssp kalivela2002kssp kallukalkrama1979kssp kannethathaprmad1983kssp kanyabhoomi1990kssp kariyunnakalpavriksham1984kssp karshikameghala1993kssp kavimalika1986kssp kayalnammude1991kssp keralapanchayathraj1995kssp keralathilevidyabhyasam1994kssp keralathintesampathu1984edn6kssp keralathinuoruar0000kssp keralavikasanam0000kssp keramulamkadukal0000kssp kilikoottam0000kssp kssp-archives kssp-inaguration1962kssp ksspsilentvalley0000kssp ksspsouvenir1983kssp ksspurjapathisan0000kssp kushtaraogam0000kssp leadkindlylight1989kssp mahothsavam1997kssp mamsamthinnunnasasyangal0000kssp manushyankatha1989ekmlit maramparanja0000kssp marichavareadakal1984kssp medicalsamaram0000ekba mochanam1982kssp mollakkayudekuthira1987kssp 
nadakangal1981kssp nadakasilpasala0000kssp naleyudempp1983kssp nallanaale1989ekmlit nammalariyan2000kssp nammalonnu1986kssp namukkuchuttumullalokam1989kssp narakam1981kssp navamalika1987kssp njansthree1990kssp onnaampadam1995kssp orudheeraswapnam1994kssp orukunjujanikkunnu0000kssp orumaramoruvaram1987kssp othukali1984kssp ottanthullalukal1980kssp padanolsavam1998kssp panayapeduthiyabhavi1992kssp papi1981kssp parishathaduppu0000kssp parishathenthucheyyunnu0000kssp parisheelanarekha1981kssp parisheelanarekha1982kssp paristhathaduppu0000kssp pattukal1980kssp pavanadakangal1988kssp penakalkadha1990kssp petamma1995kssp pkdjillavikasana1981kssp prathirodham0000kssp punyabhumiyudethengal1992kssp puthanpattukar1987kssp puthiyapadypadhathi1999kssp saksharathayanjamrandam1991kssp samathalam1981kssp samathavinjanotsavam1997kssp samoohyamanushyanum1982kssp sampathikaparishkaram1991kssp samrajyatham1992kssp santhigeetham1987kssp sasthragathifeb1968kssp sasthragathijan1967kssp sasthragathijuly1967kssp sasthrageethika1984kssp sasthragitha0000kssp sasthrakeralam0000kssp sasthrakeralammay1974kssp sasthramgramangalkku1982kssp sasthramgrameenarkku1983kssp sasthraparimathru2001kssp sasthrapariulgha1962kssp sasthrasankethika1993kssp seetha1985kssp sevanamekhala1993kssp silentvalleychar0000kssp silentvalleyjala0000kssp silentvalleypari0000kssp sisuvidyabhyasam1999kssp soapinterast2001kssp souraaduppu0000pgpa spartacs1981kssp stateconventiono0000kssp sthreekalumpinth0000sama sthreekalumsaksh0000kssp suryanteathmakadha1979kssp susthiravikasanamjankeeya2018kssp swasryakalajatha0000kssp thengumvazhayum0000kssp thiricharivu1989kssp thudarsaksharatha1992kssp unarthupaatu1989kssp unnathavidyabhyasa1999kssp urjamchodyothara0000kssp vanagatha0000kssp vanaparvam1981kssp vanitharekha1987kssp vayarilakkam0000kssp vayarilakkamchar0000rajm velichamenayi1989kssp velichamenayicha1989kssp velichathilekku1989kssp vidyapari2007kssp vidyaparvam1995kssp vikasanajatha1998kssp vikasanamjanangal1989ekmlit vikendreekrithaasoothranam0000kssp vilapesal1983kssp vilpattukal1980kssp vilpattukaltest1980kssp vipanikal1994kssp visittosilentval0000mssw vithuvaypa1993kssp vyavasayameghala1992kssp vyavasayamgramam1981kssp vyavasayasouvenir1980kssp vyavasayavalkaranam1986kssp yudham1981kssp </code> ## Normalizer This module normalizes current subject into the correct format expected by the archive.org website. Currently there are several non-standard forms there. 
Here are a few examples: ``` 'Sasthra Kala Jatha, Street Theatre' # 'str', but comma seperated ['Kerala Swasraya Samithi, Indian Agriculture, Globalization', 'Kerala Swashraya Samithy'] # 'list', comma separated 'KSSP leaflets;KSSP Health Books;Modern Medical Doctors' # 'str', but no space between entries ``` So before updating them with the new entries, we need to make sure that the subjects lines are formulated according to the standard format expeted by `archive.org` website._____no_output_____ <code> subject_text = 'KSSP leaflets;KSSP Health Books;Modern Medical Doctors' normalize_subject(subject_text)KSSP leaflets; KSSP Health Books; Modern Medical Doctors def normalize_subject(subject_text): if subject_text: if type(subject_text) == list: subject_text = ["; ".join(y.strip() for y in re.split(r'[;,]', x)) for x in subject_text] return("; ".join(subject_text)) else: subject_text = re.split(r'[;,]', subject_text) subject_text = [x.strip() for x in subject_text] return("; ".join(subject_text)) else: return ""_____no_output_____ </code> ### Fetch metadata info of an entire set of books _____no_output_____ <code> topic_name = 'kssp-archives' kssp = list(search_items(topic_name)) # fetch all the items within of a particular topic_____no_output_____ </code> ### Kerala Missionary Documents_____no_output_____ <code> strbuilder = '' for item_id in kerala_missionary_documents[2:]: # item = get_item(item_id['identifier']) item = get_item(item_id) cur_sub = False # print(item) try: cur_sub = item.item_metadata['metadata']['subject'] except Exception as e: # print("Error fetching data!: {}\t{}\t{}".format(item, cur_sub, str(e)) pass item_title = item.item_metadata['metadata']['title'] cur_sub = normalize_subject(cur_sub) if cur_sub: new_sub = cur_sub + "; Kerala Missionary Documents" print("{}\t{}".format(item_id, new_sub)) r = modify_metadata(item_id, metadata=dict(subject=new_sub)) r.status_code # print("{}\t{}\t{}\t{}\t{}".format(item_id, item_title, cur_sub, normalize_subject(cur_sub), new_sub)) 1815CMSMissionaryRegister Church Missionary Society; Kerala Missionary Documents 1816CMSMissionaryRegister CMS Missionary Register; Kerala Missionary Documents 1817CMSMissionaryRegister CMS Missionary Register; Kerala Missionary Documents 1818CMSMissionaryRegister CMS Missionary Register; Kerala Missionary Documents 1819CMSMissionaryRegister CMS Missionary Register; Kerala Missionary Documents 1820CMSMissionaryRegister CMS Missionary Register; Kerala Missionary Documents 1821CMSMissionaryRegister CMS Missionary Register; Kerala Missionary Documents 1822CMSMissionaryRegister CMS Missionary Register; Kerala Missionary Documents 1823CMSMissionaryRegister CMS Missionary Register; Kerala Missionary Documents 1824CMSMissionaryRegister CMS Missionary Register; Kerala Missionary Documents 1825CMSMissionaryRegister CMS Missionary Register; Kerala Missionary Documents 1826CMSMissionaryRegister CMS Missionary Register; Kerala Missionary Documents 1827CMSMissionaryRegister CMS Missionary Register; Kerala Missionary Documents 1828CMSMissionaryRegister CMS Missionary Register; Kerala Missionary Documents 1829CMSMissionaryRegister CMS Missionary Register; Kerala Missionary Documents 1830CMSMissionaryRegister CMS Missionary Register; Kerala Missionary Documents 1831CMSMissionaryRegister CMS Missionary Register; Kerala Missionary Documents 1832CMSMissionaryRegister CMS Missionary Register; Kerala Missionary Documents 1833CMSMissionaryRegister CMS Missionary Register; Kerala Missionary Documents 1835CMSMissionaryRegister 
CMS Missionary Register; Kerala Missionary Documents 1836CMSMissionaryRegister CMS Missionary Register; Kerala Missionary Documents 1837CMSMissionaryRegister CMS Missionary Register; Kerala Missionary Documents 1838CMSMissionaryRegister CMS Missionary Register; Kerala Missionary Documents 1839CMSMissionaryRegister CMS Missionary Register; Kerala Missionary Documents 1840CMSMissionaryRegister CMS Missionary Register; Kerala Missionary Documents 1841CMSMissionaryRegister CMS Missionary Register; Kerala Missionary Documents 1842CMSMissionaryRegister CMS Missionary Register; Kerala Missionary Documents 1842ChurchMissionaryGleanerVol2 The Church Missionary Gleaner; Kerala Missionary Documents 1843CMSMissionaryRegister CMS Missionary Register; Kerala Missionary Documents 1843ChurchMissionaryGleanerVol3 The Church Missionary Gleaner; Kerala Missionary Documents 1844CMSMissionaryRegister CMS Missionary Register; Kerala Missionary Documents 1845CMSMissionaryRegister CMS Missionary Register; Kerala Missionary Documents 1846CMSMissionaryRegister CMS Missionary Register; Kerala Missionary Documents 1846ChurchMissionaryGleanerVol6 The Church Missionary Gleaner; Kerala Missionary Documents 1847CMSMissionaryRegister CMS Missionary Register; Kerala Missionary Documents 1848CMSMissionaryRegister CMS Missionary Register; Kerala Missionary Documents 1848ChurchMissionaryGleanerVol8 Church Missionary Gleaner; Kerala Missionary Documents 1849ChurchMissionaryGleanerVol9 Church Missionary Gleaner; Kerala Missionary Documents 1850CMSMissionaryRegister CMS Missionary Register; Kerala Missionary Documents 1850TheChurchMissionaryIntelligencer The Church Missionary Intelligencer; Kerala Missionary Documents 1851CMSMissionaryRegister CMS Missionary Register; Kerala Missionary Documents 1851TheChurchMissionaryGleanerNewSeriesVol1 The Church Missionary Gleaner; Kerala Missionary Documents 1851TheChurchMissionaryIntelligencer The Church Missionary Intelligencer; Kerala Missionary Documents 1852CMSMissionaryRegister CMS Missionary Register; Kerala Missionary Documents 1852TheChurchMissionaryGleanerNewSeriesVol2 The Church Missionary Gleaner; Kerala Missionary Documents 1852TheChurchMissionaryIntelligencer The Church Missionary Intelligencer; Kerala Missionary Documents 1853CMSMissionaryRegister CMS Missionary Register; Kerala Missionary Documents 1853TheChurchMissionaryGleanerNewSeriesVol3 The Church Missionary Gleaner; Kerala Missionary Documents 1853TheChurchMissionaryIntelligencer The Church Missionary Intelligencer; Kerala Missionary Documents 1854CMSMissionaryRegister CMS Missionary Register; Kerala Missionary Documents 1854TheChurchMissionaryGleanerNewSeriesVol4 Church Missionary Gleaner; Kerala Missionary Documents 1854TheChurchMissionaryIntelligencer The Church Missionary Intelligencer; Kerala Missionary Documents 1855CMSMissionaryRegister CMS Missionary Register; Kerala Missionary Documents 1855TheChurchMissionaryGleanerNewSeriesVol5 Church Missionary Gleaner; Kerala Missionary Documents 1855TheChurchMissionaryIntelligencer The Church Missionary Intelligencer; Kerala Missionary Documents 1856TheChurchMissionaryGleanerNewSeriesVol6 The Church Missionary Gleaner; Kerala Missionary Documents 1856TheChurchMissionaryIntelligencer The Church Missionary Intelligencer; Kerala Missionary Documents 1857TheChurchMissionaryGleanerNewSeriesVol7 The Church Missionary Gleaner; Kerala Missionary Documents 1857TheChurchMissionaryIntelligencer The Church Missionary Intelligencer; Kerala Missionary Documents 
1858TheChurchMissionaryGleanerNewSeriesVol8 The Church Missionary Gleaner; Kerala Missionary Documents 1858TheChurchMissionaryIntelligencer The Church Missionary Intelligencer; Kerala Missionary Documents 1859TheChurchMissionaryAtlas The Church Missionary Atlas; Kerala Missionary Documents 1859TheChurchMissionaryGleanerNewSeriesVol9 The Church Missionary Gleaner; Kerala Missionary Documents 1859TheChurchMissionaryIntelligencer The Church Missionary Intelligencer; Kerala Missionary Documents 1860TheChurchMissionaryGleanerNewSeriesVol10 The Church Missionary Gleaner; Kerala Missionary Documents 1860TheChurchMissionaryIntelligencer The Church Missionary Intelligencer; Kerala Missionary Documents 1861TheChurchMissionaryGleanerNewSeriesVol11 The Church Missionary Gleaner; Kerala Missionary Documents 1861TheChurchMissionaryIntelligencer The Church Missionary Intelligencer; Kerala Missionary Documents 1862TheChurchMissionaryGleanerNewSeriesVol12 The Church Missionary Gleaner; Kerala Missionary Documents 1862TheChurchMissionaryIntelligencer The Church Missionary Intelligencer; Kerala Missionary Documents 1863TheChurchMissionaryGleanerNewSeriesVol13 The Church Missionary Gleaner; Kerala Missionary Documents 1863TheChurchMissionaryIntelligencer The Church Missionary Intelligencer; Kerala Missionary Documents 1864TheChurchMissionaryGleanerNewSeriesVol14 The Church Missionary Gleaner; Kerala Missionary Documents 1864TheChurchMissionaryIntelligencer The Church Missionary Intelligencer; Kerala Missionary Documents 1865TheChurchMissionaryGleanerNewSeriesVol15 The Church Missionary Gleaner; Kerala Missionary Documents 1865TheChurchMissionaryIntelligencer The Church Missionary Intelligencer; Kerala Missionary Documents 1866TheChurchMissionaryGleanerNewSeriesVol16 The Church Missionary Gleaner; Kerala Missionary Documents 1866TheChurchMissionaryIntelligencer The Church Missionary Intelligencer; Kerala Missionary Documents 1867TheChurchMissionaryGleanerNewSeriesVol17 Church Missionary Gleaner; Kerala Missionary Documents 1868TheChurchMissionaryGleanerNewSeriesVol18 The Church Missionary Gleaner; Kerala Missionary Documents 1868TheChurchMissionaryIntelligencer The Church Missionary Intelligencer; Kerala Missionary Documents 1869TheChurchMissionaryGleanerNewSeriesVol19 The Church Missionary Gleaner; Kerala Missionary Documents 1869TheChurchMissionaryIntelligencer The Church Missionary Intelligencer; Kerala Missionary Documents 1870TheChurchMissionaryGleanerNewSeriesVol20 The Church Missionary Gleaner; Kerala Missionary Documents 1870TheChurchMissionaryIntelligencer The Church Missionary Intelligencer; Kerala Missionary Documents 1871TheChurchMissionaryIntelligencer The Church Missionary Intelligencer; Kerala Missionary Documents 1873TheChurchMissionaryIntelligencer The Church Missionary Intelligencer; Kerala Missionary Documents 1874TheChurchMissionaryIntelligencer The Church Missionary Intelligencer; Kerala Missionary Documents 1875TheChurchMissionaryIntelligencer The Church Missionary Intelligencer; Kerala Missionary Documents 1876TheChurchMissionaryIntelligencer The Church Missionary Intelligencer; Kerala Missionary Documents </code> ### Malankara Edavaka Pathrika_____no_output_____ <code> my_collection = [] for i in search_items('collection:(MalankaraEdavakaPathrika)'): my_collection.append(i["identifier"]) print("{} items found in this collection".format(len(my_collection))_____no_output_____ </code> #### Add Malankara Edvaka Pathrika as a topic_____no_output_____ <code> total_items = len(my_collection) i 
= 1 for item_id in my_collection: # item = get_item(item_id['identifier']) item = get_item(item_id) cur_sub = False # print(item) try: cur_sub = item.item_metadata['metadata']['subject'] except Exception as e: # print("Error fetching data!: {}\t{}\t{}".format(item, cur_sub, str(e)) pass item_title = item.item_metadata['metadata']['title'] cur_sub = normalize_subject(cur_sub) if cur_sub: new_sub = "Malankara Edavaka Pathrika" print("{}/{}\t{}\t{}".format(i, total_items, item_id, new_sub)) i += 1 r = modify_metadata(item_id, metadata=dict(subject=new_sub)) # r.status_code 1/162 1892_Malankara_Edavaka_Pathrika_Volume_1_Issue_10 Malankara Edavaka Pathrika 2/162 1892_Malankara_Edavaka_Pathrika_Volume_1_Issue_11 Malankara Edavaka Pathrika 3/162 1892_Malankara_Edavaka_Pathrika_Volume_1_Issue_12 Malankara Edavaka Pathrika 4/162 1892_Malankara_Edavaka_Pathrika_Volume_1_Issue_2 Malankara Edavaka Pathrika 5/162 1892_Malankara_Edavaka_Pathrika_Volume_1_Issue_3 Malankara Edavaka Pathrika 6/162 1892_Malankara_Edavaka_Pathrika_Volume_1_Issue_4 Malankara Edavaka Pathrika 7/162 1892_Malankara_Edavaka_Pathrika_Volume_1_Issue_5 Malankara Edavaka Pathrika 8/162 1892_Malankara_Edavaka_Pathrika_Volume_1_Issue_6 Malankara Edavaka Pathrika 9/162 1892_Malankara_Edavaka_Pathrika_Volume_1_Issue_7 Malankara Edavaka Pathrika 10/162 1892_Malankara_Edavaka_Pathrika_Volume_1_Issue_8 Malankara Edavaka Pathrika 11/162 1892_Malankara_Edavaka_Pathrika_Volume_1_Issue_9 Malankara Edavaka Pathrika 12/162 1893MalankaraEdavakaPathrikaVolume02Issue01 Malankara Edavaka Pathrika 13/162 1893_Malankara_Edavaka_Pathrika_Volume_02_Issue_01 Malankara Edavaka Pathrika 14/162 1893_Malankara_Edavaka_Pathrika_Volume_02_Issue_02 Malankara Edavaka Pathrika 15/162 1893_Malankara_Edavaka_Pathrika_Volume_02_Issue_03 Malankara Edavaka Pathrika 16/162 1893_Malankara_Edavaka_Pathrika_Volume_02_Issue_04 Malankara Edavaka Pathrika 17/162 1893_Malankara_Edavaka_Pathrika_Volume_02_Issue_05 Malankara Edavaka Pathrika 18/162 1893_Malankara_Edavaka_Pathrika_Volume_02_Issue_06 Malankara Edavaka Pathrika 19/162 1893_Malankara_Edavaka_Pathrika_Volume_02_Issue_07 Malankara Edavaka Pathrika 20/162 1893_Malankara_Edavaka_Pathrika_Volume_02_Issue_08 Malankara Edavaka Pathrika 21/162 1893_Malankara_Edavaka_Pathrika_Volume_02_Issue_09 Malankara Edavaka Pathrika 22/162 1893_Malankara_Edavaka_Pathrika_Volume_02_Issue_10 Malankara Edavaka Pathrika 23/162 1893_Malankara_Edavaka_Pathrika_Volume_02_Issue_11 Malankara Edavaka Pathrika 24/162 1893_Malankara_Edavaka_Pathrika_Volume_02_Issue_12 Malankara Edavaka Pathrika 25/162 1894_Malankara_Edavaka_Pathrika_Volume_03_Issue_01 Malankara Edavaka Pathrika 26/162 1894_Malankara_Edavaka_Pathrika_Volume_03_Issue_04 Malankara Edavaka Pathrika 27/162 1894_Malankara_Edavaka_Pathrika_Volume_03_Issue_11 Malankara Edavaka Pathrika 28/162 1894_Malankara_Edavaka_Pathrika_Volume_03_Issue_12 Malankara Edavaka Pathrika 29/162 1895_Malankara_Edavaka_Pathrika_Volume_04_Issue_01 Malankara Edavaka Pathrika 30/162 1895_Malankara_Edavaka_Pathrika_Volume_04_Issue_02 Malankara Edavaka Pathrika 31/162 1895_Malankara_Edavaka_Pathrika_Volume_04_Issue_03 Malankara Edavaka Pathrika 32/162 1895_Malankara_Edavaka_Pathrika_Volume_04_Issue_04 Malankara Edavaka Pathrika 33/162 1895_Malankara_Edavaka_Pathrika_Volume_04_Issue_05 Malankara Edavaka Pathrika 34/162 1895_Malankara_Edavaka_Pathrika_Volume_04_Issue_06 Malankara Edavaka Pathrika 35/162 1895_Malankara_Edavaka_Pathrika_Volume_04_Issue_07 Malankara Edavaka Pathrika 36/162 
1895_Malankara_Edavaka_Pathrika_Volume_04_Issue_07_201803 Malankara Edavaka Pathrika 37/162 1895_Malankara_Edavaka_Pathrika_Volume_04_Issue_08 Malankara Edavaka Pathrika 38/162 1895_Malankara_Edavaka_Pathrika_Volume_04_Issue_09 Malankara Edavaka Pathrika 39/162 1895_Malankara_Edavaka_Pathrika_Volume_04_Issue_10 Malankara Edavaka Pathrika 40/162 1895_Malankara_Edavaka_Pathrika_Volume_04_Issue_11 Malankara Edavaka Pathrika 41/162 1895_Malankara_Edavaka_Pathrika_Volume_04_Issue_12 Malankara Edavaka Pathrika 42/162 1896_Malankara_Edavaka_Pathrika_Volume_05_Issue_01 Malankara Edavaka Pathrika 43/162 1896_Malankara_Edavaka_Pathrika_Volume_05_Issue_02 Malankara Edavaka Pathrika 44/162 1896_Malankara_Edavaka_Pathrika_Volume_05_Issue_03 Malankara Edavaka Pathrika 45/162 1896_Malankara_Edavaka_Pathrika_Volume_05_Issue_04 Malankara Edavaka Pathrika 46/162 1896_Malankara_Edavaka_Pathrika_Volume_05_Issue_05 Malankara Edavaka Pathrika 47/162 1896_Malankara_Edavaka_Pathrika_Volume_05_Issue_06 Malankara Edavaka Pathrika 48/162 1896_Malankara_Edavaka_Pathrika_Volume_05_Issue_07 Malankara Edavaka Pathrika 49/162 1896_Malankara_Edavaka_Pathrika_Volume_05_Issue_08 Malankara Edavaka Pathrika 50/162 1896_Malankara_Edavaka_Pathrika_Volume_05_Issue_09 Malankara Edavaka Pathrika 51/162 1896_Malankara_Edavaka_Pathrika_Volume_05_Issue_10 Malankara Edavaka Pathrika 52/162 1896_Malankara_Edavaka_Pathrika_Volume_05_Issue_11 Malankara Edavaka Pathrika 53/162 1896_Malankara_Edavaka_Pathrika_Volume_05_Issue_12 Malankara Edavaka Pathrika 54/162 1897_Malankara_Edavaka_Pathrika_Volume_06_Issue_02 Malankara Edavaka Pathrika 55/162 1897_Malankara_Edavaka_Pathrika_Volume_06_Issue_05 Malankara Edavaka Pathrika 56/162 1897_Malankara_Edavaka_Pathrika_Volume_06_Issue_07 Malankara Edavaka Pathrika 57/162 1897_Malankara_Edavaka_Pathrika_Volume_06_Issue_08 Malankara Edavaka Pathrika 58/162 1897_Malankara_Edavaka_Pathrika_Volume_06_Issue_09 Malankara Edavaka Pathrika 59/162 1897_Malankara_Edavaka_Pathrika_Volume_06_Issue_12 Malankara Edavaka Pathrika 60/162 1898_Malankara_Edavaka_Pathrika_Volume_07_Issue_04 Malankara Edavaka Pathrika 61/162 1899_Malankara_Edavaka_Pathrika_Volume_08_Issue_02 Malankara Edavaka Pathrika 62/162 1899_Malankara_Edavaka_Pathrika_Volume_08_Issue_05 Malankara Edavaka Pathrika 63/162 1900_Malankara_Edavaka_Pathrika_Volume_09_Issue_09 Malankara Edavaka Pathrika 64/162 1900_Malankara_Edavaka_Pathrika_Volume_09_Issue_12 Malankara Edavaka Pathrika 65/162 1901_Malankara_Edavaka_Pathrika_Volume_10_Issue_01 Malankara Edavaka Pathrika 66/162 1901_Malankara_Edavaka_Pathrika_Volume_10_Issue_02 Malankara Edavaka Pathrika 67/162 1901_Malankara_Edavaka_Pathrika_Volume_10_Issue_03 Malankara Edavaka Pathrika 68/162 1901_Malankara_Edavaka_Pathrika_Volume_10_Issue_04 Malankara Edavaka Pathrika 69/162 1901_Malankara_Edavaka_Pathrika_Volume_10_Issue_05 Malankara Edavaka Pathrika 70/162 1901_Malankara_Edavaka_Pathrika_Volume_10_Issue_06 Malankara Edavaka Pathrika 71/162 1901_Malankara_Edavaka_Pathrika_Volume_10_Issue_07 Malankara Edavaka Pathrika 72/162 1901_Malankara_Edavaka_Pathrika_Volume_10_Issue_08 Malankara Edavaka Pathrika 73/162 1901_Malankara_Edavaka_Pathrika_Volume_10_Issue_09 Malankara Edavaka Pathrika 74/162 1901_Malankara_Edavaka_Pathrika_Volume_10_Issue_10 Malankara Edavaka Pathrika 75/162 1901_Malankara_Edavaka_Pathrika_Volume_10_Issue_11 Malankara Edavaka Pathrika 76/162 1901_Malankara_Edavaka_Pathrika_Volume_10_Issue_12 Malankara Edavaka Pathrika 77/162 1902_Malankara_Edavaka_Pathrika_Volume_11_Issue05 Malankara 
Edavaka Pathrika 78/162 1902_Malankara_Edavaka_Pathrika_Volume_11_Issue_02 Malankara Edavaka Pathrika 79/162 1902_Malankara_Edavaka_Pathrika_Volume_11_Issue_03 Malankara Edavaka Pathrika 80/162 1902_Malankara_Edavaka_Pathrika_Volume_11_Issue_04 Malankara Edavaka Pathrika 81/162 1902_Malankara_Edavaka_Pathrika_Volume_11_Issue_06 Malankara Edavaka Pathrika 82/162 1902_Malankara_Edavaka_Pathrika_Volume_11_Issue_08 Malankara Edavaka Pathrika 83/162 1902_Malankara_Edavaka_Pathrika_Volume_11_Issue_09 Malankara Edavaka Pathrika 84/162 1902_Malankara_Edavaka_Pathrika_Volume_11_Issue_10 Malankara Edavaka Pathrika 85/162 1903_Malankara_Edavaka_Pathrika_Volume_12_Issue_04 Malankara Edavaka Pathrika 86/162 1903_Malankara_Edavaka_Pathrika_Volume_12_Issue_05 Malankara Edavaka Pathrika 87/162 1903_Malankara_Edavaka_Pathrika_Volume_12_Issue_06 Malankara Edavaka Pathrika 88/162 1903_Malankara_Edavaka_Pathrika_Volume_12_Issue_07 Malankara Edavaka Pathrika 89/162 1903_Malankara_Edavaka_Pathrika_Volume_12_Issue_08 Malankara Edavaka Pathrika 90/162 1903_Malankara_Edavaka_Pathrika_Volume_12_Issue_09 Malankara Edavaka Pathrika 91/162 1903_Malankara_Edavaka_Pathrika_Volume_12_Issue_10 Malankara Edavaka Pathrika 92/162 1903_Malankara_Edavaka_Pathrika_Volume_12_Issue_11 Malankara Edavaka Pathrika 93/162 1903_Malankara_Edavaka_Pathrika_Volume_12_Issue_12 Malankara Edavaka Pathrika 94/162 1904_Malankara_Edavaka_Pathrika_Volume_13_Issue_01 Malankara Edavaka Pathrika 95/162 1904_Malankara_Edavaka_Pathrika_Volume_13_Issue_02 Malankara Edavaka Pathrika 96/162 1904_Malankara_Edavaka_Pathrika_Volume_13_Issue_03 Malankara Edavaka Pathrika 97/162 1904_Malankara_Edavaka_Pathrika_Volume_13_Issue_04 Malankara Edavaka Pathrika </code> #### Update collection info Remove existing collections and add `kerala-archives` as a new collection_____no_output_____ <code> total_items = len(my_collection) i = 2 for item_id in my_collection[1:]: # item = get_item(item_id['identifier']) item = get_item(item_id) cur_sub = False # print(item) try: cur_sub = item.item_metadata['metadata']['collection'] except Exception as e: # print("Error fetching data!: {}\t{}\t{}".format(item, cur_sub, str(e)) pass item_title = item.item_metadata['metadata']['title'] cur_sub = normalize_subject(cur_sub) if cur_sub: new_sub = "kerala-archives" print("{}/{}\t{}\t{}".format(i, total_items, item_id, cur_sub)) i += 1 r = modify_metadata(item_id, metadata=dict(collection=[new_sub]))2/162 1892_Malankara_Edavaka_Pathrika_Volume_1_Issue_11 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 3/162 1892_Malankara_Edavaka_Pathrika_Volume_1_Issue_12 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 4/162 1892_Malankara_Edavaka_Pathrika_Volume_1_Issue_2 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 5/162 1892_Malankara_Edavaka_Pathrika_Volume_1_Issue_3 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 6/162 1892_Malankara_Edavaka_Pathrika_Volume_1_Issue_4 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 7/162 1892_Malankara_Edavaka_Pathrika_Volume_1_Issue_5 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 8/162 1892_Malankara_Edavaka_Pathrika_Volume_1_Issue_6 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 9/162 1892_Malankara_Edavaka_Pathrika_Volume_1_Issue_7 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 10/162 1892_Malankara_Edavaka_Pathrika_Volume_1_Issue_8 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 11/162 1892_Malankara_Edavaka_Pathrika_Volume_1_Issue_9 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 12/162 
1893MalankaraEdavakaPathrikaVolume02Issue01 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 13/162 1893_Malankara_Edavaka_Pathrika_Volume_02_Issue_01 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 14/162 1893_Malankara_Edavaka_Pathrika_Volume_02_Issue_02 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 15/162 1893_Malankara_Edavaka_Pathrika_Volume_02_Issue_03 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 16/162 1893_Malankara_Edavaka_Pathrika_Volume_02_Issue_04 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 17/162 1893_Malankara_Edavaka_Pathrika_Volume_02_Issue_05 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 18/162 1893_Malankara_Edavaka_Pathrika_Volume_02_Issue_06 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 19/162 1893_Malankara_Edavaka_Pathrika_Volume_02_Issue_07 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 20/162 1893_Malankara_Edavaka_Pathrika_Volume_02_Issue_08 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 21/162 1893_Malankara_Edavaka_Pathrika_Volume_02_Issue_09 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 22/162 1893_Malankara_Edavaka_Pathrika_Volume_02_Issue_10 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 23/162 1893_Malankara_Edavaka_Pathrika_Volume_02_Issue_11 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 24/162 1893_Malankara_Edavaka_Pathrika_Volume_02_Issue_12 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 25/162 1894_Malankara_Edavaka_Pathrika_Volume_03_Issue_01 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 26/162 1894_Malankara_Edavaka_Pathrika_Volume_03_Issue_04 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 27/162 1894_Malankara_Edavaka_Pathrika_Volume_03_Issue_11 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 28/162 1894_Malankara_Edavaka_Pathrika_Volume_03_Issue_12 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 29/162 1895_Malankara_Edavaka_Pathrika_Volume_04_Issue_01 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 30/162 1895_Malankara_Edavaka_Pathrika_Volume_04_Issue_02 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 31/162 1895_Malankara_Edavaka_Pathrika_Volume_04_Issue_03 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 32/162 1895_Malankara_Edavaka_Pathrika_Volume_04_Issue_04 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 33/162 1895_Malankara_Edavaka_Pathrika_Volume_04_Issue_05 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 34/162 1895_Malankara_Edavaka_Pathrika_Volume_04_Issue_06 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 35/162 1895_Malankara_Edavaka_Pathrika_Volume_04_Issue_07 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 36/162 1895_Malankara_Edavaka_Pathrika_Volume_04_Issue_07_201803 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 37/162 1895_Malankara_Edavaka_Pathrika_Volume_04_Issue_08 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 38/162 1895_Malankara_Edavaka_Pathrika_Volume_04_Issue_09 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 39/162 1895_Malankara_Edavaka_Pathrika_Volume_04_Issue_10 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 40/162 1895_Malankara_Edavaka_Pathrika_Volume_04_Issue_11 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 41/162 1895_Malankara_Edavaka_Pathrika_Volume_04_Issue_12 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 42/162 1896_Malankara_Edavaka_Pathrika_Volume_05_Issue_01 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 43/162 1896_Malankara_Edavaka_Pathrika_Volume_05_Issue_02 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 44/162 
1896_Malankara_Edavaka_Pathrika_Volume_05_Issue_03 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 45/162 1896_Malankara_Edavaka_Pathrika_Volume_05_Issue_04 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 46/162 1896_Malankara_Edavaka_Pathrika_Volume_05_Issue_05 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 47/162 1896_Malankara_Edavaka_Pathrika_Volume_05_Issue_06 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 48/162 1896_Malankara_Edavaka_Pathrika_Volume_05_Issue_07 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 49/162 1896_Malankara_Edavaka_Pathrika_Volume_05_Issue_08 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 50/162 1896_Malankara_Edavaka_Pathrika_Volume_05_Issue_09 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 51/162 1896_Malankara_Edavaka_Pathrika_Volume_05_Issue_10 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 52/162 1896_Malankara_Edavaka_Pathrika_Volume_05_Issue_11 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 53/162 1896_Malankara_Edavaka_Pathrika_Volume_05_Issue_12 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 54/162 1897_Malankara_Edavaka_Pathrika_Volume_06_Issue_02 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 55/162 1897_Malankara_Edavaka_Pathrika_Volume_06_Issue_05 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 56/162 1897_Malankara_Edavaka_Pathrika_Volume_06_Issue_07 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 57/162 1897_Malankara_Edavaka_Pathrika_Volume_06_Issue_08 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 58/162 1897_Malankara_Edavaka_Pathrika_Volume_06_Issue_09 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 59/162 1897_Malankara_Edavaka_Pathrika_Volume_06_Issue_12 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 60/162 1898_Malankara_Edavaka_Pathrika_Volume_07_Issue_04 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 61/162 1899_Malankara_Edavaka_Pathrika_Volume_08_Issue_02 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 62/162 1899_Malankara_Edavaka_Pathrika_Volume_08_Issue_05 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 63/162 1900_Malankara_Edavaka_Pathrika_Volume_09_Issue_09 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 64/162 1900_Malankara_Edavaka_Pathrika_Volume_09_Issue_12 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 65/162 1901_Malankara_Edavaka_Pathrika_Volume_10_Issue_01 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 66/162 1901_Malankara_Edavaka_Pathrika_Volume_10_Issue_02 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 67/162 1901_Malankara_Edavaka_Pathrika_Volume_10_Issue_03 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 68/162 1901_Malankara_Edavaka_Pathrika_Volume_10_Issue_04 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 69/162 1901_Malankara_Edavaka_Pathrika_Volume_10_Issue_05 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 70/162 1901_Malankara_Edavaka_Pathrika_Volume_10_Issue_06 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 71/162 1901_Malankara_Edavaka_Pathrika_Volume_10_Issue_07 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 72/162 1901_Malankara_Edavaka_Pathrika_Volume_10_Issue_08 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 73/162 1901_Malankara_Edavaka_Pathrika_Volume_10_Issue_09 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 74/162 1901_Malankara_Edavaka_Pathrika_Volume_10_Issue_10 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 75/162 1901_Malankara_Edavaka_Pathrika_Volume_10_Issue_11 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan 76/162 
1901_Malankara_Edavaka_Pathrika_Volume_10_Issue_12 MalankaraEdavakaPathrika; MalayalamHeritage; JaiGyan item = get_item('1813CMSMissionaryRegister')_____no_output_____ </code> ## Upload the updated topic list_____no_output_____ <code> f = open("kssp.tab", mode='r', encoding='utf-8') fc = f.read()_____no_output_____for line in fc.split("\n"): line = line.split("\t") new_item_id = line[0] new_sub = line[-1].split("; ") ka = False try: ka = new_sub.pop(new_sub.index('Kerala Archives')) r = modify_metadata(new_item_id, metadata=dict(subject=new_sub)) r.status_code print("; ".join(new_sub)) except: print(new_sub) ['Sasthragathi'] ['Sasthra Keralam Magazine'] ['Sasthra Keralam Magazine', 'KSSP Science Magazine'] ['Sasthra Keralam Magazine', 'Malayalam Science Magazine'] ['Eureka Magazine', "Malayalam Children's Magazine"] ['Eureka Magazine', "Malayalam Children's Magazine"] ['Sasthra Keralam Magazine', 'KSSP Science Magazine'] ['Sasthra Keralam Magazine', 'KSSP Science Magazine'] ['Sasthra Keralam Magazine', 'KSSP Science Magazine'] ['Sasthra Keralam Magazin', 'Malayalam Science Magazine'] ['Eureka Magazine', "Malayalam Children's Magazine"] ['Eureka Magazine', "Malayalam Children's Magazine"] ['Eureka Magazine', "Malayalam Children's Magazine"] ['Eureka Magazine', "Malayalam Children's Magazine"] ['Eureka Magazine'] ['Eureka Magazine', "Malayalam Children's Magazine"] ['Eureka Magazine', "Malayalam Children's Magazine"] ['Eureka Magazine'] ['KSSP leaflets', 'KSSP Health Books'] ['KSSP leaflets', 'KSSP Health Books', 'Measles'] ['KSSP leaflets', 'Kerala Health'] ['KSSP leaflets', 'First aid', 'KSSP Health Books'] ['KSSP leaflets', 'KSSP Health Books'] ['KSSP leaflets', 'KSSP Books about Power'] ['KSSP leaflets', 'KSSP Health Books'] ['KSSP leaflets', 'KSSP Health Books'] ['KSSP leaflets', 'Kerala Energy Problem', 'KSSP Energy Books'] ['KSSP leaflets', 'KSSP Energy Books', 'Power Problex in Kerala'] ['KSSP leaflets', 'Kerala Energy', 'KSSP Ecology Books'] ['KSSP leaflets', 'KSSP Development Books', 'Kerala Development'] ['KSSP leaflets', 'KSSP Development Books', 'Kerala Development'] ['KSSP leaflets', 'KSSP Health Books', 'Health Survey'] ['KSSP leaflets', 'Kerala Energy Problem', 'KSSP Energy Books'] ['KSSP leaflets', 'Kerala Power Problem'] ['KSSP leaflets', 'KSSP Health Books', 'Kerala Health Problem'] ['KSSP leaflets', 'KSSP Health Books', 'Health Survey'] ['KSSP leaflets', 'KSSP Gender Books'] ['KSSP leaflets about Gender'] ['KSSP leaflets', 'KSSP Gender Books'] ['KSSP leaflets', 'KSSP Health Books'] ['KSSP leaflets', 'KSSP Health Books', 'Health Survey'] ['KSSP leaflets', 'KSSP Books about Gender Bias'] ['KSSP leaflets', 'KSSP Gender Books'] ['KSSP leaflets', 'KSSP Gender Books'] ['KSSP leaflets', 'KSSP Health Books'] ['KSSP leaflets', 'Kerala Energy Problem', 'KSSP Energy Books'] ['KSSP leaflets', 'KSSP Gender Books'] ['KSSP leaflets', 'KSSP Gender Books', 'Civil Code and Gender Bias'] ['KSSP leaflets', 'KSSP Health Books'] ['KSSP leaflets', 'KSSP Health Books'] ['KSSP leaflets', 'KSSP Health Books', 'Medicine Price Hike'] ['KSSP leaflets', 'KSSP Gender Books', 'Gender Bias'] [''] ['KSSP leaflets', "Women's Health", 'KSSP Books about Health', 'KSSP books about women'] ['KSSP leaflets', 'KSSP Gender Books', 'Gender Bias in Kerala'] ['KSSP leaflets', 'KSSP Health Books', 'Leptospirosis', 'Plague'] ['KSSP leaflets', 'Power Problem of Kerala', 'KSSP Books about Power'] ['KSSP leaflets', 'KSSP Health Books'] ['KSSP leaflets', 'Kerala Public Education', 'KSSP Education Books'] ['KSSP leaflets', 
'Kerala Public Education', 'KSSP Education Books', 'Vidyabhyasa Jadha-95'] ['KSSP leaflets', 'Kerala Public Education', 'KSSP Education Books', 'Vidyabhyasa Jadha-95'] ['KSSP leaflets', 'Kerala Public Education', 'Vidyabhyasa Jadha-95'] ['KSSP leaflets', 'Kerala Public Education', 'KSSP Education Books', 'KSSP and Education Debates'] ['KSSP leaflets', 'Kerala Education', 'KSSP Books about Education', 'Vidyabhyasa Jadha-95'] ['Kerala Electricity', 'KSSP leaflets', 'Kerala Energy'] Planning; evelopment Decentralization; Planning; Democracy ['Kala Jatha', 'Street Theatre'] ['Ernakulam District Total Literarcy Programme', 'Kerala Literacy'] ['Ernakulam District Total Literarcy Programme'] ['Ernakulam District Total Literarcy Programme', 'Kerala Literacy'] ['Kilikkkoottam Jadha', 'Street Theatre'] ['Street Theatre'] ['Health Education', 'Asthma'] ['KSSP leaflets', 'KSSP Health Books'] ['Kala Jatha', 'Street Theatre'] ['Economics', 'Badjet', 'Kerala Swashraya Samithy'] ['Balavedi', 'Science History'] ['Sasthra Kala Jatha', 'Street Theatre'] ['Balalsava Jatha', 'Street Theatre'] ['Balolsava Songs'] ['Sasthra Kala Jatha', 'Street Theatre'] ['kssp-science books', 'P R Madhavappanikkar', 'Malayalam Physics Books'] ['Painting', 'Art', 'History'] ['Science Education', 'Biography', 'C V Raman'] Dunkal Draft ['New Economic Policy of India', 'Kerala Swasraya Samithi', 'Kerala Sasthra Sahithya Parishad'] ['kssp-science books', 'P R Madhavappanikkar', 'Malayalam Physics Books'] ['Ernakulam District Total Literarcy Programme', 'Kerala Literacy'] ['Science Education'] ['Eureka Magazine', "Malayalam Children's Magazine"] ['Eureka Magazine'] ['Kerala Development', 'Express Highway', 'Jalanidhi'] ['Ernakulam District Total Literarcy Programme', 'Kerala Literacy'] ['Ernakulam District Total Literarcy Programme', 'Mathematics Hand Book', 'Kerala Literacy'] ['Globalization', 'Gatt Agreement', 'Kerala Swashraya Samithy'] Sasthra Kala Jatha ['Kerala Economy', 'Rural Development'] ['Astronomy', "Halley's comet"] [''] ['Globalization', 'Dunkal Draft', 'Indian Economy', 'Kerala Swashraya Samithy'] ['Sasthra Kala Jatha', 'Street Theatre'] ISRO; KSSP leaflets ["People's Science Movement in Kerala"] ["People's Planning"] ["People's Planning", 'Participative Democracy', 'Decentralization'] ['Education', 'Literacy', 'Kerala Literacy'] ['Bhopal Gas Tragedy'] ['Science Education', 'Adulteration'] ['Malayalam Science Education Books'] ["'subject'"] [''] KSSP leaflets ['Parishath Geology Books', 'Rock', 'KSSP Books'] ['Astronomy', 'Cosmology'] Bharat Gyanvigyan Samithi Kerala; Street Theatre in Kerala; Literacy; Gender; Kerala Sasthra Sahithya Parishad ['Kerala Coconut Industry', 'KSSP leaflets'] ['Dunkal Draft', 'New Economic Policy', 'Agriculture', 'Kerala Swashraya Samithy'] ['Kala Jatha', 'Street Theatre'] ['Kala Jatha', 'Ecology'] ['Kerala Panchayathraj'] ['Education'] ['KSSP General Documents'] ['KSSP leaflets', 'KSSP Health Books', 'Kerala Health Books'] Development; Decentralization ['Bamboo', 'Kerala Environment'] ['Kala Jatha', 'Street Theatre'] ["'subject'"] [''] ['news kssp-inaguration'] ['KSSP leaflets', 'Silent Valley'] ['KSSP Souvenir'] ['KSSP leaflets', 'Power Problem in Kerala', 'Kayamkulam Thermal Station KSSP Books about Energy'] ['Health', 'leprosy'] ['Ernakulam District Total Literarcy Programme', 'Kerala Literacy'] ['Kala Jatha', 'Street Theatre'] ['carnivorous plants'] ['History', 'Ernakulam District Total Literarcy Programme'] ['Environment'] ['Kala Jatha', 'Street Theatre'] ['KSSP leaflets', 'KSSP 
Health Books'] ['Sasthra Kala Jatha', 'Street Theatre'] ['Sasthra Kala Jatha', 'Street Theatre'] Sasthra Kala Jatha; Street Theatre ['Street Theatre'] ['politics', 'KSSP Books'] ['KSSP- Health Education Books', 'Ernakulam District Total Literarcy Programme', 'Ernakulam District Total Literarcy Programme'] ['Globalization'] ['Kala Jatha', 'Street Theatre'] ['Ernakulam District Total Literarcy Programme', 'Malayalam Science Books'] Sasthra Kala Jatha; Street Theatre ['Sasthra Kala Jatha', 'Street Theatre'] [''] ['Kala Jatha', 'Street Theatre'] ['Kala Jatha', 'Street Theatre', 'Kerala Education'] ['Sasthra Kala Jatha', 'Street Theatre'] ['Reproduction', 'Sex Education'] ['Balalsava Jatha', 'Street Theatre'] ['Kala Jatha', 'Street Theatre'] Sasthra Kala Jatha; Street Theatre ['Development'] ['Kerala Swasraya Samithi', 'Economic Policy', 'Kerala Swashraya Samithy'] Sasthra Kala Jatha; Street Theatre ['KSSP leaflets', 'Parishathaduppu'] People's science movement People's science movement KSSP Cadre Education; People's Science Movement ['KSSP leaflets', 'Parishathaduppu'] ['KSSP Songs'] ['Sasthra Kala Jatha', 'Street Theatre', 'Puppetry Script'] Bharat Gyanvigyan Samithi Kerala; Street Theatre in Kerala; Literacy; Gender; Kerala Sasthra Sahithya Parishad ['Kala Jatha', 'Street Theatre'] ['Development Palakkad', 'Kerala Development'] ['vaccination'] ['Kala Jatha', 'Street Theatre'] ['Sasthra Kala Jatha', 'Street Theatre'] ['Education'] ['Education', 'Literacy'] Sasthra Kala Jatha; Street Theatre Bharat Gyanvigyan Samithi Kerala; Samatha Vijnanolsavam; Gender; Kerala Sasthra Sahithya Parishad ['Science Education'] Globalization; New Economic Policy ['Imperialism', 'Globalization', 'Kerala Swashraya Samithy'] ['Kala Jatha', 'Street Theatre'] ['Sasthragathi'] ['Sasthragathi'] ['Malayalam Science Magazine', 'Sasthragathi'] ['Kala Jatha', 'Street Theatre'] ['Grama Sasthra Jatha'] ['Sasthra Keralam Magazine', 'Malayalam Science Magazine'] ['Sasthra Keralam Magazine'] ['Grama Sasthra Jatha'] ['Grama Sasthra Jatha'] ['K.G. 
Adiyodi'] ['KSSP Souvenir', 'KSSP General Documents'] ['Science and Technology Policy', 'Kerala Swashraya Samithy'] ['Sasthra Kala Jatha', 'Street Theatre'] ['Kerala Swasraya Samithi', 'Globalization', 'Indian Economy', 'Kerala Swashraya Samithy'] ['KSSP leaflets', 'Silent Valley', 'KSSP Ecology Books'] ['KSSP leaflets', 'Silent Valley'] ['KSSP leaflets', 'Silent Valley'] ['Education'] ['Science Education', 'Soap', 'Indian Soap Industry'] [''] ['KSSP leaflets', 'KSSP energy Books'] Sasthra Kala Jatha; ['KSSP leaflets', 'KSSP Health Books', 'Modern Medical Doctors'] ['KSSP leaflets', 'Samatha leaflets', 'Gender Bias in Kerala'] ['KSSP leaflets', 'KSSP Gender Books', 'Gender Bias in Kerala'] ['Sun', 'Energy'] ['Development', 'Flood 2018', 'Climate Change'] ['Kala Jatha', 'Street Theatre'] ['Agriculture', 'Banana', 'Coconut'] ['Kala Jatha', 'Street Theatre'] ['Kerala Literacy', 'Education', 'Kerala Literarcy'] ['Kalajatha', 'Street Drama'] ['Higher Education Kerala', 'Kerala Education'] ['KSSP leaflets', 'Kerala Energy'] ['Kala Jatha', 'Street Theatre', 'Kerala Environment'] Sasthra Kala Jadha; Street Theatre ['Gender'] ['Health Education', 'Dysentery'] ['KSSP leaflets', 'KSSP Health Books'] ['Ernakulam District Total Literarcy Programme', 'Kerala Literacy'] ['Ernakulam District Total Literarcy Programme', 'Kerala Literacy'] ['Ernakulam District Total Literarcy Programme', 'Kerala Literacy'] ['Education', 'Kerala Education'] ['Kala Jatha', 'Street Theatre'] ["People's Planning", 'decentralization'] ['Government welfare schemes in Kerala', 'Ernakulam District Total Literarcy Programme'] Decentralization; Planning ['Sasthra Kala Jatha', 'Street Theatre'] ['Sasthra Kala Jatha', 'Street Theatre'] ['Kilipattu', '1982'] ['Sasthra Kala Jatha', 'Street Theatre'] ['KSSP leaflets', 'Silent Valley'] ['Kerala Swasraya Samithi', 'Indian Agriculture', 'Globalization', 'Kerala Swashraya Samithy'] ['Globalization', 'Indian Industry', 'Kerala Swashraya Samithy'] ['Village Industry', 'Rural Development'] ['KSSP Souvenir'] ['Industrialisation of Kerala'] Sasthra Kala Jatha; Street Theatre [''] shiju_list = [] for i in search_items('collection:(digitallibraryindia) AND uploader:([email protected])'): shiju_list.append(i["identifier"]) _____no_output_____len(shiju_list)_____no_output_____len(shiju_list)_____no_output_____kerala_missionary_documents = [] for i in search_items('collection:(digitallibraryindia) AND uploader:([email protected])'): kerala_missionary_documents.append(i["identifier"]) _____no_output_____for book_item in kerala_missionary_documents: print(book_item) 1813CMSMissionaryRegister 1814CMSMissionaryRegister 1815CMSMissionaryRegister 1816CMSMissionaryRegister 1817CMSMissionaryRegister 1818CMSMissionaryRegister 1819CMSMissionaryRegister 1820CMSMissionaryRegister 1821CMSMissionaryRegister 1822CMSMissionaryRegister 1823CMSMissionaryRegister 1824CMSMissionaryRegister 1825CMSMissionaryRegister 1826CMSMissionaryRegister 1827CMSMissionaryRegister 1828CMSMissionaryRegister 1829CMSMissionaryRegister 1830CMSMissionaryRegister 1831CMSMissionaryRegister 1832CMSMissionaryRegister 1833CMSMissionaryRegister 1835CMSMissionaryRegister 1836CMSMissionaryRegister 1837CMSMissionaryRegister 1838CMSMissionaryRegister 1839CMSMissionaryRegister 1840CMSMissionaryRegister 1841CMSMissionaryRegister 1842CMSMissionaryRegister 1842ChurchMissionaryGleanerVol2 1843CMSMissionaryRegister 1843ChurchMissionaryGleanerVol3 1844CMSMissionaryRegister 1845CMSMissionaryRegister 1846CMSMissionaryRegister 1846ChurchMissionaryGleanerVol6 
1847CMSMissionaryRegister 1848CMSMissionaryRegister 1848ChurchMissionaryGleanerVol8 1849ChurchMissionaryGleanerVol9 1850CMSMissionaryRegister 1850TheChurchMissionaryIntelligencer 1851CMSMissionaryRegister 1851TheChurchMissionaryGleanerNewSeriesVol1 1851TheChurchMissionaryIntelligencer 1852CMSMissionaryRegister 1852TheChurchMissionaryGleanerNewSeriesVol2 1852TheChurchMissionaryIntelligencer 1853CMSMissionaryRegister 1853TheChurchMissionaryGleanerNewSeriesVol3 1853TheChurchMissionaryIntelligencer 1854CMSMissionaryRegister 1854TheChurchMissionaryGleanerNewSeriesVol4 1854TheChurchMissionaryIntelligencer 1855CMSMissionaryRegister 1855TheChurchMissionaryGleanerNewSeriesVol5 1855TheChurchMissionaryIntelligencer 1856TheChurchMissionaryGleanerNewSeriesVol6 1856TheChurchMissionaryIntelligencer 1857TheChurchMissionaryGleanerNewSeriesVol7 1857TheChurchMissionaryIntelligencer 1858TheChurchMissionaryGleanerNewSeriesVol8 1858TheChurchMissionaryIntelligencer 1859TheChurchMissionaryAtlas 1859TheChurchMissionaryGleanerNewSeriesVol9 1859TheChurchMissionaryIntelligencer 1860TheChurchMissionaryGleanerNewSeriesVol10 1860TheChurchMissionaryIntelligencer 1861TheChurchMissionaryGleanerNewSeriesVol11 1861TheChurchMissionaryIntelligencer 1862TheChurchMissionaryGleanerNewSeriesVol12 1862TheChurchMissionaryIntelligencer 1863TheChurchMissionaryGleanerNewSeriesVol13 1863TheChurchMissionaryIntelligencer 1864TheChurchMissionaryGleanerNewSeriesVol14 1864TheChurchMissionaryIntelligencer 1865TheChurchMissionaryGleanerNewSeriesVol15 1865TheChurchMissionaryIntelligencer 1866TheChurchMissionaryGleanerNewSeriesVol16 1866TheChurchMissionaryIntelligencer 1867TheChurchMissionaryGleanerNewSeriesVol17 1868TheChurchMissionaryGleanerNewSeriesVol18 1868TheChurchMissionaryIntelligencer 1869TheChurchMissionaryGleanerNewSeriesVol19 1869TheChurchMissionaryIntelligencer 1870TheChurchMissionaryGleanerNewSeriesVol20 1870TheChurchMissionaryIntelligencer 1871TheChurchMissionaryIntelligencer 1873TheChurchMissionaryIntelligencer 1874TheChurchMissionaryIntelligencer 1875TheChurchMissionaryIntelligencer 1876TheChurchMissionaryIntelligencer 1877TheChurchMissionaryIntelligencer 1878TheChurchMissionaryGleaner 1878TheChurchMissionaryIntelligencer 1879TheChurchMissionaryGleaner 1879TheChurchMissionaryIntelligencer 1880TheChurchMissionaryGleaner 1880TheChurchMissionaryIntelligencer 1881TheChurchMissionaryGleaner 1881TheChurchMissionaryIntelligencer 1882TheChurchMissionaryGleaner 1882TheChurchMissionaryIntelligencer 1883TheChurchMissionaryGleaner 1883TheChurchMissionaryIntelligencer 1884TheChurchMissionaryGleaner 1884TheChurchMissionaryIntelligencer 1885TheChurchMissionaryGleaner 1885TheChurchMissionaryIntelligencer 1886TheChurchMissionaryGleaner 1887TheChurchMissionaryGleaner 1888TheChurchMissionaryGleaner 1888TheChurchMissionaryIntelligencer 1889TheChurchMissionaryGleaner 1889TheChurchMissionaryIntelligencer 1890TheChurchMissionaryGleaner 1890TheChurchMissionaryIntelligencer 1891TheChurchMissionaryGleaner 1891TheChurchMissionaryIntelligencer 1892TheChurchMissionaryGleaner 1892TheChurchMissionaryIntelligencer 1893TheChurchMissionaryIntelligencer 1894TheChurchMissionaryIntelligencer 1895TheChurchMissionaryAtlas 1895TheChurchMissionaryIntelligencer 1897TheChurchMissionaryIntelligencer 1898TheChurchMissionaryGleaner 1898TheChurchMissionaryIntelligencer 1899TheChurchMissionaryIntelligencer 1900TheChurchMissionaryGleaner 1900TheChurchMissionaryIntelligencer 1902TheChurchMissionaryGleaner 1902TheChurchMissionaryIntelligencer 1903TheChurchMissionaryGleaner 
1903TheChurchMissionaryIntelligencer 1904TheChurchMissionaryIntelligencer "https://archive.org/details/kssp-archives?and[]=subject%3A%22Kerala+Archives%22"_____no_output_____subject_list = '''Sasthragathi\nSasthra Keralam Magazine\nSasthra Keralam Magazine, KSSP Science Magazine\nSasthra Keralam Magazine, Malayalam Science Magazine\nEureka Magazine, Malayalam Children's Magazine\nEureka Magazine, Malayalam Children's Magazine\nSasthra Keralam Magazine, KSSP Science Magazine\nSasthra Keralam Magazine, KSSP Science Magazine\nSasthra Keralam Magazine, KSSP Science Magazine\nSasthra Keralam Magazin; Malayalam Science Magazine\nEureka Magazine, Malayalam Children's Magazine\nEureka Magazine, Malayalam Children's Magazine\nEureka Magazine, Malayalam Children's Magazine\nEureka Magazine, Malayalam Children's Magazine\nEureka Magazine\nEureka Magazine, Malayalam Children's Magazine\nEureka Magazine, Malayalam Children's Magazine\nEureka Magazine\nKSSP leaflets;KSSP Health Books\nKSSP leaflets;KSSP Health Books, Measles\nKSSP leaflets; Kerala Health\nKSSP leaflets; First aid, KSSP Health Books\nKSSP leaflets; KSSP Health Books\nKSSP leaflets;KSSP Books about Power\nKSSP leaflets;KSSP Health Books\nKSSP leaflets;KSSP Health Books\nKSSP leaflets;Kerala Energy Problem;KSSP Energy Books\nKSSP leaflets;KSSP Energy Books;Power Problex in Kerala\nKSSP leaflets;Kerala Energy;KSSP Ecology Books\nKSSP leaflets; KSSP Development Books; Kerala Development\nKSSP leaflets; KSSP Development Books; Kerala Development\nKSSP leaflets;KSSP Health Books;Health Survey\nKSSP leaflets;Kerala Energy Problem;KSSP Energy Books\nKSSP leaflets;Kerala Power Problem\nKSSP leaflets;KSSP Health Books;Kerala Health Problem\nKSSP leaflets;KSSP Health Books;Health Survey\nKSSP leaflets;KSSP Gender Books\nKSSP leaflets about Gender\nKSSP leaflets;KSSP Gender Books\nKSSP leaflets;KSSP Health Books\nKSSP leaflets;KSSP Health Books;Health Survey\nKSSP leaflets; KSSP Books about Gender Bias\nKSSP leaflets;KSSP Gender Books\nKSSP leaflets;KSSP Gender Books\nKSSP leaflets;KSSP Health Books\nKSSP leaflets;Kerala Energy Problem;KSSP Energy Books\nKSSP leaflets;KSSP Gender Books\nKSSP leaflets;KSSP Gender Books;Civil Code and Gender Bias\nKSSP leaflets;KSSP Health Books\nKSSP leaflets;KSSP Health Books\nKSSP leaflets;KSSP Health Books;Medicine Price Hike\nKSSP leaflets;KSSP Gender Books;Gender Bias\nKSSP leaflets; Women's Health; KSSP Books about Health; KSSP books about women\nKSSP leaflets;KSSP Gender Books;Gender Bias in Kerala\nKSSP leaflets;KSSP Health Books;Leptospirosis;Plague\nKSSP leaflets;Power Problem of Kerala, KSSP Books about Power\nKSSP leaflets;KSSP Health Books\nKSSP leaflets;Kerala Public Education;KSSP Education Books\nKSSP leaflets;Kerala Public Education;KSSP Education Books; Vidyabhyasa Jadha-95\nKSSP leaflets;Kerala Public Education;KSSP Education Books;Vidyabhyasa Jadha-95\nKSSP leaflets;Kerala Public Education, Vidyabhyasa Jadha-95\nKSSP leaflets;Kerala Public Education;KSSP Education Books;KSSP and Education Debates\nKSSP leaflets; Kerala Education; KSSP Books about Education; Vidyabhyasa Jadha-95\n['Kerala Electricity', 'KSSP leaflets', 'Kerala Energy']\n['Planning, evelopment', 'Kerala Archives']\n['Decentralization, Planning, Democracy', 'Kerala Archives']\nKala Jatha, Street Theatre\n['Ernakulam District Total Literarcy Programme', 'Kerala Literacy']\nErnakulam District Total Literarcy Programme\n['Ernakulam District Total Literarcy Programme', 'Kerala Literacy']\nKilikkkoottam Jadha, Street Theatre\nStreet 
Theatre\nHealth Education, Asthma\nKSSP leaflets;KSSP Health Books\nKala Jatha, Street Theatre\n['Economics, Badjet', 'Kerala Swashraya Samithy']\nBalavedi, Science History\nSasthra Kala Jatha, Street Theatre\nBalalsava Jatha, Street Theatre\nBalolsava Songs\nSasthra Kala Jatha, Street Theatre\nkssp-science books, P R Madhavappanikkar, Malayalam Physics Books\nPainting, Art, History\nScience Education, Biography, C V Raman\n['Dunkal Draft', 'Kerala Archives']\n['New Economic Policy of India, Kerala Swasraya Samithi', 'Kerala Sasthra Sahithya Parishad']\n['kssp-science books', 'P R Madhavappanikkar', 'Malayalam Physics Books']\n['Ernakulam District Total Literarcy Programme', 'Kerala Literacy']\nScience Education\nEureka Magazine, Malayalam Children's Magazine\nEureka Magazine\nKerala Development, Express Highway, Jalanidhi\n['Ernakulam District Total Literarcy Programme', 'Kerala Literacy']\n['Ernakulam District Total Literarcy Programme; Mathematics Hand Book', 'Kerala Literacy']\n['Globalization, Gatt Agreement', 'Kerala Swashraya Samithy']\n['Sasthra Kala Jatha', 'Kerala Archives']\nKerala Economy, Rural Development\nAstronomy, Halley's comet\n['Globalization, Dunkal Draft, Indian Economy', 'Kerala Swashraya Samithy']\nSasthra Kala Jatha, Street Theatre\n['ISRO; KSSP leaflets', 'Kerala Archives']\nPeople's Science Movement in Kerala\nPeople's Planning\nPeople's Planning; Participative Democracy, Decentralization\n['Education, Literacy', 'Kerala Literacy']\nBhopal Gas Tragedy\nScience Education, Adulteration\nMalayalam Science Education Books'''.split("\n")_____no_output_____subject_list = [x.replace('"', '') for x in subject_list]_____no_output_____sl = [] for subject_text in subject_list: sl.append(normalize_subject(subject_text))_____no_output_____sl = [normalize_subject(x) for x in sl]_____no_output_____sl_____no_output_____ </code>
{ "repository": "beniza/learn-python", "path": "experiments/.ipynb_checkpoints/InternetArchive-checkpoint.ipynb", "matched_keywords": [ "ecology" ], "stars": null, "size": 107716, "hexsha": "482d190bccc0e949ca7c2cbaa4b53f4f8d163c31", "max_line_length": 5076, "avg_line_length": 57.2348565356, "alphanum_fraction": 0.7267444019 }
# Notebook from jamfeitosa/ia898 Path: src/isccsym.ipynb # Function isccsym ## Description Check if the input image is symmetric and return a boolean value. ## Synopse Check for conjugate symmetry - **b = isccsym(F)** - **b**: Boolean. - **F**: Image. Complex image._____no_output_____ <code> import numpy as np def isccsym2(F): if len(F.shape) == 1: F = F[np.newaxis,np.newaxis,:] if len(F.shape) == 2: F = F[np.newaxis,:,:] n,m,p = F.shape x,y,z = np.indices((n,m,p)) Xnovo = np.mod(-1*x,n) Ynovo = np.mod(-1*y,m) Znovo = np.mod(-1*z,p) aux = np.conjugate(F[Xnovo,Ynovo,Znovo]) return (abs(F-aux)<10E-4).all()_____no_output_____def isccsym(F): import ia898.src as ia if len(F.shape) == 1: F = F[np.newaxis,np.newaxis,:] if len(F.shape) == 2: F = F[np.newaxis,:,:] n,m,p = F.shape return(abs(F-np.conjugate(ia.ptrans(F[::-1,::-1,::-1],(1,1,1))))<10E-4).all() _____no_output_____ </code> ## Examples_____no_output_____ <code> testing = (__name__ == "__main__") if testing: ! jupyter nbconvert --to python isccsym.ipynb import numpy as np import sys,os ia898path = os.path.abspath('../../') if ia898path not in sys.path: sys.path.append(ia898path) import ia898.src as ia import matplotlib.image as mpimg[NbConvertApp] Converting notebook isccsym.ipynb to python [NbConvertApp] Writing 3808 bytes to isccsym.py </code> ### Numeric Example: 1D data_____no_output_____ <code> if testing: F = np.arange(5) print('Is 1d odd dimension vetor symmetric?',ia.isccsym(F),'\n') F = np.arange(6) print('Is 1d even dimension vetor symmetric?',ia.isccsym(F),'\n') F = np.array( [1j,1j,0,1j,1j] ) print('Is 1d even dimension vetor symmetric?',ia.isccsym(F),'\n')Is 1d odd dimension vetor symmetric? False Is 1d even dimension vetor symmetric? False Is 1d even dimension vetor symmetric? False </code> ### Numeric Example: real symmetric matrix_____no_output_____ <code> if testing: F = np.array( [ [0,1,1], [2,4,3], [2,3,4]] ) print('Is function F symmetric?',ia.isccsym(F),'\n')Is function F symmetric? True </code> ### Numeric Example: imaginary matrix_____no_output_____ <code> if testing: F = np.array([ [ 0j,1j,-1j], [ 2j,4j,-3j], [-2j,3j,-4j]] ) print('Is function F symmetric?',ia.isccsym(F),'\n') F = np.array( [ [ 2j,1j,-1j], [ 2j,4j,-3j], [-2j,3j,-4j]] ) print('Is function F symmetric?',ia.isccsym(F),'\n')Is function F symmetric? True Is function F symmetric? False </code> ### Numeric Example: Fourier transformation of a real image is symmetric_____no_output_____ <code> if testing: print('Is this function symmetric?') print(ia.isccsym(np.fft.fft2(np.random.rand(100,100)))) # dimension variation print(ia.isccsym(np.fft.fft2(np.random.rand(101,100)))) print(ia.isccsym(np.fft.fft2(np.random.rand(101,101))))Is this function symmetric? 
True True True </code> ### Image Example: circular filter_____no_output_____ <code> if testing: img = mpimg.imread('../data/cameraman.tif') F = ia.dft(img) imgc = 1 * ia.circle(img.shape, 50, [img.shape[0]/2, img.shape[1]/2]) imgct = ia.ptrans(imgc, np.array(imgc.shape)//2) ia.adshow(ia.normalize(imgct),'circular filter') res = F * imgct ia.adshow(ia.dftview(res)) print('Is this filter symmetric?', ia.isccsym(res))_____no_output_____ </code> ### Image Example 2: retangular filter_____no_output_____ <code> if False: # testing: mquadra = ia.rectangle(img.shape, [50,50], [img.shape[0]/2, img.shape[1]/2]) ia.adshow(mquadra,'RETANGULO') mquadra = ia.ptrans(mquadra, array(mquadra.shape)/2) ia.adshow(ia.normalize(mquadra),'retangular filter') mfiltrada = F * mquadra print('Is this filter symmetric?', ia.isccsym(mfiltrada))_____no_output_____ </code> ## Equation $$ \begin{matrix} F(s,u,v) &=& F^{\star}(-s \ mod\ P,-u \ mod\ N, -v \ mod\ M) \\ & & (0,0,0) \leq (s,u,v) < (P,N,M) \end{matrix} $$_____no_output_____## See also - [dftview](dftview.ipynb) - Ready for central spectrum visualization of DFT _____no_output_____## Contributions - Marcelo Zoccoler, 1st semester, 2017 - Mariana Pinheiro, 1st semester 2011_____no_output_____
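As a quick stand-alone numerical check of the symmetry relation stated in the Equation section above (a sketch using only NumPy, independent of the `ia898` helpers), one can verify that the DFT of a real-valued array equals its own complex conjugate evaluated at negated, modulo-wrapped indices:

```python
import numpy as np

# DFT of a real-valued 2D array
A = np.random.rand(4, 6)
F = np.fft.fft2(A)

# build F*(-u mod N, -v mod M) explicitly and compare it with F
N, M = F.shape
u, v = np.indices((N, M))
print(np.allclose(F, np.conjugate(F[(-u) % N, (-v) % M])))  # True: the spectrum is conjugate-symmetric
```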
{ "repository": "jamfeitosa/ia898", "path": "src/isccsym.ipynb", "matched_keywords": [ "STAR" ], "stars": 14, "size": 21043, "hexsha": "482e2042c2bc8cd51e30de560fd7de6091e1cf23", "max_line_length": 9339, "avg_line_length": 46.7622222222, "alphanum_fraction": 0.7130637267 }
# Notebook from Zuyuf/Advanced-Machine-Learning-Specialization Path: Introduction to Deep Learning/Week5/POS-task.ipynb __This seminar:__ after you're done coding your own recurrent cells, it's time you learn how to train recurrent networks easily with Keras. We'll also learn some tricks on how to use keras layers and model. We also want you to note that this is a non-graded assignment, meaning you are not required to pass it for a certificate. Enough beatin' around the bush, let's get to the task!_____no_output_____## Part Of Speech Tagging <img src=https://i.stack.imgur.com/6pdIT.png width=320> Unlike our previous experience with language modelling, this time around we learn the mapping between two different kinds of elements. This setting is common for a range of useful problems: * Speech Recognition - processing human voice into text * Part Of Speech Tagging - for morphology-aware search and as an auxuliary task for most NLP problems * Named Entity Recognition - for chat bots and web crawlers * Protein structure prediction - for bioinformatics Our current guest is part-of-speech tagging. As the name suggests, it's all about converting a sequence of words into a sequence of part-of-speech tags. We'll use a reduced tag set for simplicity: ### POS-tags - ADJ - adjective (new, good, high, ...) - ADP - adposition (on, of, at, ...) - ADV - adverb (really, already, still, ...) - CONJ - conjunction (and, or, but, ...) - DET - determiner, article (the, a, some, ...) - NOUN - noun (year, home, costs, ...) - NUM - numeral (twenty-four, fourth, 1991, ...) - PRT - particle (at, on, out, ...) - PRON - pronoun (he, their, her, ...) - VERB - verb (is, say, told, ...) - . - punctuation marks (. , ;) - X - other (ersatz, esprit, dunno, ...)_____no_output_____ <code> import nltk import sys import numpy as np nltk.download('brown') nltk.download('universal_tagset') data = nltk.corpus.brown.tagged_sents(tagset='universal') all_tags = ['#EOS#','#UNK#','ADV', 'NOUN', 'ADP', 'PRON', 'DET', '.', 'PRT', 'VERB', 'X', 'NUM', 'CONJ', 'ADJ'] data = np.array([ [(word.lower(),tag) for word,tag in sentence] for sentence in data ])[nltk_data] Downloading package brown to [nltk_data] C:\Users\Jiadai\AppData\Roaming\nltk_data... [nltk_data] Package brown is already up-to-date! [nltk_data] Downloading package universal_tagset to [nltk_data] C:\Users\Jiadai\AppData\Roaming\nltk_data... [nltk_data] Package universal_tagset is already up-to-date! from sklearn.model_selection import train_test_split train_data,test_data = train_test_split(data,test_size=0.25,random_state=42)_____no_output_____from IPython.display import HTML, display def draw(sentence): words,tags = zip(*sentence) display(HTML('<table><tr>{tags}</tr>{words}<tr></table>'.format( words = '<td>{}</td>'.format('</td><td>'.join(words)), tags = '<td>{}</td>'.format('</td><td>'.join(tags))))) draw(data[11]) draw(data[10]) draw(data[7])_____no_output_____ </code> ### Building vocabularies Just like before, we have to build a mapping from tokens to integer ids. This time around, our model operates on a word level, processing one word per RNN step. This means we'll have to deal with far larger vocabulary. Luckily for us, we only receive those words as input i.e. we don't have to predict them. 
This means we can have a large vocabulary for free by using word embeddings._____no_output_____ <code> from collections import Counter word_counts = Counter() for sentence in data: words,tags = zip(*sentence) word_counts.update(words) all_words = ['#EOS#','#UNK#']+list(list(zip(*word_counts.most_common(10000)))[0]) #let's measure what fraction of data words are in the dictionary print("Coverage = %.5f"%(float(sum(word_counts[w] for w in all_words)) / sum(word_counts.values())))Coverage = 0.92876 from collections import defaultdict word_to_id = defaultdict(lambda:1,{word:i for i,word in enumerate(all_words)}) tag_to_id = {tag:i for i,tag in enumerate(all_tags)}_____no_output_____ </code> convert words and tags into fixed-size matrix_____no_output_____ <code> def to_matrix(lines,token_to_id,max_len=None,pad=0,dtype='int32',time_major=False): """Converts a list of names into rnn-digestable matrix with paddings added after the end""" max_len = max_len or max(map(len,lines)) matrix = np.empty([len(lines),max_len],dtype) matrix.fill(pad) for i in range(len(lines)): line_ix = list(map(token_to_id.__getitem__,lines[i]))[:max_len] matrix[i,:len(line_ix)] = line_ix return matrix.T if time_major else matrix _____no_output_____batch_words,batch_tags = zip(*[zip(*sentence) for sentence in data[-3:]]) print("Word ids:") print(to_matrix(batch_words,word_to_id)) print("Tag ids:") print(to_matrix(batch_tags,tag_to_id))Word ids: [[ 2 3057 5 2 2238 1334 4238 2454 3 6 19 26 1070 69 8 2088 6 3 1 3 266 65 342 2 1 3 2 315 1 9 87 216 3322 69 1558 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [ 45 12 8 511 8419 6 60 3246 39 2 1 1 3 2 845 1 3 1 3 10 9910 2 1 3470 9 43 1 1 3 6 2 1046 385 73 4562 3 9 2 1 1 3250 3 12 10 2 861 5240 12 8 8936 121 1 4] [ 33 64 26 12 445 7 7346 9 8 3337 3 1 2811 3 2 463 572 2 1 1 1649 12 1 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]] Tag ids: [[ 6 3 4 6 3 3 9 9 7 12 4 5 9 4 6 3 12 7 9 7 9 8 4 6 3 7 6 13 3 4 6 3 9 4 3 7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [ 5 9 6 9 3 12 6 3 7 6 13 3 7 6 13 3 7 13 7 5 9 6 3 3 4 6 13 3 7 12 6 3 6 13 3 7 4 6 3 9 3 7 9 4 6 13 3 9 6 3 2 13 7] [ 4 6 5 9 13 4 3 4 6 13 7 13 3 7 6 3 4 6 13 3 3 9 9 7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]] </code> ### Build model Unlike our previous lab, this time we'll focus on a high-level keras interface to recurrent neural networks. It is as simple as you can get with RNN, allbeit somewhat constraining for complex tasks like seq2seq. By default, all keras RNNs apply to a whole sequence of inputs and produce a sequence of hidden states `(return_sequences=True` or just the last hidden state `(return_sequences=False)`. All the recurrence is happening under the hood. At the top of our model we need to apply a Dense layer to each time-step independently. As of now, by default keras.layers.Dense would apply once to all time-steps concatenated. We use __keras.layers.TimeDistributed__ to modify Dense layer so that it would apply across both batch and time axes._____no_output_____ <code> import keras import keras.layers as L model = keras.models.Sequential() model.add(L.InputLayer([None],dtype='int32')) model.add(L.Embedding(len(all_words),50)) model.add(L.SimpleRNN(64,return_sequences=True)) #add top layer that predicts tag probabilities stepwise_dense = L.Dense(len(all_tags),activation='softmax') stepwise_dense = L.TimeDistributed(stepwise_dense) model.add(stepwise_dense)Using TensorFlow backend. </code> __Training:__ in this case we don't want to prepare the whole training dataset in advance. 
The main cause is that the length of every batch depends on the maximum sentence length within the batch. This leaves us two options: use custom training code as in previous seminar or use generators. Keras models have a __`model.fit_generator`__ method that accepts a python generator yielding one batch at a time. But first we need to implement such generator:_____no_output_____ <code> from keras.utils.np_utils import to_categorical BATCH_SIZE=32 def generate_batches(sentences,batch_size=BATCH_SIZE,max_len=None,pad=0): assert isinstance(sentences,np.ndarray),"Make sure sentences is a numpy array" while True: indices = np.random.permutation(np.arange(len(sentences))) for start in range(0,len(indices)-1,batch_size): batch_indices = indices[start:start+batch_size] batch_words,batch_tags = [],[] for sent in sentences[batch_indices]: words,tags = zip(*sent) batch_words.append(words) batch_tags.append(tags) batch_words = to_matrix(batch_words,word_to_id,max_len,pad) batch_tags = to_matrix(batch_tags,tag_to_id,max_len,pad) batch_tags_1hot = to_categorical(batch_tags,len(all_tags)).reshape(batch_tags.shape+(-1,)) yield batch_words,batch_tags_1hot _____no_output_____ </code> __Callbacks:__ Another thing we need is to measure model performance. The tricky part is not to count accuracy after sentence ends (on padding) and making sure we count all the validation data exactly once. While it isn't impossible to persuade Keras to do all of that, we may as well write our own callback that does that. Keras callbacks allow you to write a custom code to be ran once every epoch or every minibatch. We'll define one via LambdaCallback_____no_output_____ <code> def compute_test_accuracy(model): test_words,test_tags = zip(*[zip(*sentence) for sentence in test_data]) test_words,test_tags = to_matrix(test_words,word_to_id),to_matrix(test_tags,tag_to_id) #predict tag probabilities of shape [batch,time,n_tags] predicted_tag_probabilities = model.predict(test_words,verbose=1) predicted_tags = predicted_tag_probabilities.argmax(axis=-1) #compute accurary excluding padding numerator = np.sum(np.logical_and((predicted_tags == test_tags),(test_words != 0))) denominator = np.sum(test_words != 0) return float(numerator)/denominator class EvaluateAccuracy(keras.callbacks.Callback): def on_epoch_end(self,epoch,logs=None): sys.stdout.flush() print("\nMeasuring validation accuracy...") acc = compute_test_accuracy(self.model) print("\nValidation accuracy: %.5f\n"%acc) sys.stdout.flush() _____no_output_____model.compile('adam','categorical_crossentropy') model.fit_generator(generate_batches(train_data),len(train_data)/BATCH_SIZE, callbacks=[EvaluateAccuracy()], epochs=5,)Epoch 1/5 1344/1343 [==============================] - 22s 16ms/step - loss: 0.2464 Measuring validation accuracy... 14335/14335 [==============================] - 4s 300us/step Validation accuracy: 0.94055 Epoch 2/5 1344/1343 [==============================] - 20s 15ms/step - loss: 0.0578 Measuring validation accuracy... 14335/14335 [==============================] - 4s 305us/step Validation accuracy: 0.94403 Epoch 3/5 1344/1343 [==============================] - 20s 15ms/step - loss: 0.0518 Measuring validation accuracy... 14335/14335 [==============================] - 4s 307us/step Validation accuracy: 0.94690 Epoch 4/5 1344/1343 [==============================] - 20s 15ms/step - loss: 0.0471 Measuring validation accuracy... 
14335/14335 [==============================] - 4s 306us/step Validation accuracy: 0.94637 Epoch 5/5 1344/1343 [==============================] - 20s 15ms/step - loss: 0.0436 Measuring validation accuracy... 14335/14335 [==============================] - 4s 305us/step Validation accuracy: 0.94521 </code> Measure final accuracy on the whole test set._____no_output_____ <code> acc = compute_test_accuracy(model) print("Final accuracy: %.5f"%acc) assert acc>0.94, "Keras has gone on a rampage again, please contact course staff."14335/14335 [==============================] - 4s 305us/step Final accuracy: 0.94521 </code> ### Task I: getting all bidirectional Since we're analyzing a full sequence, it's legal for us to look into future data. A simple way to achieve that is to go both directions at once, making a __bidirectional RNN__. In Keras you can achieve that both manually (using two LSTMs and Concatenate) and by using __`keras.layers.Bidirectional`__. This one works just as `TimeDistributed` we saw before: you wrap it around a recurrent layer (SimpleRNN now and LSTM/GRU later) and it actually creates two layers under the hood. Your first task is to use such a layer for our POS-tagger._____no_output_____ <code> #Define a model that utilizes bidirectional SimpleRNN model = keras.models.Sequential() # <Your code here!> model.add(L.InputLayer([None],dtype='int32')) model.add(L.Embedding(len(all_words),50)) model.add(L.Bidirectional(L.SimpleRNN(64,return_sequences=True))) #add top layer that predicts tag probabilities stepwise_dense = L.Dense(len(all_tags),activation='softmax') stepwise_dense = L.TimeDistributed(stepwise_dense) model.add(stepwise_dense)_____no_output_____model.compile('adam','categorical_crossentropy') model.fit_generator(generate_batches(train_data),len(train_data)/BATCH_SIZE, callbacks=[EvaluateAccuracy()], epochs=5,)Epoch 1/5 1344/1343 [==============================] - 35s 26ms/step - loss: 0.2037 0s - loss: Measuring validation accuracy... 14335/14335 [==============================] - 9s 597us/step Validation accuracy: 0.95688 Epoch 2/5 1344/1343 [==============================] - 35s 26ms/step - loss: 0.0424 Measuring validation accuracy... 14335/14335 [==============================] - 8s 592us/step Validation accuracy: 0.96070 Epoch 3/5 1344/1343 [==============================] - 35s 26ms/step - loss: 0.0349 Measuring validation accuracy... 14335/14335 [==============================] - 9s 599us/step Validation accuracy: 0.96204 Epoch 4/5 1344/1343 [==============================] - 36s 26ms/step - loss: 0.0293 Measuring validation accuracy... 14335/14335 [==============================] - 9s 609us/step Validation accuracy: 0.96227 Epoch 5/5 1344/1343 [==============================] - 36s 27ms/step - loss: 0.0246 Measuring validation accuracy... 14335/14335 [==============================] - 9s 601us/step Validation accuracy: 0.96143 acc = compute_test_accuracy(model) print("\nFinal accuracy: %.5f"%acc) assert acc>0.96, "Bidirectional RNNs are better than this!" print("Well done!")14335/14335 [==============================] - 9s 596us/step Final accuracy: 0.96143 Well done! </code> ### Task II: now go and improve it You guesses it. We're now gonna ask you to come up with a better network. Here's a few tips: * __Go beyond SimpleRNN__: there's `keras.layers.LSTM` and `keras.layers.GRU` * If you want to use a custom recurrent Cell, read [this](https://keras.io/layers/recurrent/#rnn) * You can also use 1D Convolutions (`keras.layers.Conv1D`). 
They are often as good as recurrent layers but with less overfitting. * __Stack more layers__: if there is a common motif to this course it's about stacking layers * You can just add recurrent and 1dconv layers on top of one another and keras will understand it * Just remember that bigger networks may need more epochs to train * __Gradient clipping__: If your training isn't as stable as you'd like, set `clipnorm` in your optimizer. * Which is to say, it's a good idea to watch over your loss curve at each minibatch. Try tensorboard callback or something similar. * __Regularization__: you can apply dropouts as usuall but also in an RNN-specific way * `keras.layers.Dropout` works inbetween RNN layers * Recurrent layers also have `recurrent_dropout` parameter * __More words!__: You can obtain greater performance by expanding your model's input dictionary from 5000 to up to every single word! * Just make sure your model doesn't overfit due to so many parameters. * Combined with regularizers or pre-trained word-vectors this could be really good cuz right now our model is blind to >5% of words. * __The most important advice__: don't cram in everything at once! * If you stuff in a lot of modiffications, some of them almost inevitably gonna be detrimental and you'll never know which of them are. * Try to instead go in small iterations and record experiment results to guide further search. There's some advanced stuff waiting at the end of the notebook. Good hunting!_____no_output_____ <code> #Define a model that utilizes bidirectional SimpleRNN model = keras.models.Sequential() # <Your code here!> model.add(L.InputLayer([None],dtype='int32')) model.add(L.Embedding(len(all_words),50)) model.add(L.Bidirectional(L.GRU(128,return_sequences=True))) model.add(L.Dropout(0.2)) model.add(L.Bidirectional(L.GRU(64,return_sequences=True))) model.add(L.Dropout(0.2)) #add top layer that predicts tag probabilities stepwise_dense = L.Dense(len(all_tags),activation='softmax') stepwise_dense = L.TimeDistributed(stepwise_dense) model.add(stepwise_dense)_____no_output_____#feel free to change anything here model.compile('adam','categorical_crossentropy') model.fit_generator(generate_batches(train_data),len(train_data)/BATCH_SIZE, callbacks=[EvaluateAccuracy()], epochs=5,)Epoch 1/5 1344/1343 [==============================] - 237s 176ms/step - loss: 0.1619 Measuring validation accuracy... 14335/14335 [==============================] - 63s 4ms/step Validation accuracy: 0.95977 Epoch 2/5 1344/1343 [==============================] - 233s 173ms/step - loss: 0.0423 Measuring validation accuracy... 14335/14335 [==============================] - 67s 5ms/step Validation accuracy: 0.96417 Epoch 3/5 1344/1343 [==============================] - 234s 174ms/step - loss: 0.0350 Measuring validation accuracy... 14335/14335 [==============================] - 68s 5ms/step Validation accuracy: 0.96659 Epoch 4/5 1344/1343 [==============================] - 235s 175ms/step - loss: 0.0305 Measuring validation accuracy... 14335/14335 [==============================] - 68s 5ms/step Validation accuracy: 0.96709 Epoch 5/5 1344/1343 [==============================] - 235s 175ms/step - loss: 0.0273 Measuring validation accuracy... 14335/14335 [==============================] - 68s 5ms/step Validation accuracy: 0.96685 acc = compute_test_accuracy(model) print("\nFinal accuracy: %.5f"%acc) if acc >= 0.99: print("Awesome! Sky was the limit and yet you scored even higher!") elif acc >= 0.98: print("Excellent! 
Whatever dark magic you used, it certainly did it's trick.") elif acc >= 0.97: print("Well done! If this was a graded assignment, you would have gotten a 100% score.") elif acc > 0.96: print("Just a few more iterations!") else: print("There seems to be something broken in the model. Unless you know what you're doing, try taking bidirectional RNN and adding one enhancement at a time to see where's the problem.")14335/14335 [==============================] - 67s 5ms/step Final accuracy: 0.96685 Just a few more iterations! </code> ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` #### Some advanced stuff Here there are a few more tips on how to improve training that are a bit trickier to impliment. We strongly suggest that you try them _after_ you've got a good initial model. * __Use pre-trained embeddings__: you can use pre-trained weights from [there](http://ahogrammer.com/2017/01/20/the-list-of-pretrained-word-embeddings/) to kickstart your Embedding layer. * Embedding layer has a matrix W (layer.W) which contains word embeddings for each word in the dictionary. You can just overwrite them with tf.assign. * When using pre-trained embeddings, pay attention to the fact that model's dictionary is different from your own. * You may want to switch trainable=False for embedding layer in first few epochs as in regular fine-tuning. * __More efficient batching__: right now TF spends a lot of time iterating over "0"s * This happens because batch is always padded to the length of a longest sentence * You can speed things up by pre-generating batches of similar lengths and feeding it with randomly chosen pre-generated batch. * This technically breaks the i.i.d. assumption, but it works unless you come up with some insane rnn architectures. * __Structured loss functions__: since we're tagging the whole sequence at once, we might as well train our network to do so. * There's more than one way to do so, but we'd recommend starting with [Conditional Random Fields](http://blog.echen.me/2012/01/03/introduction-to-conditional-random-fields/) * You could plug CRF as a loss function and still train by backprop. There's even some neat tensorflow [implementation](https://www.tensorflow.org/api_guides/python/contrib.crf) for you. _____no_output_____
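As an illustration of the "more efficient batching" tip listed above (a hedged sketch, not part of the original assignment), the generator below groups sentences of similar length before padding, so that each batch needs much less zero padding. It reuses `to_matrix`, `word_to_id`, `tag_to_id`, `all_tags`, `to_categorical` and `BATCH_SIZE` as defined earlier in this notebook.

```python
def generate_sorted_batches(sentences, batch_size=BATCH_SIZE, pad=0):
    """Yield batches of sentences of similar length to reduce zero-padding."""
    order = np.argsort([len(s) for s in sentences])          # short sentences first
    buckets = [order[start:start + batch_size]
               for start in range(0, len(order), batch_size)]
    while True:
        for b in np.random.permutation(len(buckets)):        # shuffle the order of the buckets
            batch_words, batch_tags = zip(*[zip(*sent) for sent in sentences[buckets[b]]])
            batch_words = to_matrix(batch_words, word_to_id, pad=pad)
            batch_tags = to_matrix(batch_tags, tag_to_id, pad=pad)
            batch_tags_1hot = to_categorical(batch_tags, len(all_tags)).reshape(batch_tags.shape + (-1,))
            yield batch_words, batch_tags_1hot
```

Passing `generate_sorted_batches(train_data)` to `model.fit_generator` in place of `generate_batches(train_data)` should make each epoch noticeably faster; as the notebook notes, batches are then no longer i.i.d. samples of the training set, which is usually an acceptable trade-off here.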
{ "repository": "Zuyuf/Advanced-Machine-Learning-Specialization", "path": "Introduction to Deep Learning/Week5/POS-task.ipynb", "matched_keywords": [ "bioinformatics" ], "stars": 252, "size": 30943, "hexsha": "48604b7f6a574f0a80e291267099cbec457c3035", "max_line_length": 416, "avg_line_length": 36.9248210024, "alphanum_fraction": 0.5465856575 }
# Notebook from davidevdt/datamining_jbi030 Path: 10a. neural_networks.ipynb ================================================================================================================= # Lecture Notes: Neural Networks ##### D.Vidotto, Data Mining: JBI030 2019/2020 =================================================================================================================_____no_output_____Artificial Neural Networks, after decades of up and downs in the Machine Learning community, have re-gained popularity in the last 10-15 years under the name of Deep Learning. Their model structure, their flexibility, and their ability to estimate relatively difficult functions of their inputs, make them one of the most powerfool tool available to Data and Computer Scientists. Neural Networks can be specified under several different architectures that allow them to be nowaday's preferred algorithm in fields such as Computer Vision and Speech Recognition. The main downside of Neural Networks is the large number of hyperparameters that need to be tuned to make them work optimally. In this notebook, we will explore in detail Feedforward Neural Networks (also known as Multilayer Perceptron), and mention in the end other two types of architectures become famous recently: Convolutional Neural Networks and Recurrent Neural Networks. For this notebook, students are assumed to have followed the lecture in class and to have understood the following concepts: * layers and neurons * activation functions * Stochastic Gradient Descent * Backpropagation * hyperparameters of a Neural Network In the notebook, we will cover the following topics in more detail: 1. Notation 1. Introduction * from linear models to Neural Networks * Digression: Biological Neural Networks 1. Components of a FNN * layer, neurons, activation function * formal model * adding layers to the network 1. Setting up a FNN * regression * classification (binary and multi-class cases) 1. Training a NN with Backpropagation * Main Idea * Stochastic Gradient Descent * mini-batches,epochs * learning rate, momentum * initialization * Backpropagation * Example: 2 layers with one neuron * Increasing the number of neurons 1. Setting a FNN - Other technical considerations * Number of layers and neurons * learning rate * optimizer * batch-size * activation function * number of epochs and early stopping * $l_2$ and $l_1$ regularization 1. Other Architectures * Convolutional Neural Networks * Recurrent Neural Networks 1. Other Resources This notebook only contains the theoretical concepts underlying Neural Networks. Examples of Neural Networks implemented with Keras and scikit-learn can be found in the notebooks *Neural Networks in Practice*. ## 1. 
Notation In this notebook the following notation will be used: * as usual $j=1,...,p$ denotes the input feature index ($X_j$) and $i=1,...,n$ denotes the observation index ($\mathbf{x}_i$ refers to the vector of observed input for unit $i$) * $l=1,...,L$ will denote the $l$-th (out of $L$) layers of the network, while $k=1,...,K_l$ will denote $k$-th among the $K_l$ neurons of layer $l$ * bold lowercase denotes a vector (for example $\mathbf{w}$ denotes a vector of weights; possible sub- or super-scripts will help to contextualize such vector) * $z$ will generally denote linear combinations (plus biases) of neurons of a previous layer; for example, $z^{(1)}_k$ is the linear combination of neurons of the input layer and the weights that denote the $k$-th neuron in such layer * uppercase denote matrices (for example $\mathbf{W}^{(l)}$ is the matrix that stacks horizontally all the row vectors of weights of layer $l$, while $W^{(l)}_{k,m}$ refers to the $(k,m)$-th element in such matrix -it is the weight that connects neuron $m$ of layer $(l-1)$ to neuron $k$ of layer $l$; $\mathbf{W}^{(l)}_k$ is the $k$-th row vector of this matrix -the set of weights used to define the $k$-th neuron of layer $l$) * a generic activation function is denoted with $g$, while distinct activation functions in one network are denoted with $g_1$, $g_2$, and so on. When a vector of dimension $d$ is given as input to $g$, it is implied that the output is a $d$-dimensional vector, which is the result of element-wise application of $g$: for example, if $d=2$, $g([v_1, v_2]) = [g(v_1), g(v_2)]$ * $\sigma$ indicates the *sigmoid function*, already encountered in the logistic regression lecture * $\hat{y}$ denotes the *output* provided by the network (with a bit of abuse of notation, this can be both a probability or a class in classification tasks) ## 2. Introduction **From Linear Models to Neural Networks.** Recall the function for the prediction of a linear regression model: $$\hat{y}_i= w_0 + \mathbf{w}^T\mathbf{x}_i = w_0 + \sum_{j=1}^{p}w_jx_{ij}$$ Graphically, a linear regression model can be represented as follows: <br> <img src="./img/neural_networks/linear_regression.png" width="250"/> <br> The first layer, called *input layer*, contains the *nodes* representing the features (note that a node with a value of 1 is included, to accommodate for the *bias* $w_0$), and each of the edges represent the weights of the regression model. The *output layer*, finally, contains one node reporting the prediction of the model, which in case of linear regression is a simple linear combination of the input features (plut the bias term). Similarly, we can represent graphically a *logistic regression* model: <br> <img src="./img/neural_networks/logistic_regression.png" width="250"/> <br> The difference with linear regression is that the linear combination of the features is *filtered* through the sigmoid function, which "squashes" its value between 0 and 1, transforming them into a probability (as we will see, the sigmoid function is an example of *activation function* in Neural Networks); this probability is the final output of the model. In turn, the class of the observation can be predicted as a function of this probability. **Neural Networks** (NN) add a level of complexity to the simple linear models, by introducing **hidden layers** (that is, unobserved sets of nodes) between the input and output layers. Each node of a hidden layer is also called **neuron**, to reflect an alleged analogy with the biological neurons. 
Here is a graphical example of a single-layer (i.e., having a unique hidden layer) neural network; to ease the introduction to NNs, we start by showing the case with a unique neuron (for a regression case): <br> <img src="./img/neural_networks/simple_nn.png" width="350"/> <br> There are a couple of points that's worth stressing here: * the neuron $h^{(1)}_1$ denotes the first neuron (subscript) of the first layer (superscript) in the model * the edges, once again, denote the weights (this time, this is implied in the figure) * $h^{(1)}_1$ contains a number, which is computed in two steps: (1) first, the linar combination of the previous layer (the input layer in this case) is calculated; (2) second, such linear combination is filtered through an *activation function* (which can be the sigmoid function, but need not to) whose output is exactly $h^{(1)}_1$ * in turn, $h_1^{(1)}$ is multiplied by its weight and added to a new bias term; the result is filtered again through another activation function (in case of regression, this is simply the *identity function* which does not apply any filter) to return the final output of the network, $\hat{y}$ This type of Neural Networks is known as Feedforward Neural Network -FNN for short- as the information flows forward from the input to the output layer. The mere presence of the activation function is enough to turn the model into a non-linear one, and it makes the neuron capable of capturing the specific information located in a precise region of the feature space (we will see the activation functions more in detail in the next section). Of course, more complex networks allow capturing more complex aspects of the feature space, making in turn also the model more complex. Here's an example of a more complex single-layer network (with five neurons): <br> <img src="./img/neural_networks/single_layer_nn.png" width="350"/> <br> This new network can be described as follows: * the input layer is linked to each neuron in the hidden layer with different linear combinations of the input features. In practice, this means that each $h^{(1)}_k$ for $k=1,...,5$ differ from one another because of the *weights* used to create the linear combinations. This allows the neurons to focus on different aspects of the data, as each of these linear combinations is also given as input to an activation function, whose output is the value of each neuron $h^{(1)}_k$ * this means that we can identify 5 sets of weights used to create the five neurons: $\mathbf{w}^{(1)}_1$ is the set of weights to calculate the linear combination of neuron 1 in layer 1; and so on until $\mathbf{w}^{(1)}_5$. 
The transpose of these vectors (row vectors) can be stacked together to form a $5 \times p$ matrix of weights for the first layer (so that $\mathbf{W}^{(1)}_k = \mathbf{w}^{(1)^T}_k$): <br>$$\mathbf{W}^{(1)} = \begin{bmatrix} \text{---} & \mathbf{w}^{(1)T}_1 & \text{---} \\ \text{---} & \mathbf{w}^{(1)T}_2 & \text{---} \\\text{---} & \mathbf{w}^{(1)T}_3 & \text{---} \\\text{---} & \mathbf{w}^{(1)T}_4 & \text{---} \\\text{---} & \mathbf{w}^{(1)T}_5 & \text{---}\end{bmatrix}$$<br> This, of course, can be easily generalized to the case of $K_1$ neurons in the first layer * The output node is now given by a linear combination of the neurons, which are therefore multiplied by a set of weights contained in the vector $\mathbf{w}^{(2)} = [w_1^{(2)},...,w_5^{(2)}]$ (plust a bias term); the resulting linear combination, once again, can optionally be filtered through an activation function (then again, this step is not necessary in regression problems) In the following figure, we can see what is the effect of manipulating the number of neurons in a single-layer FNN, on the predictions of a regression problem with a single input $X$: <br> <img src="./img/neural_networks/nn_demo_neurons.png" width="800"/> <br> As you can see, by simply increasing the number of neurons in the single-layer FNN, the network was able to retrieve the true sinusoidal signal of the data. This is because single-layer NNs are [universal approximators](https://en.wikipedia.org/wiki/Universal_approximation_theorem): given a sufficiently large number of neurons, they can theoretically approximate any continuous function. (Be careful though: Too many neurons/layers lead to too complex models and therefore, as usual, to overfitting! This is what happens in the bottom right figure, where a network with 30 neurons was specified). It seems clear, therefore, that adding more neurons increases the capacity of the network to focus on different types of relationships in the data (different levels of interactions), and the activation functions allow the model to capture nonlinearities. Further flexibility is given by the fact that not only neurons, but also layers, can be added to the model; here's an example with two layers, the first one containing five neurons, and the second containing three neurons: <br> <img src="./img/neural_networks/two_layer_nn.png" width="450"/> <br> This time we can count three sets of weights: (1) Those "linking" the input layer to the first hidden layer, with the corresponding vectors "stacked" in the matrix $\mathbf{W}^{(1)}$ as just seen. (2) Those linking the first hidden layer to the second hidden layer; similarly to what done for the first layer, we can "stack" the resulting weight vectors in the matrix $\mathbf{W}^{(2)}$; and (3), the set of weights "linking" the second hidden layer to the output layer, which can be denoted with the vector $\mathbf{w}^{(3)}$. (Note that, more in general, for a network with $L$ layers the set of weights for the output is the set number $L+1$). In total, this network includes $p\cdot5 + 5\cdot3 + 3$ weights, to be added to the 5+3+1 bias terms to obtain the total number of parameters present in the model. As we have seen so far, all the linear combinations are given to an activation function before their values are passed to the next layer. Adding this second layer further increases the flexibility of the Network, as each neuron in the second layer can describe and manipulate different types of relationships in the first hidden layer. 
This added depth further increases the expressive ability of the network, which is reflected in the predictions given by the output layer. It is also very hard to interpret the output of a NN, given that the interpretation of each neuron is already very complicated with just a small number of layers. Therefore, NNs are in general mostly used for prediction.

To complete this introduction to Neural Networks, note that the output layer need not contain just one neuron; it can also be composed of several neurons. This can be useful, for example, in regression cases where we want to predict more than one output variable, or in multi-class classification (as we will see), where each output neuron corresponds to a different class of the output (here's an example for a single-layer network):

<br>
<img src="./img/neural_networks/multi_output_nn.png" width="350"/>
<br>

All types of FNNs presented so far are also known as *dense* (or *fully connected*) *networks*, as all the nodes of a layer are connected with all nodes of the next and previous layers. These are in contrast with *sparse networks*, where some of the weights are set equal to 0, losing in this way connections between (some of) the nodes.

----------------------------------------------------------------------------------------------------------------------
#### Digression: Biological Neural Networks

The Artificial Neural Networks we are exploring in this notebook owe their name to the functioning of the biological neural networks of our brain; for a long time (at least until two or three decades ago) it was believed that ANNs were actually the mathematical reflection of what happens within the biological neurons of human and animal brains. Today this is no longer believed, as profound differences have been found (ANNs simplify too much what is really going on in reality). However, it is still worth giving a brief description of biological neurons, to see some similarities between the two paradigms.

<br>
<img src="./img/neural_networks/biological_nn.png" width="400"/>
<br>

A biological neuron consists of a *cell body*, containing the *nucleus* and other cell components, as well as several branching extensions called *dendrites* and one very long extension called the *axon*. At the extremity of the axon there are the *axon terminals* (also known as *synapses*), which are connected to the dendrites or cell bodies of other neurons. The neuron produces electrical impulses, known as *action potentials* or signals; these travel along the axons and make the synapses release chemical signals (the *neurotransmitters*). If a neuron receives a sufficient amount of neurotransmitters, it is activated ("fired"), and the signal can be passed to the next neuron. This process happens within milliseconds in a network of billions of neurons.

The analogy with ANNs resides in the fact that the information flows from one "layer" of neurons to the next, and an "activation function" determines whether a specific neuron is switched on or off (i.e., whether or not it is equal to 0) given the information (the inputs in the input layer) being processed.
<br>
----------------------------------------------------------------------------------------------------------------------

## 3. Components of a FNN

### 3.1 Layers, Neurons, Activation Functions

As we have seen in the introductory section, the three main components of a FNN are the layers, the neurons, and an activation function (here denoted with $g$).
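Putting these three components together, the whole forward computation of a dense FNN can be sketched in a few lines of NumPy (random, untrained weights; the sigmoid and identity activations are just placeholder choices). The next subsections walk through the same computation step by step.

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
identity = lambda z: z

def forward(x, layers):
    """layers: list of (W, b, g) triples -- weight matrix, bias vector and activation of each layer."""
    h = x
    for W, b, g in layers:
        h = g(b + W @ h)          # linear combination plus bias, then activation
    return h

# toy network: 3 inputs -> 5 hidden neurons (sigmoid) -> 1 output (identity)
rng = np.random.default_rng(1)
layers = [(rng.normal(size=(5, 3)), rng.normal(size=5), sigmoid),
          (rng.normal(size=(1, 5)), rng.normal(size=1), identity)]
print(forward(rng.normal(size=3), layers))
```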
In the following figure, we see in more detail how the information is processed in a single-layer NN (with five neurons). This is also the way in which **predictions** are performed when the model, already trained, is shown a new data point.

<br>
<img src="./img/neural_networks/flow_nn.png" width="600"/>
<br>

1. First, the inputs are multiplied by each set of weights of the first layer, given by the rows of $\mathbf{W}^{(1)}$ (denoted with $\mathbf{W}^{(1)}_k$). For each neuron, such products are then added to each other (weighted sum of the inputs) and to the neuron-specific bias terms $(w^{(1)}_{0,1},...,w^{(1)}_{0,5})$. The results lead to the weighted sums (linear combinations) denoted with $z^{(1)}_1,...,z^{(1)}_5$ in the figure
1. The results from the previous point are then processed through an activation function $g_1$; this can be a sigmoid (a.k.a. logistic) function, but other options are possible (as we will see in a while)
1. The result "fired" by the activation function is the value taken on by each neuron; therefore, the $k$-th neuron in the first layer can be written as $h_k^{(1)} = g_1\left(z_k^{(1)}\right) = g_1\left(w_{0,k}^{(1)} + \sum_{j=1}^{p}W^{(1)}_{k,j}x_{ij}\right)$
1. The values contained in the hidden neurons are then combined by means of a new weighted sum; that is, they are multiplied by a new vector of weights $\mathbf{w}^{(2)}$, and added to a new bias term $w_0^{(2)}$; the result is in the new node denoted by $z_1^{(2)}$ in the figure
1. Last, such linear combination is (optionally) passed to an output activation function $g_2$ (notice that such function need not be the same as $g_1$). If no filter is desired (as is customary in regression problems), we can see $g_2$ as an identity function. Thus, we can write the formula for the output, $$\hat{y} = g_2\left(z^{(2)}_1\right) = g_2\left(w_0^{(2)} + \sum_{k=1}^{5}w_k^{(2)}h^{(1)}_k\right)$$

**Types of Activation Functions.** The role of the activation function is to decide which bits of information are relevant for the network when making new predictions. Such functions must be non-linear in nature; otherwise, the network becomes a combination of linear models, which in the end is again a linear model! What makes Neural Networks universal approximators, instead, is the presence of non-linear activation functions. Ideally, the activation functions (at least the ones that define the hidden layers) should somehow compress (*squash*) the value of the linear combinations of their inputs into a bounded range of values, or at least set some of these linear combinations to 0, in which case the neurons are switched off and not used to predict the current unit.

For any type of NN architecture, there are some standard activation functions that are commonly used. Until a few years ago, the most common activation function was the **sigmoid function**; as we have already described above and in the lecture on logistic regression, this function takes an input and transforms it into a value between 0 and 1. This means that when the sigmoid function is used, the values of the neurons cannot be smaller than 0 or larger than 1. For a linear combination (plus bias) of the previous layer $z$, the sigmoid function is defined as:

$$ g = \sigma(z) = \frac{1}{1+e^{-z}}. $$

The bias term can then be seen as (minus) the value that the weighted sum of the inputs must take for the neuron to be equal to 0.5.
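The five steps above, with the sigmoid just defined playing the role of $g_1$ and the identity playing the role of $g_2$, translate almost literally into NumPy (a sketch with random, untrained weights):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 3
x_i = rng.normal(size=p)                    # one observation with p features

W1 = rng.normal(size=(5, p))                # one row of weights per hidden neuron
b1 = rng.normal(size=5)                     # biases w_{0,1}^{(1)}, ..., w_{0,5}^{(1)}
w2 = rng.normal(size=5)                     # weights of the output layer
b2 = rng.normal()                           # bias w_0^{(2)}

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

z1 = b1 + W1 @ x_i                          # step 1: weighted sums z_1^{(1)}, ..., z_5^{(1)}
h1 = sigmoid(z1)                            # steps 2-3: activation g_1 -> hidden neurons h^{(1)}
z2 = b2 + w2 @ h1                           # step 4: weighted sum of the hidden neurons
y_hat = z2                                  # step 5: identity output activation (regression)
print(y_hat)
```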
Another common function is the **tanh** (hyperbolic tangent) function; it is S-shaped, similar to the sigmoid function, but instead of 0 and 1, it ranges between -1 and 1 (in which case the neurons can also be negative, bounded at -1). In fact, the tanh function can be seen as a rescaled sigmoid. In this case, (minus) the bias gives the minimum value that the weighted sum of the inputs must reach in order to yield a neuron equal to or larger than 0. The tanh function is defined as:

$$ g = \tanh(z) = \frac{e^{z} - e^{-z}}{e^{z} + e^{-z}} $$

A last function that has gained popularity in recent years, and that has become the default choice in several Deep Learning libraries, is the **Rectified Linear Unit** (ReLU). It switches off to 0 all the inputs smaller than or equal to 0, while if the input is positive, it simply becomes the identity function (and therefore linear in the input). However, the ReLU function is a non-linear function, as it activates only one part of its input. It is then defined as:

$$ g = ReLU(z) = \left\{ \begin{array}{cl} 0 & \text{if } z \leq 0 \\ z & \text{if } z > 0 \end{array} \right. $$

Therefore, ReLU switches on only those neurons for which the weighted sum of the inputs is larger than minus the bias term. The reason why the rectified linear unit has become so popular is that it works well in practice, and it makes the gradients for algorithms such as gradient descent cheap to compute. The following graph plots the three functions just discussed:

<br>
<img src="./img/neural_networks/activation_functions.png" width="400"/>
<br>

A nice overview and more detailed discussion of these and other activation functions (such as, for example, the leaky ReLU) is given in [this link](https://www.analyticsvidhya.com/blog/2020/01/fundamentals-deep-learning-activation-functions-when-to-use-them/).

### 3.2 Formal model

We now repeat the steps seen previously, but using exclusively mathematical notation.

1. The input values of the $i$-th unit are multiplied by the $k$-th set of weights and added to the $k$-th bias term: $z^{(1)}_k=w_{0,k}^{(1)} + \mathbf{W}^{(1)T}_k\mathbf{x}_i$
1. The $k$-th neuron in the hidden layer is calculated by passing the output of the previous point into the activation function: $h^{(1)}_k = g_1\left(z_k^{(1)}\right)$
1. Finally, the output of the network for unit $i$ is computed by repeating the previous two steps using layer 1 as input: $\hat{y}=g_2\left(z^{(2)}_1\right) =g_2\left(w^{(2)}_{0} + \mathbf{w}^{(2)T}\mathbf{h}^{(1)}\right)$, where $\mathbf{h}^{(1)}$ is the vector that contains all the values of the neurons of layer 1

Note that we can also rewrite steps (1) and (2) as $\mathbf{h}^{(1)} = g_1\left(\mathbf{w}^{(1)}_0 + \mathbf{W}^{(1)}\mathbf{x}_i\right)$, with $\mathbf{w}^{(1)}_0$ the vector containing all the bias terms for the first layer. The following table summarizes the computations for each layer.
| Input Layer | Hidden Layer ${h}^{(1)}$ | Output Layer $\hat{y}$ |
|:-----------:|:-----------:|:-----------:|
|$\mathbf{x}_i$| $$g_1\left( \mathbf{w}^{(1)}_0 + \mathbf{W}^{(1)}\mathbf{x}_i\right)$$ | $$g_2\left(w^{(2)}_{0} + \mathbf{w}^{(2)T}\mathbf{h}^{(1)}\right)$$ |

Notice, last, that by "wrapping" the first layer inside the output layer, the output can be written as a function of the input:

$$\hat{y} = g_2\left(w^{(2)}_{0} + \mathbf{w}^{(2)T}\left[g_1\left( \mathbf{w}^{(1)}_0 + \mathbf{W}^{(1)}\mathbf{x}_i\right)\right]\right)$$

### 3.3 Adding Layers to the Network

When layers are added to the network, the principle behind the computations seen so far is the same as in the single-layer case: that is, we compute as many linear combinations of the neurons in the previous layer as there are neurons in the next layer, filter them through the activation function, and move to the next layer. The following table shows an example with two layers; $\mathbf{W}^{(2)}$ and $\mathbf{w}^{(2)}_0$ represent the matrix of weights and vector of biases for the second hidden layer, while $\mathbf{w}^{(3)}$ and $\mathbf{w}^{(3)}_0$ represent the vector of weights and the bias of the output layer. It is assumed that the two hidden layers use the same activation function $g_1$ (this is not necessary, though). The output layer, instead, uses a different function $g_2$ (the reason for this difference will become clearer in the next section).

| Input Layer | Hidden Layer 1: ${h}^{(1)}$ | Hidden Layer 2: ${h}^{(2)}$ | Output Layer $\hat{y}$ |
|:-----------:|:-----------:|:-----------:|:-----------:|
|$\mathbf{x}_i$| $$g_1\left( \mathbf{w}^{(1)}_0 + \mathbf{W}^{(1)}\mathbf{x}_i\right)$$ |$$g_1\left( \mathbf{w}^{(2)}_0 + \mathbf{W}^{(2)}\mathbf{h}^{(1)}\right)$$ | $$g_2\left(w^{(3)}_{0} + \mathbf{w}^{(3)T}\mathbf{h}^{(2)}\right)$$ |

And writing the output as a function of the input:

$$\hat{y} = g_2\left(w^{(3)}_{0} + \mathbf{w}^{(3)T}\left\{g_1\left( \mathbf{w}^{(2)}_0 + \mathbf{W}^{(2)}\left[g_1\left( \mathbf{w}^{(1)}_0 + \mathbf{W}^{(1)}\mathbf{x}_i\right)\right]\right)\right\}\right)$$

More generally, in a network with $L$ layers, the $l$-th layer can be expressed as a function of the previous ($l-1$) layer...

$$g_1\left(\mathbf{w}^{(l)}_0 + \mathbf{W}^{(l)}\mathbf{h}^{(l-1)}\right)$$

...and in turn it becomes the input of the next ($l+1$) layer. The output layer, in a network with $L$ layers, is a function of the ($L+1$)-th set of weights and biases. If the output layer is also composed of multiple neurons (we have seen an example in the introduction), then the ($L+1$)-th set of weights becomes a matrix (with $K_{L+1}$ rows, where $K_{L+1}$ is the number of neurons in the output layer), denoted by $\mathbf{W}^{(L+1)}$. The biases of the output, in turn, become the components of a $K_{L+1}$-dimensional vector, $\mathbf{w}^{(L+1)}_0$.
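As a rough sketch of these stacked computations, the following toy NumPy code composes an arbitrary number of layers in a loop, using tanh for the hidden activations $g_1$ and the identity for $g_2$; the layer sizes and weights are made-up values, only meant to illustrate the composition of functions.

```python
import numpy as np

rng = np.random.default_rng(1)
sizes = [4, 8, 8, 1]   # p inputs, two hidden layers with 8 neurons, 1 output neuron

# one (weights, biases) pair per layer transition
params = [(0.1 * rng.normal(size=(m, n)), np.zeros(m))
          for n, m in zip(sizes[:-1], sizes[1:])]

def forward(x, params):
    h = x
    for i, (W, w0) in enumerate(params):
        z = w0 + W @ h
        # hidden layers use g1 (here tanh); the last layer uses the identity as g2
        h = np.tanh(z) if i < len(params) - 1 else z
    return h

print(forward(rng.normal(size=sizes[0]), params))
```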
## 4. Setting Up a Neural Network

Finding a good setting for a NN is not easy, as it requires tuning several hyperparameters (more on this in Section 6 below). Nevertheless, depending on whether we are in a regression or in a (binary/multi-class) classification problem, there are some standard rules that can be followed when deciding on the architecture of the network. In particular, depending on the type of prediction we want to perform, such architectures differ in the number of neurons in the output layer, the activation function for the output layer, and the **loss function** used to find the optimal weights of the network during the training stage (this is done with an algorithm called **backpropagation**, as we will see in Section 5). In this section, we will discuss how to set such components; in Section 6, we will come back to the problem of setting the number of neurons and layers and the activation functions in the hidden layers.

### 4.1 Regression

In regression, the typical architecture is the one we have discussed the most so far. That is, we only need one neuron in the output layer, and the identity function ($g_2(z) = z$) can be used as the activation of the output. For the loss function $J$ to be used during optimization (this function is denoted with $L$ in other notebooks), a typical choice is the mean squared error:

$$J = MSE(y, \hat{y}) = \frac{1}{n}\sum_{i=1}^n(y_i-\hat{y}_i)^2$$

There can be some exceptions to these rules. For example, if we want to constrain the output to be positive, we can use the ReLU as the activation function for the output. Otherwise, if we want to bound the output to lie within the (0,1) interval, we can use the sigmoid function; if we want to restrict it to the (-1,1) interval, we can use the tanh function. Last, if we want to predict more than one target $y$, we can specify as many neurons in the output layer as there are targets.

### 4.2 Classification

**Binary Classification**. In binary classification, we can arbitrarily set the output labels equal to 0 (negative class) or 1 (positive class). Here, we also use one neuron for the output layer. This neuron denotes the probability $\Pr(y)$ of the positive class. Once such probability is computed, the class can be predicted based on a threshold (usually 0.5).

<br> <img src="./img/neural_networks/binary_nn.png" width="300"/> <br>

Therefore, in order to express the output as a probability, the activation function $g_2$ for the output layer is the sigmoid function. The loss function, instead, is the [*cross-entropy*](https://en.wikipedia.org/wiki/Cross_entropy); this was already introduced in the multi-class case of logistic regression, and we restate it here ($\hat{y}$ represents the predicted probability of the positive class):

$$J = H(y,\hat{y}) = -\frac{1}{n}\sum_{i=1}^{n}\left(y_i log(\hat{y}_i)+(1-y_i)log(1-\hat{y}_i)\right)$$

Notice that this function is minimized (equal to 0) when $y_i=\hat{y}_i$ (remember: $y_i$ can be equal to either 0 or 1), and maximal when predictions and labels do not match. Thus, since it measures the discrepancy between predictions and output, it is a valid loss (cost) function. Alternatively, the mean squared error (MSE) can also be used in binary classification (in which case the squared difference between the value of the observed class and its estimated probability is considered).

**Multi-Class Classification**. Here, we have multiple classes, so that $y_i$ can take on $C$ possible values, from 1 to $C$. In this case, it is customary to one-hot encode the labels; for example, if for the $i$-th unit we observe $y_i=c$, its output will be converted to a vector $y_i=[0,0,...,1,...,0]$, where the '1' is in the position of the $c$-th class. At this point, the network is set with as many neurons in the output layer as the number of classes ($C$).
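A small sketch of the one-hot encoding step, with made-up labels and $C=3$ classes (so the output layer would get three neurons):

```python
import numpy as np

y = np.array([2, 0, 1, 2])   # observed classes for four units, with C = 3
C = 3
Y_onehot = np.eye(C)[y]      # row i is the one-hot vector for y_i
print(Y_onehot)
# [[0. 0. 1.]
#  [1. 0. 0.]
#  [0. 1. 0.]
#  [0. 0. 1.]]
```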
Each value predicted by the output, $\hat{y}_1,...,\hat{y}_C$, can be considered as the probability of class $c$; thus, in this case, the activation function $g_2$ for the output is the *softmax* function. We also encountered this function when discussing the multi-class case of logistic regression. If we denote with $z_c$ the value $w^{(L+1)}_{0,c} + \mathbf{W}^{(L+1)}_c\mathbf{h}^{(L)}$ (where, as usual, $w^{(L+1)}_{0,c}$ is the $c$-th bias for the output layer, $\mathbf{W}^{(L+1)}_c$ is the $c$-th row of the weights matrix for the output layer, and $\mathbf{h}^{(L)}$ contains the neurons of the last hidden layer), the $c$-th neuron in the output layer is then estimated as: $$ softmax(z_c) = \frac{exp(z_c)}{\sum_{c'=1}^{C} exp(z_{c'})}. $$ In practice, the softmax function is the multi-class version of the sigmoid, and calculates the probabilities for each of the output neurons. Once again, the loss function is the cross-entropy, extended to the case with $C$ classes: $$J = H(y,\hat{y}) = -\frac{1}{n}\sum_{i=1}^{n}\sum_{c=1}^{C}\left(y_{i,c} log(\hat{y}_{i,c})\right).$$ Alternatively, as in the binary case, the MSE can also be used for the multi-class case (in which case the interpretation of the error is analogous to the one described for the binary case; the difference is that the MSE is now calculated for each node in the output layer).

**Summary.** The following table summarizes the standard choices for the architecture of a NN in the three cases just outlined.

| | Regression | Binary Class. | Multi-Class Class.|
|:---:|:---:|:---:|:---:|
| **# Output Neurons** | 1 | 1 | $C$ |
| **Output Layer Activation** | None (Identity) | Logistic | Softmax |
| **Loss Function** | MSE | Cross-Entropy | Cross-Entropy |

## 5. Training a NN with Backpropagation

### 5.1 Main Idea

Training a NN consists of finding optimal values for the weights and biases, in such a way that the loss functions described in the previous section reach a minimum. Like several algorithms studied in this course, NN's also use Gradient Descent to iteratively estimate the parameters. If we store all the weights and biases of the model in a *tensor* (i.e., a multi-dimensional array) $\mathbf{W}$, we can describe the typical Gradient Descent update step: $$\mathbf{W}_{new} \leftarrow \mathbf{W}_{old} - \eta \nabla J(\mathbf{W}_{old})$$ where $J(\cdot)$ corresponds to any of the loss functions described in the previous section, expressed as a function of the parameters of the network. This version of Gradient Descent, in which we use all $n$ observations in the training set to perform an update step, is called *batch gradient descent*.

The optimization problem to solve when training a NN is highly non-convex (the number of parameters to optimize grows quickly as we add even a few neurons to the network), and the function to optimize will contain several local minima, making it practically impossible to find the global optimum. In practice, what is done is to accept convergence to a local minimum, as long as it is a wide (not necessarily deep) point in the parameter space that allows for good generalization of the model predictions (these types of regions generally do not overfit the training dataset). Batch gradient descent, from this point of view, is problematic: besides being slow to converge (especially with large training datasets), it might easily get stuck in some narrow local minimum which might be too dataset-specific, jeopardizing in this way model performance.
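As a toy illustration of the update rule above (not of a full network), the following sketch runs batch Gradient Descent on the one-parameter loss $J(w)=(w-3)^2$, whose gradient is $2(w-3)$; the learning rate is an arbitrary choice.

```python
w, eta = 0.0, 0.1            # initial parameter and learning rate
for step in range(100):
    grad = 2.0 * (w - 3.0)   # gradient of J(w) = (w - 3)^2 at the current w
    w = w - eta * grad       # update: w_new <- w_old - eta * grad
print(w)                     # approaches the minimizer w = 3
```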
For the reasons just discussed, a special version of Gradient Descent, called *Stochastic* (or mini-batch) *Gradient Descent*, has become popular in the Deep Learning community.

### 5.2 Stochastic Gradient Descent

**Mini-Batches and Epochs**. In mini-batch Gradient Descent, instead of calculating the gradient of the cost function on the whole training dataset before performing the update, the training data is randomly split into *mini-batches* (simply subsets of units of the training dataset), and the gradient is iteratively evaluated for each of them. The update of the model parameters is executed after a mini-batch has been evaluated, rather than after the whole training dataset. One iteration of the mini-batch GD algorithm (which occurs after all mini-batches have been evaluated, so that the algorithm has passed once through the whole training dataset) is called an **epoch**. The size of the mini-batches (called "batch size"), $n_b$, is another hyperparameter that can be tuned by the user; typical choices for $n_b$ are 32, 64, or 128, as in this way it is possible to exploit the capacity of CPU's and GPU's to perform parallel matrix and vector multiplications. When using mini-batch GD, the loss functions that are used are still the ones described in Section 4, with the difference that they are calculated for each batch rather than for the whole dataset (therefore, $n$ in the loss functions should be replaced by $n_b$). In the special case of $n_b=1$, the algorithm is called "Stochastic Gradient Descent", SGD (although, in general, libraries and packages refer to SGD also when using mini-batch GD). In this figure, you can see the different paths taken by batch GD, mini-batch GD, and stochastic GD when optimizing a two-parameter function:

<br> <img src="./img/neural_networks/gradient_descent_convergence.png" width="400"/> <br>

As you can see, batch GD reaches the minimum in a more stable and efficient way, while SGD is much noisier and, when it gets close to the minimum, it keeps "dancing" around it without reaching it exactly. Mini-batch GD, instead, is somewhere in the middle, being noisier than batch GD but more stable than SGD. However, the advantage of mini-batch methods (including SGD) is that they require much less data to update the parameters of the network; such updates, therefore, occur more often than in standard GD. Furthermore, given the noise they introduce, they are more likely to escape from a local minimum in a narrow "hole", which can lead the algorithm to a better region of the parameter space (in general a wider local minimum). In contrast, batch GD does not have this property, and it is "doomed" to converge to the first local minimum it finds, no matter how "deep" it is.
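The following sketch shows one possible way to split a training set of $n$ units into mini-batches of size $n_b$ at every epoch (indices only; the actual loss and update computations are omitted). The values of $n$ and $n_b$ are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_b = 1000, 64                        # training-set size and batch size

for epoch in range(3):                   # one epoch = one full pass over the data
    idx = rng.permutation(n)             # reshuffle the units at every epoch
    for start in range(0, n, n_b):
        batch = idx[start:start + n_b]   # indices of the current mini-batch
        # ...evaluate the loss and its gradient on `batch`, then update the weights...
    print(f"epoch {epoch}: {int(np.ceil(n / n_b))} mini-batches processed")
```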
**Learning Rate and Momentum**. Among the disadvantages of SGD and mini-batch GD are the new hyperparameter to be tuned ($n_b$) and a higher sensitivity to the **learning rate** $\eta$ of the update step. A way to overcome this issue is to make the learning rate a function of the iteration step $t$, so that it changes at every iteration. What is usually done is to start with a large value of $\eta$ when the algorithm is far from the minima, and to decrease it according to a selected "schedule" as the number of iterations $t$ grows and the algorithm gets closer to a minimum, so that it becomes more precise. Examples of schedules are *power scheduling* ($\eta(t) = \eta_0/(1+t/s)^r$, where $\eta_0$ is the starting value of $\eta$, $s$ is a hyperparameter that controls the decay as the iteration number $t$ grows, and $r$ is another hyperparameter), and *exponential scheduling* ($\eta(t) = \eta_0 \cdot 0.1^{t/s}$). Of course, other schedules are possible.

Another way to accelerate learning is to use *momentum*; momentum is a way to take into account the directions indicated by the gradients in the previous iterations, so that the algorithm does not depend only on the direction of the last iteration and moves robustly towards the compound main direction indicated by previous steps. This can be beneficial in the presence of long curvatures in the objective function, or of noisy gradients. Convergence is accelerated and the algorithm is more likely to escape bad local optima. There are two update steps in GD with momentum:

$$1.\ \mathbf{m}_{new} \leftarrow \beta\mathbf{m}_{old} - \eta\nabla J(\mathbf{W}_{old})$$

$$2.\ \mathbf{W}_{new} \leftarrow \mathbf{W}_{old} + \mathbf{m}_{new}$$

Here, $\mathbf{m}$ is the momentum vector (where the accumulated past gradients are stored), while $\beta \in [0,1)$ is the *momentum* hyperparameter. The larger $\beta$, the more the previous gradients affect the direction of the new update. Typical values of $\beta$ are 0.9 and 0.99. The effective learning rate now depends on $\beta$; GD with momentum can be up to $\frac{1}{1-\beta}$ times faster than GD without momentum. With $\beta = 0.9$, momentum can become 10 times faster than GD methods without it. The following figure represents a possible sequence of GD iterations, with and without momentum. Note how the steps are more "stable" in the presence of momentum, and how the algorithm requires fewer iterations to get close to the minimum.

<br> <img src="./img/neural_networks/momentum.png" width="400"/> <br>

**Data Scaling**. Although not strictly necessary for the neural network model, it is a good idea to scale the continuous features of the dataset before training. This can help to speed up the convergence of Gradient Descent. In fact, with rescaled features the cost function looks more symmetric, while with un-scaled inputs the cost function might be more elongated and squished towards some specific direction. The latter case is known to create convergence problems for GD methods, and a smaller learning rate might be required to reach the local minimum. Conversely, in a nicely symmetric cost function all directions share the same type of curvature, and GD can potentially find the direction of a good local minimum with the help of a larger learning rate and, therefore, within a smaller number of iterations.

**Initialization**. In order to perform any type of GD, all weights and biases need to be initialized. While there are initialization schemes specifically designed to speed up convergence, for us it is enough to know the heuristic that the weights can be initialized with draws from a random distribution (uniform or Gaussian), set in such a way that the values are very small and possibly close to 0. This avoids obtaining very large values of the weights, and consequently exploding values for the neurons in the first iterations, which would lead to unstable results. Furthermore, too large weights cause activation functions such as the sigmoid or tanh to saturate (i.e., they reach their extreme values) in regions where the function is flat, and therefore the gradient is basically 0 (which prevents learning).
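To wrap up this section, here is a toy sketch that combines a small random initialization, a power learning-rate schedule, and the two momentum update steps, again on the one-parameter loss $J(w)=(w-3)^2$; all hyperparameter values are illustrative choices, not prescriptions.

```python
import numpy as np

rng = np.random.default_rng(3)
eta0, s, r, beta = 0.5, 10.0, 1.0, 0.9   # schedule and momentum hyperparameters
w = 0.01 * rng.normal()                  # small random initialization
m = 0.0                                  # momentum term

for t in range(100):
    eta = eta0 / (1.0 + t / s) ** r      # power scheduling for the learning rate
    grad = 2.0 * (w - 3.0)               # gradient of J(w) = (w - 3)^2
    m = beta * m - eta * grad            # step 1: update the momentum
    w = w + m                            # step 2: update the parameter
print(w)                                 # close to the minimizer w = 3
```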
### 5.3 Backpropagation

Backpropagation is a way to compute the gradients of a neural network model, so that they can be exploited by GD methods. Because, as we have seen in Section 3, the outputs of a Neural Network are expressed in terms of *function compositions* ($h(x) = g(f(x))\ $), backpropagation makes intensive use of the [chain rule](https://en.wikipedia.org/wiki/Chain_rule) to compute the gradients. In this section we will see how Backpropagation works with a stochastic gradient descent step ($n_b=1$), to ease the presentation. The reasoning can easily be extended to the $n_b>1$ case. In particular, Backpropagation with SGD works as follows:

1. consider the next unit $i$ of the dataset, and given its inputs perform a feed-forward step (as seen in Section 3), so that you can calculate the loss for that unit
1. use the chain rule to compute the gradient of such loss w.r.t. the weights in the last hidden layer $\mathbf{h}^{(L)}$
1. next, compute the gradient of such loss w.r.t. the weights of the last-but-one hidden layer $\mathbf{h}^{(L-1)}$ (once again with the chain rule)
1. continue until the gradients w.r.t. the weights of all hidden layers (from $L$ to $1$) are computed
1. perform the SGD update with the gradient just found
1. repeat steps 1-5 until convergence

"Backpropagation" owes its name to the fact that, after the forward step 1, which goes from the input to the output layer, the error is propagated back from the last to the first hidden layer, in order to obtain the gradient values. As done above, we will refer to a linear combination of nodes (plus bias) with $z$. Therefore, from the input layer to the first hidden layer, $z^{(1)}_k$ refers to $w^{(1)}_{0,k} + \mathbf{W}^{(1)}_k\mathbf{x}_i$, while for the second hidden layer, $z^{(2)}_k = w^{(2)}_{0,k} + \mathbf{W}^{(2)}_k\mathbf{h}^{(1)}$. The $k$-th neuron in the first layer is then $h^{(1)}_k = g_1\left(z^{(1)}_k\right)$, with $g_1$ the chosen activation for the hidden layers. Similarly, the $k$-th neuron in the second hidden layer can be expressed as $h^{(2)}_k = g_1\left(z^{(2)}_k\right)$. The activation function chosen for the output layer will be denoted by $g_2$, and the $k$-th neuron of the output layer in a two-layer network is $\hat{y} = g_2\left(z^{(3)}_k\right)$.

#### Example: two layers with one neuron

Here, we see how backpropagation works in a simple case, with two hidden layers and one neuron per layer (in the next section we extend the network to more neurons). The network is represented in this figure, where the weights and the linear combinations $z$ are also reported for a better understanding of the algorithm:

<br> <img src="./img/neural_networks/backprop_1.png" width="700"/> <br>

We consider an example with the squared error as loss function. As we consider only one unit at a time, the loss for each iteration of SGD (divided by two to ease the computation of the derivatives) is: $$J = \frac{1}{2}(y_i - \hat{y}_i)^2.$$ The loss can be rewritten as a function of the elements in the second hidden layer: $$ J = \frac{1}{2}\left(y_i - g_2(z^{(3)}_1)\right)^2 = \frac{1}{2}\left(y_i - g_2(w_0^{(3)} + w_1^{(3)}h_1^{(2)})\right)^2. $$ Now, let's assume we have already performed the first feed-forward step, and the loss has been calculated. Backpropagation starts by finding the derivatives of the loss w.r.t. $w_0^{(3)}$ and $w_1^{(3)}$.
With the help of the chain rule:

$$\frac{\partial J}{\partial w_1^{(3)}} = \frac{\partial J}{\partial \hat{y}} \frac{\partial \hat{y}}{\partial z_1^{(3)}}\frac{\partial z_1^{(3)}}{\partial w^{(3)}_1} = \left(\hat{y}_i-y_i\right)\cdot g_2'(z_1^{(3)}) \cdot h^{(2)}_1 $$

where $g_2'(z_1^{(3)})$ is the derivative of the activation function of the output layer (for example, if $g_2$ is the sigmoid function $\sigma(z)$, its [derivative](https://towardsdatascience.com/derivative-of-the-sigmoid-function-536880cf918e) is $\sigma(z)(1-\sigma(z))$). In a similar fashion, we can find the partial derivative w.r.t. $w_0^{(3)}$:

$$\frac{\partial J}{\partial w_0^{(3)}} = \frac{\partial J}{\partial \hat{y}} \frac{\partial \hat{y}}{\partial z_1^{(3)}}\frac{\partial z_1^{(3)}}{\partial w^{(3)}_0} = \left(\hat{y}_i-y_i\right)\cdot g_2'(z_1^{(3)}) \cdot 1 $$

**Note**. If we use the identity function for $g_2$ (so that no activation is used for the output), $$\frac{\partial \hat{y}}{\partial z_1^{(3)}} = 1$$

We have found the formulas to compute the first two components of our gradient vector. Let's continue the back-propagation, and find the derivatives w.r.t. $w^{(2)}_1$ and $w^{(2)}_0$. In order to do this, we first compute the derivative of the loss w.r.t. the hidden neuron $h^{(2)}_1$, as it will simplify the computations in the subsequent steps:

$$\frac{\partial J}{\partial h_1^{(2)}} = \frac{\partial J}{\partial \hat{y}} \frac{\partial \hat{y}}{\partial z_1^{(3)}} \frac{\partial z_1^{(3)}}{\partial h^{(2)}_1} = \left(\hat{y}_i-y_i\right)\cdot g_2'(z_1^{(3)}) \cdot w^{(3)}_1 $$

The first two components of this last derivative, $\frac{\partial J}{\partial \hat{y}}$ and $\frac{\partial \hat{y}}{\partial z_1^{(3)}}$, were already computed in the previous step, and therefore $\frac{\partial J}{\partial h_1^{(2)}}$ can be calculated efficiently. Now, the derivative w.r.t. $w^{(2)}_1$ is:

$$\frac{\partial J}{\partial w_1^{(2)}} = \frac{\partial J}{\partial \hat{y}} \frac{\partial \hat{y}}{\partial z_1^{(3)}} \frac{\partial z_1^{(3)}}{\partial h^{(2)}_1} \frac{\partial h^{(2)}_1}{\partial z^{(2)}_1} \frac{\partial z^{(2)}_1}{\partial w^{(2)}_1} $$

This expression looks quite formidable, but if you look at it more closely, you can see that most of its components were already calculated in previous steps. In fact, it can be rewritten as:

$$\frac{\partial J}{\partial w_1^{(2)}} = \frac{\partial J}{\partial h_1^{(2)}} \frac{\partial h^{(2)}_1}{\partial z^{(2)}_1} \frac{\partial z^{(2)}_1}{\partial w^{(2)}_1} = \frac{\partial J}{\partial h_1^{(2)}} \cdot g_1'(z_1^{(2)}) \cdot h^{(1)}_1 $$

where $\frac{\partial J}{\partial h_1^{(2)}}$ is precisely the quantity computed in the previous step, and $g_1'$ is the derivative of the activation function for the hidden layers. Similarly, the gradient w.r.t. $w_0^{(2)}$ is:

$$\frac{\partial J}{\partial w_0^{(2)}} = \frac{\partial J}{\partial h_1^{(2)}} \frac{\partial h^{(2)}_1}{\partial z^{(2)}_1} \frac{\partial z^{(2)}_1}{\partial w^{(2)}_0} = \frac{\partial J}{\partial h_1^{(2)}} \cdot g_1'(z_1^{(2)}) \cdot 1 $$

By now, you should have noticed the pattern: first, the derivative w.r.t. the neurons of a given layer is computed and stored; second, the derivatives w.r.t. the weights of that layer are then calculated. With this rule in mind, it is straightforward to find the gradients for the bias and weights that connect the input and the first hidden layer. First, the derivative w.r.t.
the first hidden layer is:

$$\frac{\partial J}{\partial h_1^{(1)}} = \frac{\partial J}{\partial h_1^{(2)}} \frac{\partial h^{(2)}_1}{\partial z^{(2)}_1} \frac{\partial z^{(2)}_1}{\partial h^{(1)}_1} = \frac{\partial J}{\partial h_1^{(2)}} \cdot g_1'(z_1^{(2)}) \cdot w^{(2)}_1 $$

and, in turn, the partial derivative w.r.t. the $j$-th weight of the first hidden layer is:

$$\frac{\partial J}{\partial w_{1,j}^{(1)}} = \frac{\partial J}{\partial h_1^{(1)}} \frac{\partial h^{(1)}_1}{\partial z^{(1)}_1} \frac{\partial z^{(1)}_1}{\partial w^{(1)}_{1,j}} = \frac{\partial J}{\partial h_1^{(1)}} \cdot g_1'(z_1^{(1)}) \cdot x_{ij} $$

for $j=1,...,p$. The gradient w.r.t. the remaining bias term is:

$$\frac{\partial J}{\partial w_0^{(1)}} = \frac{\partial J}{\partial h_1^{(1)}} \frac{\partial h^{(1)}_1}{\partial z^{(1)}_1} \frac{\partial z^{(1)}_1}{\partial w^{(1)}_0} = \frac{\partial J}{\partial h_1^{(1)}} \cdot g_1'(z_1^{(1)}) \cdot 1 $$

And that's it! We have the formulas for the gradients of all the weights in the model. As you can see, by storing the information about neurons and weights in the upper levels of the network, backpropagation makes it possible to efficiently calculate the derivatives w.r.t. neurons and weights in the lower layers. Now, it is just a matter of plugging the values found for the gradient into the parameter update step, and the full SGD iteration is complete.

#### Increasing the number of neurons

When multiple neurons are present in the layers, the principle seen in the previous section is the same. Let's suppose that a generic layer $l$ (in a network with $L$ layers in total) has $K_l$ neurons, while layer $l+1$ at the next level (already explored by backpropagation) has $K_{l+1}$ neurons. There are only a couple of differences with respect to the algorithm seen above. First, the gradient for neuron $h^{(l)}_k$ (with $k\ \in \{1,...,K_l\}$) is now calculated through each neuron at the level above, and it is taken to be the sum over all such neurons (if $l=L$, level $l+1$ corresponds to the output, $\hat{y}$): $$\frac{\partial J}{\partial h_k^{(l)}} = \sum_{m=1}^{K_{l+1}}\frac{\partial J}{\partial h_m^{(l+1)}} \frac{\partial h_m^{(l+1)}}{\partial z_m^{(l+1)}} \frac{\partial z_m^{(l+1)}}{\partial h_k^{(l)}} $$ Second, for a generic weight $W^{(l)}_{k,r}$, which connects neuron $r$ of layer $(l-1)$ with neuron $k$ of layer $l$ (the element $(k,r)$ of the weights matrix $\mathbf{W}^{(l)}$), the derivative is: $$ \frac{\partial J}{\partial W_{k,r}^{(l)}} = \frac{\partial J}{\partial h_k^{(l)}} \frac{\partial h_k^{(l)}}{\partial z_k^{(l)}} \frac{\partial z_k^{(l)}}{\partial W_{k,r}^{(l)}}. $$ When multiple neurons are present in the output layer, the principle is the same: we compute the gradients through each of the output neurons, and perform the sum over such neurons.
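The chain-rule formulas of the two-layer, one-neuron example can be checked numerically. The sketch below runs a forward pass with toy weights (sigmoid $g_1$, identity $g_2$), applies the backpropagation formula for $\partial J/\partial w_1^{(3)}$, and compares it with a finite-difference approximation; all numbers are made up for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x, y = np.array([0.5, -1.0]), 2.0            # one unit (p = 2) and its target
w1, w1_0 = np.array([0.3, -0.2]), 0.1        # input -> hidden layer 1
w2, w2_0 = 0.4, -0.3                         # hidden layer 1 -> hidden layer 2
w3, w3_0 = 0.7, 0.2                          # hidden layer 2 -> output

def forward(w3_):
    z1 = w1_0 + w1 @ x;  h1 = sigmoid(z1)    # first hidden neuron
    z2 = w2_0 + w2 * h1; h2 = sigmoid(z2)    # second hidden neuron
    y_hat = w3_0 + w3_ * h2                  # identity output activation (g2' = 1)
    return h2, y_hat

h2, y_hat = forward(w3)
loss = 0.5 * (y - y_hat) ** 2

dJ_dw3 = (y_hat - y) * 1.0 * h2              # backpropagation formula from the text

eps = 1e-6                                   # finite-difference check of dJ/dw3
loss_eps = 0.5 * (y - forward(w3 + eps)[1]) ** 2
print(dJ_dw3, (loss_eps - loss) / eps)       # the two values should (nearly) coincide
```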
## 6. Setting Up a FNN - Other technical considerations

NN's are among the most difficult algorithms to tune, as they are very sensitive to the values of their (several) hyperparameters. In this section, we look in more detail at other practices, advice, and tricks to keep in mind when setting up and training a FNN. In general, it is possible to use Cross-Validation to tune a Neural Network; however, this can be a time-consuming step, and what is done in practice (especially if the dataset is very large) is to use a validation set to check the network performance after (and during!) training.

**Number of layers and neurons**. Since single-layer networks are universal approximators, in most problems shallow networks (i.e., networks with a small number of layers) should be able to perform well. However, despite the fact that single-layer networks can approximate any function, they might need an exponentially growing width (= number of neurons) to reach optimal performance. On the other hand, deeper networks that are less wide can be more efficient to compute and can get closer to optimal performance. In practice, one or two layers should be able to solve most (90%) of the problems; if optimal performance has not been reached yet, you can try to make the network deeper (by gradually increasing the number of layers). Regarding the number of neurons, a common structure given to FNN's (especially in the past) is a "pyramid", in which the number of neurons is decreased at each layer. With the newly developed methods to regularize NN's, more recent networks are built with the same number of neurons at each hidden layer. This also decreases the number of hyperparameters to tune. As with the number of layers, you can start with a small number of neurons and try to increase it to see whether performance improves. Another rule of thumb is to use a number of neurons for the hidden layers somewhere in between the number of neurons in the input layer (i.e., the number of features) and the number of neurons in the output layer (for example, by using the mean of these two numbers). Of course, both the depth and the width of the network must be chosen carefully; too few layers and neurons cause underfitting, while too deep and wide networks easily overfit your dataset. Furthermore, adding just a few neurons or layers to the network quickly increases the number of parameters in the network, slowing down algorithms such as Gradient Descent. You can find a good discussion on the number of layers and neurons to use in a FNN in this [stackoverflow page](https://stats.stackexchange.com/questions/181/how-to-choose-the-number-of-hidden-layers-and-nodes-in-a-feedforward-neural-netw).

**Learning Rate**. Gradient Descent methods are very sensitive to the value chosen for the learning rate. In general, there is a tradeoff between the speed of the algorithm and the precision of the final results: larger learning rates get close to a local minimum faster, but with the risk of never converging, while smaller rates may take too long to converge. We have already discussed in the previous section possible learning schedules, as well as the momentum technique, which allow the learning rate to adapt and change at each iteration. Alternatively, a rule of thumb is to start with a very small learning rate (e.g., $10^{-5}$) and increase it gradually to check how model performance changes with it. When the learning rate becomes too large, generalization performance should drop. A good discussion on possible strategies to set the learning rate and other methods is given in the blog [machinelearningmastery.com](https://machinelearningmastery.com/learning-rate-for-deep-learning-neural-networks/).

**Optimizers**. Besides (batch/mini-batch/stochastic) Gradient Descent methods, a number of other optimizers have been developed that are suitable for neural nets. Choosing a good optimizer may not only speed up computations, but also help reach, in a more efficient manner, a local minimum that generalizes well to new data.
While a nice overview of some of these methods is reported in this [medium blog](https://medium.com/datadriveninvestor/overview-of-different-optimizers-for-neural-networks-e0ed119440c3), it is worth mentioning here the ADAM optimizer. ADAM (which stands for *adaptive moment estimation*) uses a weighted average between previous and current iterations of the first and second moments of the gradients (the first moment is the mean and the second moment is the uncentered variance). In this way, ADAM not only exploits the benefits of the momentum method (first moment), but also uses an adaptive learning rate, specific to each dimension of the gradient (with the aid of the second moment). In particular, dimensions that are steeper have a larger decay of the learning rate than dimensions that are closer to being flat, which considerably speeds up convergence. Although ADAM introduces three new parameters (one for each moment, $\beta_1$ and $\beta_2$, and one for numeric stability, $\epsilon$), it is quite robust and rarely requires manual tuning, as the learning rate adjusts automatically as just explained.

**Batch size**. When using mini-batch GD, the batch size can help to find a good balance between stability and speed of the learning process. As already discussed above, full batch GD is completely stable, but it might take longer to converge and might not escape local minima that do not generalize well. On the other hand, stochastic GD is very noisy and might also take long to converge precisely; it can escape bad local minima more easily, but given its large amount of noisy steps it might also fail to converge to good ones. Mini-batch GD is somewhere in the middle, and therefore adjusting the batch size can be beneficial to the GD algorithm. A good overview of GD methods is given [here](https://medium.com/@divakar_239/stochastic-vs-batch-gradient-descent-8820568eada1).

**Activation Function**. The activation function for the hidden layers can also be tuned. Even though the ReLU function is now widely used, it can still cause some issues. For example, it can suffer from a phenomenon called "dying ReLU": if all the weights of a neuron lead to a negative value, the output of the ReLU function for this neuron becomes zero, making the neuron "die" and never become active again during the next iterations of Gradient Descent. In some networks, you might even find several neurons dead simultaneously. An alternative to the ReLU function is the [leaky ReLU](https://sefiks.com/2018/02/26/leaky-relu-as-an-neural-networks-activation-function/), which allows for small negative values. In this case, neurons may die during training, but they have a non-zero chance of being activated again in subsequent iterations of the training algorithm. ReLU functions tend to produce more piece-wise linear output functions, while functions such as the sigmoid and tanh allow for smoother shapes of the output function (or decision boundary, in the case of classification). However, tanh and sigmoid suffer from the [vanishing gradient problem](https://en.wikipedia.org/wiki/Vanishing_gradient_problem): with extreme values of their inputs, such functions become essentially flat, leading to a derivative basically equal to 0. In this way, backpropagation has little or no gradient to propagate back through the network, and no gradient is left for the layers at lower levels. A different but related issue is the *exploding gradient problem*. More generally, neural networks suffer from unstable gradients, and different layers may learn at different speeds.
Therefore, choosing a good activation function is crucial for the learning of the network.

**Number of Epochs and Early Stopping**. The number of epochs for GD methods is also crucial for learning. If the algorithm is run for too many epochs, it passes through the whole dataset too many times and ends up learning the training data too closely, leading to poor generalization performance. A good number of epochs can lead to good generalization performance, while too small a number of epochs is not enough to make the model learn well, and can cause underfitting. To decide how many epochs allow the algorithm to generalize well, you can run the algorithm several times with different (increasing) numbers of epochs, and evaluate its performance on a validation set each time. However, this method can be time-consuming, as you have to re-train the algorithm every time. A better strategy is given by **early stopping**: during training, you assess the performance of the algorithm on the validation set at the end of each epoch. When the performance on this validation set stops improving, you can wait for a certain number of iterations (say, 10 or 20) to see whether the algorithm is actually starting to overfit; in that case, you can just stop the algorithm and use the estimates of the model obtained with the best epoch. An example of how early stopping works is given by this figure:

<br> <img src="./img/neural_networks/early_stopping.png" width="500"/> <br>

The following figure shows the analogy between early stopping and $l_2$ regularization:

<br> <img src="./img/neural_networks/l2_early_stopping.png" width="400"/> <br>

In practice, because the algorithm starts with weights initialized close to 0 (which usually increase after each mini-batch update), early stopping is a way to find a simplified solution of the network; this is exactly what happens with $l_2$ regularization, where a penalty parameter must be tuned in order to reach optimal generalization performance.

**$l_1$ and $l_2$ regularization**. Another way to protect against overfitting is to obtain simple solutions of the network (that is, to keep its weights closer to 0). This is exactly what we have already seen for linear models: with $l_2$ regularization, all the weights are shrunk towards (but never exactly to) 0; the effect is similar to that of the early stopping just described. With the $l_1$ penalty, instead, we allow for *sparse* networks, where some of the connections between the neurons are equal to 0 and are dropped from the model.
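As a practical wrap-up of this section, here is a minimal scikit-learn sketch (the library whose documentation is linked in Section 8) that exposes several of the knobs just discussed: number and size of hidden layers, activation, optimizer, learning rate, batch size, early stopping, and the $l_2$ penalty (`alpha`). The synthetic dataset and all hyperparameter values are illustrative choices, not recommendations.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(
    StandardScaler(),                            # scale the inputs (Section 5.2)
    MLPClassifier(hidden_layer_sizes=(32, 32),   # two hidden layers, 32 neurons each
                  activation="relu",
                  solver="adam",
                  learning_rate_init=1e-3,
                  batch_size=64,
                  alpha=1e-4,                    # l2 penalty
                  early_stopping=True,           # uses an internal validation split
                  max_iter=500,
                  random_state=0))
model.fit(X_train, y_train)
print(model.score(X_test, y_test))
```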
## 7. Other Architectures

There exists a [huge number of variants](https://www.asimovinstitute.org/neural-network-zoo/) of Neural Networks, each of which is devised for a specific usage and context. Here, we are going to quickly talk about two of the most famous of these architectures: Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN).

### 7.1 Convolutional Neural Networks

CNN's are probably the number-one reason why NN's have gained popularity again in recent years. They allow machines to perform a task that is very basic for humans, but not so trivial for computers: image recognition. In particular, by using an analogy with the [visual cortex](https://en.wikipedia.org/wiki/Visual_cortex) of the brain, they translate into a NN context the concept of *local receptive field*, with which the human brain reacts to local stimuli in a limited region of the visual field; when the neurons overlap such stimuli, they are able to recognize the object we are observing. In a similar fashion, CNN's process the various areas of a rectangular image locally, and subsequently, layer after layer, they combine the regions together, until the output layer recognizes what type of object is represented in the figure. CNN's owe their name to the fact that they use *convolutions*. In this context, a convolution can be seen as a sort of matrix multiplication, slid across one of its inputs. In CNN's, convolutions are computed over all the pixels of a figure by means of a *filter* or *convolutional kernel*, which is able to capture the local behaviour of the intensity of the pixels and summarize it in a number. This operation is desirable for image recognition, as we want the method to be robust against rotations and shifts of the objects in the figure. The following figure summarizes the convolution operation:

<br> <img src="./img/neural_networks/convolution.png" width="400"/> <br>

In images with colors, such an operation is repeated (with different kernels) for each RGB channel of the image. After the convolutions have been computed, their result, a *feature map*, is passed through the first hidden layer of the network. Because CNN's use the concept of *receptive field* to capture the local behaviour of the previous layers, the first layers of a CNN (called convolutional layers) are also represented in 2D, so that it is possible to match the neurons of a layer with those of the subsequent layer. The layers are then processed through other convolutions (or other types of operations, such as *pooling*). Finally, the last layers of the network are dense (fully-connected) layers (exactly like the ones observed in FNN's) and the network terminates with an output layer, as usual. In general, CNN's are used for classification tasks (so that it is possible to recognize a category of object present in the picture). Because the task of a CNN is much more complicated than that of a FNN, its architecture is generally deeper than the architecture of a FNN. A more detailed introduction to the functioning of CNN's can be found in [this link](https://towardsdatascience.com/a-comprehensive-guide-to-convolutional-neural-networks-the-eli5-way-3bd2b1164a53).

<br> <img src="./img/neural_networks/cnn.png" width="700"/> <br>
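For completeness, here is a hedged sketch of a small CNN written with the Keras API (a library that is not otherwise used in these notes, chosen only as one common option): a stack of convolutional and pooling layers, followed by dense layers and a softmax output, as described above.

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),                        # e.g. 28x28 grayscale images
    layers.Conv2D(16, kernel_size=3, activation="relu"),   # convolutional kernels
    layers.MaxPooling2D(pool_size=2),                      # pooling layer
    layers.Conv2D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),                   # fully-connected layer
    layers.Dense(10, activation="softmax"),                # one output neuron per class
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```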
### 7.2 Recurrent Neural Networks

FNN's allow the signal to travel through the network in only one direction, from input to output. RNN's have a structure similar to that of FNN's, but with an important difference: they introduce loops in the network, allowing the signal to also travel backward. This makes RNN's the ideal tool to model sequential data and time series, as these are likely to contain autocorrelations with past time points. In particular, when we observe inputs and outputs at different time points, the input of a layer at time point $t$ can use the output of the network at time point $t-1$ to model this type of dependency. In practice, this is a way to take into account the evolution of a phenomenon over time, which makes RNN's a powerful forecasting tool. They are being widely tested and applied in fields such as stock market forecasting and speech recognition (where a sequence of words must be recognized by the system). An issue of RNN's is that they tend to lose the information of time points very far in the past; this is known as the "short memory" problem. Special types of RNN architectures devised to overcome this issue are the LSTM (Long Short-Term Memory) and the GRU (Gated Recurrent Unit) networks. An introduction to RNN's is given [in this link](https://www.geeksforgeeks.org/introduction-to-recurrent-neural-network/).

<br> <img src="./img/neural_networks/rnn.png" width="400"/> <br>

## 8. Other Resources

* the [scikit-learn documentation](https://scikit-learn.org/stable/modules/neural_networks_supervised.html) on neural networks
* the [Deep Learning Book](https://www.deeplearningbook.org/) is freely available online; it contains a good and detailed overview of state-of-the-art Neural Networks (including methods for unsupervised learning and autoencoders). The book is a bit technical, but it offers a nice review of linear algebra and other mathematical tools in its first part
* the YouTube channel "3Blue1Brown" has a nice mini-series on Feed-Forward Neural Networks, explained with an example of digit recognition. Watching these videos is highly recommended, as they are fun to watch and explain Neural Networks with amazing graphics. In particular, videos 2, 3, and 4 provide an accurate explanation of Gradient Descent and the Backpropagation algorithm. [Here](https://www.youtube.com/watch?v=aircAruvnKk&t=1008s) is the link to the first video of the series.

_____no_output_____
# Notebook from VCMason/PyGenToolbox Path: notebooks/Therese/IRS_v2_Coil1.Coil2.Eed.Suz12.ipynb <code> %load_ext autoreload %autoreload 2 import datetime import os import pandas as pd print(datetime.datetime.now()) #dir(pygentoolbox.Tools) %matplotlib inline import matplotlib.pyplot as plt from pygentoolbox.IRS_v2 import mainThe autoreload extension is already loaded. To reload it, use: %reload_ext autoreload 2019-09-24 11:57:37.974667 gff3file = 'D:\\LinuxShare\\Ciliates\\Genomes\\Annotations\\internal_eliminated_sequence_PGM_ParTIES.pt_51_with_ies.gff3' samfilelist_mac = ['D:\\LinuxShare\\Projects\\Sarah\\BWA\\DCL_SCG\\Mac\\D23_1_R1R2.trim.sam', \ 'D:\\LinuxShare\\Projects\\Sarah\\BWA\\DCL_SCG\\Mac\\D23_2_R1R2.trim.sam', \ 'D:\\LinuxShare\\Projects\\Sarah\\BWA\\DCL_SCG\\Mac\\D5_1_R1R2.trim.sam', \ 'D:\\LinuxShare\\Projects\\Sarah\\BWA\\DCL_SCG\\Mac\\D5_2_R1R2.trim.sam', \ 'D:\\LinuxShare\\Projects\\Sarah\\BWA\\DCL_SCG\\Mac\\ND7_1_R1R2.trim.sam'\ ] samfilelist_mac_ies = ['D:\\LinuxShare\\Projects\\Sarah\\BWA\\DCL_SCG\\MacAndIES\\D23_1_R1R2.trim.sort.sam', \ 'D:\\LinuxShare\\Projects\\Sarah\\BWA\\DCL_SCG\\MacAndIES\\D23_2_R1R2.trim.sort.sam', \ 'D:\\LinuxShare\\Projects\\Sarah\\BWA\\DCL_SCG\\MacAndIES\\D5_1_R1R2.trim.sort.sam', \ 'D:\\LinuxShare\\Projects\\Sarah\\BWA\\DCL_SCG\\MacAndIES\\D5_2_R1R2.trim.sort.sam', \ 'D:\\LinuxShare\\Projects\\Sarah\\BWA\\DCL_SCG\\MacAndIES\\ND7_1_R1R2.trim.sort.sam'\ ] main(gff3file, samfilelist_mac, samfilelist_mac_ies, numreads=15000000, features=['internal_eliminated_sequence']) # or use ['all'] print(datetime.datetime.now())Reading Gff3 file: D:\LinuxShare\Ciliates\Genomes\Annotations\internal_eliminated_sequence_PGM_ParTIES.pt_51_with_ies.gff3 Number of scaffolds: 511 ['scaffold51_100', 'scaffold51_101', 'scaffold51_102', 'scaffold51_103', 'scaffold51_104', 'scaffold51_105', 'scaffold51_106', 'scaffold51_107', 'scaffold51_108', 'scaffold51_109'] Reading, sub-sampling, trim by CIGAR, only coordinates, for sam file: D:\LinuxShare\Projects\Sarah\BWA\DCL_SCG\Mac\D23_1_R1R2.trim.sam Number of scaffolds: 691 ['scaffold51_153', 'scaffold51_90', 'scaffold51_77', 'scaffold51_75', 'scaffold51_57', 'scaffold51_111', 'scaffold51_23', 'scaffold51_16', 'scaffold51_10', 'scaffold51_13'] Desired Number of reads: 15000000 Number of sub-sampled reads = 15000000 Reading, sub-sampling, trim by CIGAR, only coordinates, for sam file: D:\LinuxShare\Projects\Sarah\BWA\DCL_SCG\MacAndIES\D23_1_R1R2.trim.sort.sam Number of scaffolds: 691 ['scaffold51_98', 'scaffold51_50', 'scaffold51_11', 'scaffold51_67', 'scaffold51_471', 'scaffold51_62', 'scaffold51_140', 'scaffold51_204', 'scaffold51_8', 'scaffold51_25'] Desired Number of reads: 15000000 Number of sub-sampled reads = 15000000 ##### Starting IRS Calculations ##### Processed 50 scaffolds scaffold51_145 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 100 scaffolds scaffold51_190 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 150 scaffolds scaffold51_237 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 200 scaffolds scaffold51_293 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 250 scaffolds scaffold51_345 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 300 scaffolds scaffold51_39 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 350 scaffolds scaffold51_474 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 400 scaffolds scaffold51_558 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 450 scaffolds scaffold51_647 is present in d_mac, dsam_mac, and 
dsam_mac_ies Processed 500 scaffolds scaffold51_8 is present in d_mac, dsam_mac, and dsam_mac_ies Calculating IRS IRS 5000 2, 0, 0, 34 IESPGM.PTET51.1.120.78091 0.03, 0.06 IRS 10000 0, 0, 0, 8 IESPGM.PTET51.1.145.3648 0.00, 0.00 IRS 15000 0, 2, 0, 37 IESPGM.PTET51.1.16.271128 0.03, 0.05 IRS 20000 1, 0, 0, 20 IESPGM.PTET51.1.23.555354 0.02, 0.05 IRS 25000 0, 0, 0, 5 IESPGM.PTET51.1.35.379025 0.00, 0.00 IRS 30000 0, 0, 1, 21 IESPGM.PTET51.1.4.86780 0.00, 0.05 IRS 35000 0, 0, 0, 24 IESPGM.PTET51.1.63.281325 0.00, 0.00 IRS 40000 1, 0, 0, 25 IESPGM.PTET51.1.81.154702 0.02, 0.04 Number of IRS values: 44926 [('IESPGM.PTET51.1.100.218', 0.0), ('IESPGM.PTET51.1.100.348', 0.0), ('IESPGM.PTET51.1.100.478', 0.0), ('IESPGM.PTET51.1.100.600', 0.0), ('IESPGM.PTET51.1.100.729', 0.05263157894736842), ('IESPGM.PTET51.1.100.774', 0.4230769230769231), ('IESPGM.PTET51.1.100.859', 0.0), ('IESPGM.PTET51.1.100.989', 0.0), ('IESPGM.PTET51.1.100.1119', 0.0), ('IESPGM.PTET51.1.100.1212', 0.0), ('IESPGM.PTET51.1.100.1342', 0.0), ('IESPGM.PTET51.1.100.1472', 0.0), ('IESPGM.PTET51.1.100.1601', 0.0), ('IESPGM.PTET51.1.100.1732', 0.045454545454545456), ('IESPGM.PTET51.1.100.3813', 0.058823529411764705), ('IESPGM.PTET51.1.100.4388', 0.0), ('IESPGM.PTET51.1.100.4531', 0.08333333333333333), ('IESPGM.PTET51.1.100.5534', 0.021739130434782608), ('IESPGM.PTET51.1.100.5850', 0.03333333333333333), ('IESPGM.PTET51.1.100.6562', 0.03225806451612903)] ########################################### ########################################### ########################################### Reading, sub-sampling, trim by CIGAR, only coordinates, for sam file: D:\LinuxShare\Projects\Sarah\BWA\DCL_SCG\Mac\D23_2_R1R2.trim.sam Number of scaffolds: 696 ['scaffold51_16', 'scaffold51_110', 'scaffold51_90', 'scaffold51_49', 'scaffold51_55', 'scaffold51_95', 'scaffold51_138', 'scaffold51_50', 'scaffold51_42', 'scaffold51_38'] Desired Number of reads: 15000000 Number of sub-sampled reads = 15000000 Reading, sub-sampling, trim by CIGAR, only coordinates, for sam file: D:\LinuxShare\Projects\Sarah\BWA\DCL_SCG\MacAndIES\D23_2_R1R2.trim.sort.sam Number of scaffolds: 696 ['scaffold51_140', 'scaffold51_139', 'scaffold51_1', 'scaffold51_107', 'scaffold51_29', 'scaffold51_54', 'scaffold51_67', 'scaffold51_81', 'scaffold51_138', 'scaffold51_11'] Desired Number of reads: 15000000 Number of sub-sampled reads = 15000000 ##### Starting IRS Calculations ##### Processed 50 scaffolds scaffold51_145 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 100 scaffolds scaffold51_190 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 150 scaffolds scaffold51_237 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 200 scaffolds scaffold51_293 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 250 scaffolds scaffold51_345 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 300 scaffolds scaffold51_39 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 350 scaffolds scaffold51_474 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 400 scaffolds scaffold51_558 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 450 scaffolds scaffold51_647 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 500 scaffolds scaffold51_8 is present in d_mac, dsam_mac, and dsam_mac_ies Calculating IRS IRS 5000 2, 0, 0, 22 IESPGM.PTET51.1.120.78091 0.04, 0.08 IRS 10000 1, 0, 0, 13 IESPGM.PTET51.1.145.3648 0.04, 0.07 IRS 15000 0, 2, 0, 42 IESPGM.PTET51.1.16.271128 0.02, 0.05 IRS 20000 3, 0, 0, 21 IESPGM.PTET51.1.23.555354 0.06, 0.12 IRS 25000 0, 
0, 0, 8 IESPGM.PTET51.1.35.379025 0.00, 0.00 IRS 30000 0, 0, 0, 18 IESPGM.PTET51.1.4.86780 0.00, 0.00 IRS 35000 2, 0, 0, 12 IESPGM.PTET51.1.63.281325 0.07, 0.14 IRS 40000 1, 0, 0, 28 IESPGM.PTET51.1.81.152618 0.02, 0.03 Number of IRS values: 44928 [('IESPGM.PTET51.1.100.218', 0.0), ('IESPGM.PTET51.1.100.348', 0.0), ('IESPGM.PTET51.1.100.478', 0.0), ('IESPGM.PTET51.1.100.600', 0.0), ('IESPGM.PTET51.1.100.729', 0.0), ('IESPGM.PTET51.1.100.774', 0.3667763157894737), ('IESPGM.PTET51.1.100.859', 0.0625), ('IESPGM.PTET51.1.100.989', 0.0), ('IESPGM.PTET51.1.100.1119', 0.0), ('IESPGM.PTET51.1.100.1212', 0.0), ('IESPGM.PTET51.1.100.1342', 0.0), ('IESPGM.PTET51.1.100.1472', 0.0), ('IESPGM.PTET51.1.100.1601', 0.0), ('IESPGM.PTET51.1.100.1732', 0.0), ('IESPGM.PTET51.1.100.3813', 0.06650071123755334), ('IESPGM.PTET51.1.100.4388', 0.0), ('IESPGM.PTET51.1.100.4531', 0.17714285714285713), ('IESPGM.PTET51.1.100.5534', 0.012195121951219513), ('IESPGM.PTET51.1.100.5850', 0.046875), ('IESPGM.PTET51.1.100.6562', 0.01691777323799796)] ########################################### ########################################### ########################################### Reading, sub-sampling, trim by CIGAR, only coordinates, for sam file: D:\LinuxShare\Projects\Sarah\BWA\DCL_SCG\Mac\D5_1_R1R2.trim.sam Number of scaffolds: 689 ['scaffold51_77', 'scaffold51_65', 'scaffold51_35', 'scaffold51_4', 'scaffold51_6', 'scaffold51_34', 'scaffold51_87', 'scaffold51_32', 'scaffold51_53', 'scaffold51_45'] Desired Number of reads: 15000000 Number of sub-sampled reads = 15000000 Reading, sub-sampling, trim by CIGAR, only coordinates, for sam file: D:\LinuxShare\Projects\Sarah\BWA\DCL_SCG\MacAndIES\D5_1_R1R2.trim.sort.sam Number of scaffolds: 689 ['scaffold51_28', 'scaffold51_7', 'scaffold51_415', 'scaffold51_84', 'scaffold51_76', 'scaffold51_124', 'scaffold51_39', 'scaffold51_55', 'scaffold51_1', 'scaffold51_27'] Desired Number of reads: 15000000 Number of sub-sampled reads = 15000000 ##### Starting IRS Calculations ##### Processed 50 scaffolds scaffold51_145 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 100 scaffolds scaffold51_190 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 150 scaffolds scaffold51_237 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 200 scaffolds scaffold51_293 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 250 scaffolds scaffold51_345 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 300 scaffolds scaffold51_39 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 350 scaffolds scaffold51_474 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 400 scaffolds scaffold51_558 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 450 scaffolds scaffold51_647 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 500 scaffolds scaffold51_8 is present in d_mac, dsam_mac, and dsam_mac_ies Calculating IRS IRS 5000 3, 1, 0, 12 IESPGM.PTET51.1.120.78091 0.14, 0.25 IRS 10000 1, 0, 1, 5 IESPGM.PTET51.1.145.3648 0.08, 0.29 IRS 15000 1, 2, 0, 50 IESPGM.PTET51.1.16.271128 0.03, 0.06 IRS 20000 0, 1, 0, 34 IESPGM.PTET51.1.23.555678 0.01, 0.03 IRS 25000 0, 0, 0, 5 IESPGM.PTET51.1.35.381945 0.00, 0.00 IRS 30000 0, 1, 0, 20 IESPGM.PTET51.1.4.86877 0.02, 0.05 IRS 35000 0, 0, 0, 9 IESPGM.PTET51.1.63.315801 0.00, 0.00 IRS 40000 1, 9, 0, 89 IESPGM.PTET51.1.81.200893 0.05, 0.10 Number of IRS values: 44895 [('IESPGM.PTET51.1.100.218', 0.0), ('IESPGM.PTET51.1.100.348', 0.02631578947368421), ('IESPGM.PTET51.1.100.478', 0.125), ('IESPGM.PTET51.1.100.600', 0.0), 
('IESPGM.PTET51.1.100.729', 0.0), ('IESPGM.PTET51.1.100.774', 0.4375), ('IESPGM.PTET51.1.100.859', 0.08333333333333333), ('IESPGM.PTET51.1.100.989', 0.0), ('IESPGM.PTET51.1.100.1119', 0.0), ('IESPGM.PTET51.1.100.1212', 0.125), ('IESPGM.PTET51.1.100.1342', 0.0), ('IESPGM.PTET51.1.100.1472', 0.0), ('IESPGM.PTET51.1.100.1601', 0.0), ('IESPGM.PTET51.1.100.1732', 0.0), ('IESPGM.PTET51.1.100.3813', 0.0), ('IESPGM.PTET51.1.100.4388', 0.0), ('IESPGM.PTET51.1.100.4531', 0.021227364185110665), ('IESPGM.PTET51.1.100.5534', 0.0), ('IESPGM.PTET51.1.100.5850', 0.045454545454545456), ('IESPGM.PTET51.1.100.6562', 0.022727272727272728)] ########################################### ########################################### ########################################### Reading, sub-sampling, trim by CIGAR, only coordinates, for sam file: D:\LinuxShare\Projects\Sarah\BWA\DCL_SCG\Mac\D5_2_R1R2.trim.sam Number of scaffolds: 687 ['scaffold51_44', 'scaffold51_102', 'scaffold51_64', 'scaffold51_72', 'scaffold51_58', 'scaffold51_71', 'scaffold51_5', 'scaffold51_35', 'scaffold51_110', 'scaffold51_158'] Desired Number of reads: 15000000 Number of sub-sampled reads = 15000000 Reading, sub-sampling, trim by CIGAR, only coordinates, for sam file: D:\LinuxShare\Projects\Sarah\BWA\DCL_SCG\MacAndIES\D5_2_R1R2.trim.sort.sam Number of scaffolds: 689 ['scaffold51_112', 'scaffold51_25', 'scaffold51_127', 'scaffold51_10', 'scaffold51_89', 'scaffold51_175', 'scaffold51_60', 'scaffold51_129', 'scaffold51_16', 'scaffold51_18'] Desired Number of reads: 15000000 Number of sub-sampled reads = 15000000 ##### Starting IRS Calculations ##### Processed 50 scaffolds scaffold51_145 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 100 scaffolds scaffold51_190 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 150 scaffolds scaffold51_237 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 200 scaffolds scaffold51_293 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 250 scaffolds scaffold51_345 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 300 scaffolds scaffold51_39 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 350 scaffolds scaffold51_474 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 400 scaffolds scaffold51_558 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 450 scaffolds scaffold51_647 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 500 scaffolds scaffold51_8 is present in d_mac, dsam_mac, and dsam_mac_ies Calculating IRS IRS 5000 3, 0, 0, 23 IESPGM.PTET51.1.120.78091 0.06, 0.12 IRS 10000 2, 0, 0, 10 IESPGM.PTET51.1.145.3648 0.08, 0.17 IRS 15000 1, 0, 0, 57 IESPGM.PTET51.1.16.271128 0.01, 0.02 IRS 20000 1, 0, 0, 15 IESPGM.PTET51.1.23.555354 0.03, 0.06 IRS 25000 0, 0, 0, 4 IESPGM.PTET51.1.35.379025 0.00, 0.00 IRS 30000 0, 0, 0, 17 IESPGM.PTET51.1.4.86780 0.00, 0.00 IRS 35000 1, 2, 0, 15 IESPGM.PTET51.1.63.285752 0.09, 0.17 IRS 40000 1, 0, 2, 20 IESPGM.PTET51.1.81.154926 0.02, 0.13 Number of IRS values: 44925 [('IESPGM.PTET51.1.100.218', 0.0), ('IESPGM.PTET51.1.100.348', 0.0), ('IESPGM.PTET51.1.100.478', 0.0), ('IESPGM.PTET51.1.100.600', 0.0), ('IESPGM.PTET51.1.100.729', 0.0), ('IESPGM.PTET51.1.100.774', 0.4665718349928876), ('IESPGM.PTET51.1.100.859', 0.05), ('IESPGM.PTET51.1.100.989', 0.0), ('IESPGM.PTET51.1.100.1119', 0.0), ('IESPGM.PTET51.1.100.1212', 0.0), ('IESPGM.PTET51.1.100.1342', 0.0), ('IESPGM.PTET51.1.100.1472', 0.010416666666666666), ('IESPGM.PTET51.1.100.1601', 0.03125), ('IESPGM.PTET51.1.100.1732', 0.0), ('IESPGM.PTET51.1.100.3813', 
0.1484593837535014), ('IESPGM.PTET51.1.100.4388', 0.06), ('IESPGM.PTET51.1.100.4531', 0.02631578947368421), ('IESPGM.PTET51.1.100.5534', 0.027777777777777776), ('IESPGM.PTET51.1.100.5850', 0.0), ('IESPGM.PTET51.1.100.6562', 0.03884615384615385)] ########################################### ########################################### ########################################### Reading, sub-sampling, trim by CIGAR, only coordinates, for sam file: D:\LinuxShare\Projects\Sarah\BWA\DCL_SCG\Mac\ND7_1_R1R2.trim.sam Number of scaffolds: 687 ['scaffold51_38', 'scaffold51_42', 'scaffold51_139', 'scaffold51_30', 'scaffold51_18', 'scaffold51_4', 'scaffold51_165', 'scaffold51_81', 'scaffold51_132', 'scaffold51_111'] Desired Number of reads: 15000000 Number of sub-sampled reads = 15000000 Reading, sub-sampling, trim by CIGAR, only coordinates, for sam file: D:\LinuxShare\Projects\Sarah\BWA\DCL_SCG\MacAndIES\ND7_1_R1R2.trim.sort.sam Number of scaffolds: 686 ['scaffold51_39', 'scaffold51_83', 'scaffold51_9', 'scaffold51_101', 'scaffold51_41', 'scaffold51_118', 'scaffold51_165', 'scaffold51_158', 'scaffold51_43', 'scaffold51_19'] Desired Number of reads: 15000000 Number of sub-sampled reads = 15000000 ##### Starting IRS Calculations ##### Processed 50 scaffolds scaffold51_145 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 100 scaffolds scaffold51_190 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 150 scaffolds scaffold51_237 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 200 scaffolds scaffold51_293 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 250 scaffolds scaffold51_345 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 300 scaffolds scaffold51_39 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 350 scaffolds scaffold51_474 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 400 scaffolds scaffold51_558 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 450 scaffolds scaffold51_647 is present in d_mac, dsam_mac, and dsam_mac_ies Processed 500 scaffolds scaffold51_8 is present in d_mac, dsam_mac, and dsam_mac_ies Calculating IRS IRS 5000 3, 0, 0, 19 IESPGM.PTET51.1.120.78091 0.07, 0.14 IRS 10000 0, 0, 0, 6 IESPGM.PTET51.1.145.3648 0.00, 0.00 IRS 15000 0, 2, 0, 68 IESPGM.PTET51.1.16.271128 0.01, 0.03 IRS 20000 1, 1, 0, 23 IESPGM.PTET51.1.23.555678 0.04, 0.08 IRS 25000 3, 0, 0, 8 IESPGM.PTET51.1.35.381945 0.14, 0.27 IRS 30000 0, 0, 0, 15 IESPGM.PTET51.1.4.86818 0.00, 0.00 IRS 35000 0, 0, 1, 24 IESPGM.PTET51.1.63.339263 0.00, 0.04 IRS 40000 2, 0, 0, 46 IESPGM.PTET51.1.81.214692 0.02, 0.04 Number of IRS values: 44890 [('IESPGM.PTET51.1.100.218', 0.0), ('IESPGM.PTET51.1.100.348', 0.0), ('IESPGM.PTET51.1.100.478', 0.0), ('IESPGM.PTET51.1.100.600', 0.0), ('IESPGM.PTET51.1.100.729', 0.0), ('IESPGM.PTET51.1.100.774', 0.55), ('IESPGM.PTET51.1.100.859', 0.16666666666666666), ('IESPGM.PTET51.1.100.989', 0.0), ('IESPGM.PTET51.1.100.1119', 0.0), ('IESPGM.PTET51.1.100.1212', 0.0), ('IESPGM.PTET51.1.100.1342', 0.0), ('IESPGM.PTET51.1.100.1472', 0.0), ('IESPGM.PTET51.1.100.1601', 0.0), ('IESPGM.PTET51.1.100.1732', 0.0), ('IESPGM.PTET51.1.100.3813', 0.0625), ('IESPGM.PTET51.1.100.4388', 0.06060606060606061), ('IESPGM.PTET51.1.100.4531', 0.023255813953488372), ('IESPGM.PTET51.1.100.5534', 0.027777777777777776), ('IESPGM.PTET51.1.100.5850', 0.0), ('IESPGM.PTET51.1.100.6562', 0.019314019314019312)] ########################################### ########################################### ########################################### 2019-09-24 
18:38:12.376373 %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns import os import pandas as pd import numpy as np import datetime print(datetime.datetime.now())2019-10-29 10:54:54.748585 filelist = ['D:\\LinuxShare\\Projects\\Sarah\\BWA\\DCL_SCG\\Mac\\D23_1_R1R2.trim.IRS.tsv', \ 'D:\\LinuxShare\\Projects\\Sarah\\BWA\\DCL_SCG\\Mac\\D23_2_R1R2.trim.IRS.tsv', \ 'D:\\LinuxShare\\Projects\\Sarah\\BWA\\DCL_SCG\\Mac\\D5_1_R1R2.trim.IRS.tsv', \ 'D:\\LinuxShare\\Projects\\Sarah\\BWA\\DCL_SCG\\Mac\\D5_2_R1R2.trim.IRS.tsv', \ 'D:\\LinuxShare\\Projects\\Sarah\\BWA\\DCL_SCG\\Mac\\ND7_1_R1R2.trim.IRS.tsv'\ ] filelist2 = ['D:\\LinuxShare\\Projects\\Sarah\\BWA\\DCL_SCG\\Mac\\D23_1_R1R2.trim.IRS.Alternative.tsv', \ 'D:\\LinuxShare\\Projects\\Sarah\\BWA\\DCL_SCG\\Mac\\D23_2_R1R2.trim.IRS.Alternative.tsv', \ 'D:\\LinuxShare\\Projects\\Sarah\\BWA\\DCL_SCG\\Mac\\D5_1_R1R2.trim.IRS.Alternative.tsv', \ 'D:\\LinuxShare\\Projects\\Sarah\\BWA\\DCL_SCG\\Mac\\D5_2_R1R2.trim.IRS.Alternative.tsv', \ 'D:\\LinuxShare\\Projects\\Sarah\\BWA\\DCL_SCG\\Mac\\ND7_1_R1R2.trim.IRS.Alternative.tsv'\ ] limit = 20 binnumber = 50 binnumbermulti = 30 IRS = [] allIRS = [] actualallIRS = {} fileprefixes = [] for f in filelist: temp = [] with open(f, 'r') as FILE: # int(line.strip().split('\t')[2]) + int(line.strip().split('\t')[3]) >= limit # this sums the left boundary, right boundary sequences when IES is present (reads aligned to Mac+IES genome) # int(line.strip().split('\t')[5]) >= limit # this sees if the reads overlapping the IES feature (when IES has been removed) is greater than or equal to limit (Mac genome) data = [float(line.strip().split('\t')[1]) for line in FILE if (int(line.strip().split('\t')[2]) + int(line.strip().split('\t')[3]) >= limit) or (int(line.strip().split('\t')[5]) >= limit) ] IRS = IRS + data temp = temp + data with open(f, 'r') as FILE: for line in FILE: if (int(line.strip().split('\t')[2]) + int(line.strip().split('\t')[3]) >= limit) or (int(line.strip().split('\t')[5]) >= limit): actualallIRS.setdefault(line.strip().split('\t')[0], []).append(float(line.strip().split('\t')[1])) allIRS.append(temp) print('##### For file: %s #####' % f) print('%d IES elements with counts greater than %d' % (len(data), limit)) print('%d IES elements with IRS less than 0.1' % len([i for i in data if i < 0.1])) print('%d IES elements with IRS greater than 0.1 and less than 0.25' % len([i for i in data if (i > 0.1) and (i < 0.25)])) print('%d IES elements with IRS above 0.25 and less than 0.75' % len([i for i in data if (i < 0.75) and (i > 0.25)])) print('%d IES elements with IRS greater than 0.75' % len([i for i in data if i > 0.75])) path, file = os.path.split(f) print(file) fileprefixes = fileprefixes + [file.split('.')[0]]*len(data) outpath = os.path.join(path, '%s.hist.pdf' % file.split('.')[0]) plt.figure(figsize=(11.7,8.27)) # figsize=(3,4)) plt.hist(data, bins = binnumber) plt.title('%s IRS Histogram' % file.split('.')[0]) plt.xlabel('IRS') plt.ylabel('Frequency') plt.savefig(outpath) plt.show() plt.close() IRS2 = [] allIRS2 = [] actualallIRS2 = {} fileprefixes2 = [] for f in filelist2: temp = [] with open(f, 'r') as FILE: # int(line.strip().split('\t')[2]) + int(line.strip().split('\t')[3]) + int(line.strip().split('\t')[4]) >= limit # this sums the left boundary, right boundary, and both boundary sequences when IES is present (reads aligned to Mac+IES genome) # int(line.strip().split('\t')[5]) >= limit # this sees if the reads overlapping the IES feature (when IES has been removed) is greater 
than or equal to limit (Mac genome) data2 = [float(line.strip().split('\t')[1]) for line in FILE if (int(line.strip().split('\t')[2]) + int(line.strip().split('\t')[3]) + int(line.strip().split('\t')[4]) >= limit) or (int(line.strip().split('\t')[5]) >= limit) ] IRS2 = IRS2 + data2 temp = temp + data2 with open(f, 'r') as FILE: for line in FILE: if (int(line.strip().split('\t')[2]) + int(line.strip().split('\t')[3]) + int(line.strip().split('\t')[4]) >= limit) or (int(line.strip().split('\t')[5]) >= limit): actualallIRS2.setdefault(line.strip().split('\t')[0], []).append(float(line.strip().split('\t')[1])) allIRS2.append(temp) print('##### For file: %s #####' % f) print('%d IES elements with counts greater than %d' % (len(data2), limit)) print('%d IES elements with IRS less than 0.1' % len([i for i in data2 if i < 0.1])) print('%d IES elements with IRS greater than 0.1 and less than 0.25' % len([i for i in data2 if (i > 0.1) and (i < 0.25)])) print('%d IES elements with IRS above 0.25 and less than 0.75' % len([i for i in data2 if (i < 0.75) and (i > 0.25)])) print('%d IES elements with IRS greater than 0.75' % len([i for i in data2 if i > 0.75])) path2, file2 = os.path.split(f) fileprefixes2 = fileprefixes2 + [file2.split('.')[0]]*len(data2) outpath = os.path.join(path2, '%s.Alt.hist.pdf' % file2.split('.')[0]) plt.figure(figsize=(11.7,8.27)) # figsize=(3,4)) plt.hist(data2, bins = binnumber) plt.title('%s IRS Alternative Histogram' % file2.split('.')[0]) plt.xlabel('IRS Alternative') plt.ylabel('Frequency') plt.savefig(outpath) plt.show() plt.close() print(set(fileprefixes)) print(set(fileprefixes2)) print(len(IRS)) print(len(fileprefixes)) # make grouped histogram chart outpath = os.path.join(path, 'IRS.MultiHist.pdf') plt.figure(figsize=(11.7,8.27)) colors = ['blue', 'blue', 'red', 'red', 'black'] labels = ['DCL23.1', 'DCL23.2', 'DCL5.1', 'DCL5.2', 'ND7'] t_allIRS = np.array(list(map(list, zip(*allIRS)))) # transposing list of lists, lists are IRS values for each file #print(allIRS[:2]) print(t_allIRS[:5]) plt.hist(t_allIRS, binnumbermulti, density=True, histtype='bar', color=colors, label=labels) plt.legend(prop={'size': 10}) plt.title('IRS Multi-Histogram') plt.xlabel('IRS') plt.ylabel('Frequency') plt.gca().set_ylim([0,4]) # ymin,ymax plt.savefig(outpath) plt.show() plt.close() # make grouped histogram chart outpath = os.path.join(path, 'IRS.MultiHist.Alt.pdf') plt.figure(figsize=(11.7,8.27)) colors = ['blue', 'blue', 'red', 'red', 'black'] labels = ['DCL23.1', 'DCL23.2', 'DCL5.1', 'DCL5.2', 'ND7'] t_allIRS2 = np.array(list(map(list, zip(*allIRS2)))) # transposing list of lists, lists are IRS values for each file plt.hist(t_allIRS2, binnumbermulti, density=True, histtype='bar', color=colors, label=labels) plt.legend(prop={'size': 10}) plt.title('IRSAlt Multi-Histogram') plt.xlabel('IRSAlt') plt.ylabel('Frequency') plt.gca().set_ylim([0,4]) # ymin,ymax plt.savefig(outpath) plt.show() plt.close() # make strip charts IRS_df = pd.DataFrame(list(zip(IRS, fileprefixes)), columns=['IRS', 'FilePrefix']) IRSAlt_df = pd.DataFrame(list(zip(IRS2, fileprefixes2)), columns=['IRSAlt', 'FilePrefix']) print(IRS_df.FilePrefix.unique()) print(IRSAlt_df.FilePrefix.unique()) outpath = os.path.join(path, 'IRS.Stripchart.pdf') sns.set(rc={'figure.figsize':(11.7,8.27)}) sns.stripplot(x='FilePrefix', y='IRS', data=IRS_df, jitter=True, size=1) plt.savefig(outpath) plt.show() plt.close() sns.set(rc={'figure.figsize':(11.7,8.27)}) outpath = os.path.join(path2, 'IRS.Stripchart.Alt.pdf') 
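# Strip chart of the alternative IRS values: one jittered point per retained IES, grouped by sample (FilePrefix)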
sns.stripplot(x='FilePrefix', y='IRSAlt', data=IRSAlt_df, jitter=True, size=1) plt.savefig(outpath) plt.show() plt.close() print(datetime.datetime.now()) # if IRS value present in all 5 files append list of 5 IRS (one for each file) values to a list actualallIRS = [actualallIRS[k] for k in actualallIRS.keys() if len(actualallIRS[k]) == 5] actualallIRS2 = [actualallIRS2[k] for k in actualallIRS2.keys() if len(actualallIRS2[k]) == 5]##### For file: D:\LinuxShare\Projects\Sarah\BWA\DCL_SCG\Mac\D23_1_R1R2.trim.IRS.tsv ##### 29641 IES elements with counts greater than 20 28757 IES elements with IRS less than 0.1 561 IES elements with IRS greater than 0.1 and less than 0.25 108 IES elements with IRS above 0.25 and less than 0.75 42 IES elements with IRS greater than 0.75 D23_1_R1R2.trim.IRS.tsv import statistics diff23, diff5 = [], [] for IRS_list in actualallIRS: diff23.append(abs(IRS_list[0]-IRS_list[1])) diff5.append(abs(IRS_list[2]-IRS_list[3])) print('DCL23 IRS\nMean: %.4f\nMedian:%.4f' % (statistics.mean(diff23), statistics.median(diff23))) print('DCL5 IRS\nMean: %.4f\nMedian:%.4f' % (statistics.mean(diff5), statistics.median(diff5))) outpath = os.path.join(path, 'IRS.CorrelationDeltaIRS23Vs5.pdf') plt.figure(figsize=(11.7,8.27)) plt.scatter(diff23, diff5) plt.title('ΔIRS: Single Cell IRS Difference') plt.xlabel('DCL23 ΔIRS') plt.ylabel('DCL5 ΔIRS') plt.savefig(outpath) plt.show() plt.close() diff23, diff5 = [], [] for IRS_list in actualallIRS2: diff23.append(abs(IRS_list[0]-IRS_list[1])) diff5.append(abs(IRS_list[2]-IRS_list[3])) print('DCL23 IRSAlt\nMean: %.4f\nMedian:%.4f' % (statistics.mean(diff23), statistics.median(diff23))) print('DCL5 IRSAlt\nMean: %.4f\nMedian:%.4f' % (statistics.mean(diff5), statistics.median(diff5))) outpath = os.path.join(path, 'IRS.CorrelationDeltaIRS23Vs5.Alt.pdf') plt.figure(figsize=(11.7,8.27)) plt.scatter(diff23, diff5) plt.title('ΔIRSAlt: Single Cell IRS Difference') plt.xlabel('DCL23 ΔIRSAlt') plt.ylabel('DCL5 ΔIRSAlt') plt.savefig(outpath) plt.show() plt.close() outpath = os.path.join(path, 'IRSAlt.CorrelationDCL23.pdf') plt.figure(figsize=(11.7,8.27)) plt.scatter(list(map(list, zip(*actualallIRS2)))[0], list(map(list, zip(*actualallIRS2)))[1]) plt.title('IRSAlt Correlation between Two DCL23 KD cells') plt.xlabel('IRSAlt for DCL23 KD Cell#1') plt.ylabel('IRSAlt for DCL23 KD Cell#2') plt.savefig(outpath) plt.show() plt.close() outpath = os.path.join(path, 'IRSAlt.CorrelationDCL5.pdf') plt.figure(figsize=(11.7,8.27)) plt.scatter(list(map(list, zip(*actualallIRS2)))[2], list(map(list, zip(*actualallIRS2)))[3]) plt.title('IRSAlt Correlation between Two DCL5 KD cells') plt.xlabel('IRSAlt for DCL5 KD Cell#1') plt.ylabel('IRSAlt for DCL5 KD Cell#2') plt.savefig(outpath) plt.show() plt.close() outpath = os.path.join(path, 'IRSAlt.CorrelationDCL23Cell1VsDCL5Cell1.pdf') plt.figure(figsize=(11.7,8.27)) plt.scatter(list(map(list, zip(*actualallIRS2)))[0], list(map(list, zip(*actualallIRS2)))[2]) plt.title('IRSAlt Correlation between DCL23 KD cell 1 and DCL5 KD cell 1') plt.xlabel('IRSAlt for DCL23 KD Cell#1') plt.ylabel('IRSAlt for DCL5 KD Cell#1') plt.savefig(outpath) plt.show() plt.close() outpath = os.path.join(path, 'IRSAlt.CorrelationDCL23Cell2VsDCL5Cell2.pdf') plt.figure(figsize=(11.7,8.27)) plt.scatter(list(map(list, zip(*actualallIRS2)))[1], list(map(list, zip(*actualallIRS2)))[3]) plt.title('IRSAlt Correlation between DCL23 KD cell 2 and DCL5 KD cell 2') plt.xlabel('IRSAlt for DCL23 KD Cell#2') plt.ylabel('IRSAlt for DCL5 KD Cell#2') plt.savefig(outpath) 
plt.show() plt.close()DCL23 IRS Mean: 0.0211 Median:0.0100 DCL5 IRS Mean: 0.0228 Median:0.0200 # libraries # import matplotlib.pyplot as plt # import numpy as np from scipy.stats import kde x = np.array(diff23) y = np.array(diff5) # Evaluate a gaussian kde on a regular grid of nbins x nbins over data extents nbins=300 k = kde.gaussian_kde([x,y]) xi, yi = np.mgrid[x.min():x.max():nbins*1j, y.min():y.max():nbins*1j] zi = k(np.vstack([xi.flatten(), yi.flatten()])) # Make the plot plt.pcolormesh(xi, yi, zi.reshape(xi.shape)) # Add color bar # plt.pcolormesh(xi, yi, zi.reshape(xi.shape), cmap=plt.cm.Greens_r) plt.colorbar() plt.show() plt.close() print('Finished 1') x = np.array(list(map(list, zip(*actualallIRS2)))[0]) y = np.array(list(map(list, zip(*actualallIRS2)))[1]) # Evaluate a gaussian kde on a regular grid of nbins x nbins over data extents nbins=300 k = kde.gaussian_kde([x,y]) xi, yi = np.mgrid[x.min():x.max():nbins*1j, y.min():y.max():nbins*1j] zi = k(np.vstack([xi.flatten(), yi.flatten()])) # Make the plot plt.pcolormesh(xi, yi, zi.reshape(xi.shape)) # Add color bar # plt.pcolormesh(xi, yi, zi.reshape(xi.shape), cmap=plt.cm.Greens_r) plt.colorbar() plt.show() plt.close() print('Finished 2')_____no_output_____import seaborn as sns import pandas as pd # diff23 and diff5 are for IRSAlt score df = pd.DataFrame.from_records(list(map(list, zip(*[diff23, diff5]))), columns=['ΔDCL23', 'ΔDCL5']) print(df.shape) sns.jointplot(x=df["ΔDCL23"], y=df["ΔDCL5"], kind='kde') plt.show() plt.close() # df = pd.DataFrame.from_records(actualallIRS2, columns=['DCL23.1', 'DCL23.2', 'DCL5.1', 'DCL5.2', 'ND7']) # sns.jointplot(x=df["sepal_length"], y=df["sepal_width"], kind='kde')(18185, 2) </code>
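The run logs above print, for every 5,000th IES processed, four read counts (which, from the filter comments in the code, appear to be left-boundary, right-boundary, both-boundary, and Mac-only/excised reads) followed by two ratios. The exact scoring formula is not shown in this excerpt, so the snippet below is only a sketch, inferred from the printed numbers rather than taken from pygentoolbox itself; it reproduces the printed pair for the example line `3, 1, 0, 12 ... 0.14, 0.25`.

 <code> 
# Sketch only: assumed reconstruction of the two printed retention scores from boundary counts.
# left/right/both = reads supporting IES retention at its boundaries; mac_only = reads spanning the excised junction.
def irs_scores(left, right, both, mac_only):
    irs = (left + right) / (left + right + both + 2 * mac_only)            # boundary-weighted retention score
    irs_alt = (left + right + both) / (left + right + both + mac_only)     # simple retained-read fraction
    return round(irs, 2), round(irs_alt, 2)

print(irs_scores(3, 1, 0, 12))   # -> (0.14, 0.25), matching the logged line for IESPGM.PTET51.1.120.78091
 </code> 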
{ "repository": "VCMason/PyGenToolbox", "path": "notebooks/Therese/IRS_v2_Coil1.Coil2.Eed.Suz12.ipynb", "matched_keywords": [ "bwa" ], "stars": null, "size": 728965, "hexsha": "4867a9e3affab8372ee808a041b4094c871e3c6a", "max_line_length": 68032, "avg_line_length": 619.3415463042, "alphanum_fraction": 0.9375278648 }
# Notebook from alik604/ThinkBayes2 Path: soln/chap12.ipynb # Classification_____no_output_____Think Bayes, Second Edition Copyright 2020 Allen B. Downey License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)_____no_output_____ <code> # If we're running on Colab, install empiricaldist # https://pypi.org/project/empiricaldist/ import sys IN_COLAB = 'google.colab' in sys.modules if IN_COLAB: !pip install empiricaldist_____no_output_____# Get utils.py and create directories import os if not os.path.exists('utils.py'): !wget https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py_____no_output_____from utils import set_pyplot_params set_pyplot_params()_____no_output_____from utils import Or70, Pu50, Gr30 color_list3 = [Or70, Pu50, Gr30]_____no_output_____import matplotlib.pyplot as plt from cycler import cycler marker_cycle = cycler(marker=['s', 'o', '^']) color_cycle = cycler(color=color_list3) plt.rcParams['axes.prop_cycle'] = color_cycle + marker_cycle _____no_output_____ </code> Classification might be the most well-known application of Bayesian methods, made famous in the 1990s as the basis of the first generation of [spam filters](https://en.wikipedia.org/wiki/Naive_Bayes_spam_filtering). In this chapter, I'll demonstrate Bayesian classification using data collected and made available by Dr. Kristen Gorman at the Palmer Long-Term Ecological Research Station in Antarctica (see Gorman, Williams, and Fraser, ["Ecological Sexual Dimorphism and Environmental Variability within a Community of Antarctic Penguins (Genus *Pygoscelis*)"](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0090081), March 2014). We'll use this data to classify penguins by species._____no_output_____The following cell downloads the raw data._____no_output_____ <code> # Load the data files from # https://github.com/allisonhorst/palmerpenguins # With gratitude to Allison Horst (@allison_horst) import os if not os.path.exists('penguins_raw.csv'): !wget https://github.com/allisonhorst/palmerpenguins/raw/master/inst/extdata/penguins_raw.csv_____no_output_____ </code> The dataset contains one row for each penguin and one column for each variable, including the measurements we will use for classification._____no_output_____ <code> import pandas as pd df = pd.read_csv('penguins_raw.csv') df.shape_____no_output_____df.head()_____no_output_____import os if not os.path.exists('chap11_files/EaAWkZ0U4AA1CQf.jpeg'): !mkdir -p chap11_files !wget -P test https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/images/EaAWkZ0U4AA1CQf.jpeg--2020-12-22 18:39:06-- https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/images/EaAWkZ0U4AA1CQf.jpeg Resolving github.com (github.com)... 140.82.112.3 Connecting to github.com (github.com)|140.82.112.3|:443... connected. HTTP request sent, awaiting response... 302 Found Location: https://raw.githubusercontent.com/AllenDowney/ThinkBayes2/master/soln/images/EaAWkZ0U4AA1CQf.jpeg [following] --2020-12-22 18:39:06-- https://raw.githubusercontent.com/AllenDowney/ThinkBayes2/master/soln/images/EaAWkZ0U4AA1CQf.jpeg Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.116.133 Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.116.133|:443... connected. HTTP request sent, awaiting response... 
200 OK Length: 402507 (393K) [image/jpeg] Saving to: ‘test/EaAWkZ0U4AA1CQf.jpeg.4’ EaAWkZ0U4AA1CQf.jpe 100%[===================>] 393.07K --.-KB/s in 0.05s 2020-12-22 18:39:07 (7.26 MB/s) - ‘test/EaAWkZ0U4AA1CQf.jpeg.4’ saved [402507/402507] </code> Three species of penguins are represented in the dataset: Adélie, Chinstrap and Gentoo, as shown in this illustration (by Allison Horst, available under the [CC-BY](https://creativecommons.org/licenses/by/2.0/) license): <img width="400" src="images/EaAWkZ0U4AA1CQf.jpeg" alt="Drawing of three penguin species">_____no_output_____The measurements we'll use are: * Body Mass in grams (g). * Flipper Length in millimeters (mm). * Culmen Length in millimeters. * Culmen Depth in millimeters. If you are not familiar with the word "culmen", it refers to the [top margin of the beak](https://en.wikipedia.org/wiki/Bird_measurement#Culmen), as shown in the following illustration (also by Allison Horst): <img width="300" src="images/EaAXQn8U4AAoKUj.jpeg">_____no_output_____## Distributions of measurements These measurements will be most useful for classification if there are substantial differences between species and small variation within species. To see whether that is true, and to what degree, I'll plot cumulative distribution functions (CDFs) of each measurement for each species. For convenience, I'll create a new column called `Species2` that contains a shorter version of the species names._____no_output_____ <code> def shorten(species): return species.split()[0] df['Species2'] = df['Species'].apply(shorten)_____no_output_____ </code> The following function takes the `DataFrame` and a column name, and returns a dictionary that maps from each species name to a `Cdf` of the values in the given column._____no_output_____ <code> def make_cdf_map(df, varname, by='Species2'): """Make a CDF for each species. df: DataFrame varname: string column name by: string column name returns: dictionary from species name to Cdf """ cdf_map = {} grouped = df.groupby(by)[varname] for species, group in grouped: cdf_map[species] = Cdf.from_seq(group, name=species) return cdf_map_____no_output_____ </code> The following function plots a `Cdf` of the values in the given column for each species: _____no_output_____ <code> from empiricaldist import Cdf from utils import decorate def plot_cdfs(df, varname, by='Species2'): """Make a CDF for each species. df: DataFrame varname: string column name by: string column name returns: dictionary from species name to Cdf """ cdf_map = make_cdf_map(df, varname, by) for species, cdf in cdf_map.items(): cdf.plot(marker='') decorate(xlabel=varname, ylabel='CDF')_____no_output_____ </code> Here's what the distributions look like for culmen length._____no_output_____ <code> varname = 'Culmen Length (mm)' plot_cdfs(df, varname)_____no_output_____ </code> It looks like we can use culmen length to identify Adélie penguins, but the distributions for the other two species almost entirely overlap. Here are the distributions for flipper length._____no_output_____ <code> varname = 'Flipper Length (mm)' plot_cdfs(df, varname)_____no_output_____ </code> Using flipper length, we can distinguish Gentoo penguins from the other two species. So with just these two features, it seems like we should be able to classify penguins with some accuracy. 
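As a rough numerical check on that impression, the per-species means and standard deviations of these two features show how widely separated the groups sit; this is just a quick look with plain pandas, independent of the `Cdf` machinery above.

 <code> 
# Per-species summary of the two candidate features
df.groupby('Species2')[['Culmen Length (mm)', 'Flipper Length (mm)']].agg(['mean', 'std'])
 </code> 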
Here are the distributions for culmen depth._____no_output_____ <code> varname = 'Culmen Depth (mm)' plot_cdfs(df, varname)_____no_output_____ </code> And here are the distributions of body mass._____no_output_____ <code> varname = 'Body Mass (g)' plot_cdfs(df, varname)_____no_output_____ </code> Culmen depth and body mass distinguish Gentoo penguins from the other two species, but these features might not add a lot of additional information, beyond what we get from flipper length and culmen length. All of these CDFs show the sigmoid shape characteristic of the normal distribution; I will take advantage of that observation in the next section._____no_output_____## Normal models Now let's use these features to classify penguins. We'll proceed in the usual Bayesian way: 1. Define a prior distribution with the three possible species and a prior probability for each, 2. Compute the likelihood of the data for each hypothetical species, and then 3. Compute the posterior probability of each hypothesis. To compute the likelihood of the data under each hypothesis, I'll use the data to estimate the parameters of a normal distribution for each species. The following function takes a `DataFrame` and a column name; it returns a dictionary that maps from each species name to a `norm` object. `norm` is defined in SciPy; it represents a normal distribution with a given mean and standard deviation._____no_output_____ <code> from scipy.stats import norm def make_norm_map(df, varname, by='Species2'): """Make a map from species to norm object. df: DataFrame varname: string column name by: string column name returns: dictionary from species name to norm object """ norm_map = {} grouped = df.groupby(by)[varname] for species, group in grouped: mean = group.mean() std = group.std() norm_map[species] = norm(mean, std) return norm_map_____no_output_____ </code> For example, here's the dictionary of `norm` objects for flipper length:_____no_output_____ <code> flipper_map = make_norm_map(df, 'Flipper Length (mm)') flipper_map_____no_output_____ </code> Now suppose we measure a penguin and find that its flipper is 210 cm. What is the probability of that measurement under each hypothesis? The `norm` object provides `pdf`, which computes the probability density function (PDF) of the normal distribution. We can use it to compute the likelihood of the observed data in a given distribution._____no_output_____ <code> data = 210 flipper_map['Gentoo'].pdf(data)_____no_output_____ </code> The result is a probability density, so we can't interpret it as a probability. But it is proportional to the likelihood of the data, so we can use it to update the prior. Here's how we compute the likelihood of the data in each distribution._____no_output_____ <code> hypos = flipper_map.keys() likelihood = [flipper_map[hypo].pdf(data) for hypo in hypos] likelihood_____no_output_____ </code> Now we're ready to do the update._____no_output_____## The Update As usual I'll use a `Pmf` to represent the prior distribution. For simplicity, let's assume that the three species are equally likely._____no_output_____ <code> from empiricaldist import Pmf prior = Pmf(1/3, hypos) prior_____no_output_____ </code> Now we can do the update in the usual way._____no_output_____ <code> posterior = prior * likelihood posterior.normalize() posterior_____no_output_____ </code> A penguin with a 210 mm flipper has an 80% chance of being a Gentoo and about an 19% chance of being a Chinstrap (assuming that the three species were equally likely before the measurement). 
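To make the arithmetic of that update explicit, here is the same computation done with plain dictionaries instead of a `Pmf`; it is only a sketch that assumes the `flipper_map` defined above, and the normalized values should match the posterior we just computed.

 <code> 
# Bayes's rule by hand: posterior is proportional to prior * likelihood, then normalized to sum to 1.
data = 210
unnorm = {species: (1/3) * dist.pdf(data) for species, dist in flipper_map.items()}
total = sum(unnorm.values())
posterior_by_hand = {species: p / total for species, p in unnorm.items()}
posterior_by_hand
 </code> 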
The following function encapsulates the steps we just ran. It takes a `Pmf` representing the prior distribution, the observed data, and a map from each hypothesis to the distribution of the feature._____no_output_____ <code> def update_penguin(prior, data, norm_map): """Update hypothetical species. prior: Pmf data: measurement of a feature norm_map: map from hypothesis to distribution of data returns: posterior Pmf """ hypos = prior.qs likelihood = [norm_map[hypo].pdf(data) for hypo in hypos] posterior = prior * likelihood posterior.normalize() return posterior_____no_output_____ </code> The return value is the posterior distribution. Here's the previous example again, using `update_penguin`:_____no_output_____ <code> posterior1 = update_penguin(prior, 210, flipper_map) posterior1_____no_output_____ </code> As we saw in the CDFs, flipper length does not distinguish strongly between Adélie and Chinstrap penguins. For example, if a penguin has a 193 mm flipper, it is almost equally likely to be Adélie or Chinstrap._____no_output_____ <code> posterior2 = update_penguin(prior, 193, flipper_map) posterior2_____no_output_____ </code> But culmen length *can* make this distinction, so let's use it to do a second round of classification. First we estimate distributions of culmen length for each species like this:_____no_output_____ <code> culmen_map = make_norm_map(df, 'Culmen Length (mm)')_____no_output_____ </code> Now suppose we see a penguin with culmen length 38 mm. We can use this data to update the prior._____no_output_____ <code> posterior3 = update_penguin(prior, 38, culmen_map) posterior3_____no_output_____ </code> A penguin with culmen length 38 mm is almost certainly an Adélie. On the other hand, a penguin with culmen length 48 mm is about equally likely to be a Chinstrap or Gentoo._____no_output_____ <code> posterior4 = update_penguin(prior, 48, culmen_map) posterior4_____no_output_____ </code> Using one feature at a time, sometimes we can classify penguins with high confidence; sometimes we can't. We can do better using multiple features._____no_output_____## Naive Bayesian classification To make it easier to do multiple updates, I'll use the following function, which takes a prior `Pmf`, sequence of measurements and a corresponding sequence of dictionaries containing estimated distributions._____no_output_____ <code> def update_naive(prior, data_seq, norm_maps): """Naive Bayesian classifier prior: Pmf data_seq: sequence of measurements norm_maps: sequence of maps from species to distribution returns: Pmf representing the posterior distribution """ posterior = prior.copy() for data, norm_map in zip(data_seq, norm_maps): posterior = update_penguin(posterior, data, norm_map) return posterior_____no_output_____ </code> The return value is a posterior `Pmf`. I'll use the same features we looked at in the previous section: culmen length and flipper length._____no_output_____ <code> varnames = ['Flipper Length (mm)', 'Culmen Length (mm)'] norm_maps = [flipper_map, culmen_map]_____no_output_____ </code> Now suppose we find a penguin with culmen length 48 mm and flipper length 210 mm. 
Here's the update:_____no_output_____ <code> data_seq = 210, 48 posterior = update_naive(prior, data_seq, norm_maps) posterior_____no_output_____ </code> It's most likely to be a Gentoo._____no_output_____ <code> posterior.max_prob()_____no_output_____ </code> I'll loop through the dataset and classify each penguin with these two features._____no_output_____ <code> import numpy as np df['Classification'] = np.nan for i, row in df.iterrows(): data_seq = row[varnames] posterior = update_naive(prior, data_seq, norm_maps) df.loc[i, 'Classification'] = posterior.max_prob()_____no_output_____ </code> This loop adds a column called `Classification` to the `DataFrame`; it contains the species with the maximum posterior probability for each penguin. So let's see how many we got right._____no_output_____There are 344 penguins in the dataset, but two of them are missing measurements, so we have 342 valid cases._____no_output_____ <code> len(df)_____no_output_____valid = df['Classification'].notna() valid.sum()_____no_output_____ </code> Of those, 324 are classified correctly._____no_output_____ <code> same = df['Species2'] == df['Classification'] same.sum()_____no_output_____ </code> Which is almost 95%._____no_output_____ <code> same.sum() / valid.sum()_____no_output_____ </code> The following function encapsulates these steps._____no_output_____ <code> def accuracy(df): """Compute the accuracy of classification. Compares columns Classification and Species2 df: DataFrame """ valid = df['Classification'].notna() same = df['Species2'] == df['Classification'] return same.sum() / valid.sum()_____no_output_____ </code> The classifier we used in this section is called "naive" because it ignores correlations between the features. To see why that matters, I'll make a less naive classifier: one that takes into account the joint distribution of the features._____no_output_____## Joint distributions I'll start by making a scatter plot of the data._____no_output_____ <code> import matplotlib.pyplot as plt def scatterplot(df, var1, var2): """Make a scatter plot. df: DataFrame var1: string column name, x-axis var2: string column name, y-axis """ grouped = df.groupby('Species2') for species, group in grouped: plt.plot(group[var1], group[var2], label=species, lw=0, alpha=0.3) decorate(xlabel=var1, ylabel=var2)_____no_output_____ </code> Here's a scatter plot of culmen length and flipper length for the three species._____no_output_____ <code> var1 = 'Flipper Length (mm)' var2 = 'Culmen Length (mm)' scatterplot(df, var1, var2)_____no_output_____ </code> Within each species, the joint distribution of these measurements forms an oval shape, at least roughly. The orientation of the ovals is along a diagonal, which indicates that there is a correlation between culmen length and flipper length. If we ignore these correlations, we are assuming that the features are independent. To see what that looks like, I'll make a joint distribution for each species assuming independence. The following function makes a discrete `Pmf` that approximates a normal distribution._____no_output_____ <code> def make_pmf(dist, sigmas=3, n=101): """Make a Pmf approximation to a normal distribution. 
dist: norm object returns: Pmf """ mean, std = dist.mean(), dist.std() low = mean - sigmas * std high = mean + sigmas * std qs = np.linspace(low, high, n) ps = dist.pdf(qs) pmf = Pmf(ps, qs) pmf.normalize() return pmf_____no_output_____ </code> We can use it, along with `make_joint`, to make a joint distribution of culmen length and flipper length for each species._____no_output_____ <code> from utils import make_joint joint_map = {} for species in hypos: pmf1 = make_pmf(flipper_map[species]) pmf2 = make_pmf(culmen_map[species]) joint_map[species] = make_joint(pmf1, pmf2)_____no_output_____ </code> The following figure compares a scatter plot of the data to the contours of the joint distributions, assuming independence._____no_output_____ <code> from utils import plot_contour scatterplot(df, var1, var2) for species in hypos: plot_contour(joint_map[species], alpha=0.5)_____no_output_____ </code> The contours of a joint normal distribution form ellipses. In this example, because the features are uncorrelated, the ellipses are aligned with the axes. But they are not well aligned with the data. We can make a better model of the data, and use it to compute better likelihoods, with a multivariate normal distribution._____no_output_____## Multivariate normal distribution As we have seen, a univariate normal distribution is characterized by its mean and standard deviation. A multivariate normal distribution is characterized by the means of the features and the **covariance matrix**, which contains **variances**, which quantify the spread of the features, and the **covariances**, which quantify the relationships among them. We can use the data to estimate the means and covariance matrix for the population of penguins. First I'll select the columns we want._____no_output_____ <code> features = df[[var1, var2]] features.head()_____no_output_____ </code> And compute the means._____no_output_____ <code> mean = features.mean() mean_____no_output_____ </code> The result is a `Series` containing the mean culmen length and flipper length. We can also compute the covariance matrix:_____no_output_____ <code> cov = features.cov() cov_____no_output_____ </code> The result is a `DataFrame` with one row and one column for each feature. The elements on the diagonal are the variances; the elements off the diagonal are covariances. By themselves, variances and covariances are hard to interpret. We can use them to compute standard deviations and correlation coefficients, which are easier to interpret, but the details of that calculation are not important right now. Instead, we'll pass the covariance matrix to `multivariate_normal` which is a SciPy function that creates an object that represents a multivariate normal distribution. As arguments it takes a sequence of means and a covariance matrix: _____no_output_____ <code> from scipy.stats import multivariate_normal multinorm = multivariate_normal(mean, cov) multinorm_____no_output_____ </code> The following function makes a `multivariate_normal` object for each species._____no_output_____ <code> def make_multinorm_map(df, varnames): """Make a map from each species to a multivariate normal. 
df: DataFrame varnames: list of string column names returns: map from species name to multivariate_normal """ multinorm_map = {} grouped = df.groupby('Species2') for species, group in grouped: features = group[varnames] mean = features.mean() cov = features.cov() multinorm_map[species] = multivariate_normal(mean, cov) return multinorm_map_____no_output_____ </code> Here's how we make this map for the first two features, flipper length and culmen length._____no_output_____ <code> multinorm_map = make_multinorm_map(df, [var1, var2]) multinorm_map_____no_output_____ </code> In the next section we'll see what these distributions looks like. Then we'll use them to classify penguins, and we'll see if the results are more accurate than the naive Bayesian classifier._____no_output_____## Visualizing a Multivariate Normal Distribution This section uses some NumPy magic to generate contour plots for multivariate normal distributions. If that's interesting for you, great! Otherwise, feel free to skip to the results. In the next section we'll do the actual classification, which turns out to be easier than the visualization. I'll start by making a contour map for the distribution of features among Adélie penguins. Here are the univariate distributions for the two features we'll use and the multivariate distribution we just computed._____no_output_____ <code> norm1 = flipper_map['Adelie'] norm2 = culmen_map['Adelie'] multinorm = multinorm_map['Adelie']_____no_output_____ </code> I'll make a discrete `Pmf` approximation for each of the univariate distributions._____no_output_____ <code> pmf1 = make_pmf(norm1) pmf2 = make_pmf(norm2)_____no_output_____ </code> And use them to make a mesh grid that contains all pairs of values._____no_output_____ <code> X, Y = np.meshgrid(pmf1.qs, pmf2.qs) X.shape_____no_output_____ </code> The mesh is represented by two arrays: the first contains the quantities from `pmf1` along the `x` axis; the second contains the quantities from `pmf2` along the `y` axis. In order to evaluate the multivariate distribution for each pair of values, we have to "stack" the arrays._____no_output_____ <code> pos = np.dstack((X, Y)) pos.shape_____no_output_____ </code> The result is a 3-D array that you can think of as a 2-D array of pairs. When we pass this array to `multinorm.pdf`, it evaluates the probability density function of the distribution for each pair of values._____no_output_____ <code> densities = multinorm.pdf(pos) densities.shape_____no_output_____ </code> The result is an array of probability densities. If we put them in a `DataFrame` and normalize them, the result is a discrete approximation of the joint distribution of the two features._____no_output_____ <code> from utils import normalize joint = pd.DataFrame(densities, columns=pmf1.qs, index=pmf2.qs) normalize(joint)_____no_output_____ </code> Here's what the result looks like._____no_output_____ <code> plot_contour(joint) decorate(xlabel=var1, ylabel=var2)_____no_output_____ </code> The contours of a multivariate normal distribution are still ellipses, but now that we have taken into account the correlation between the features, the ellipses are no longer aligned with the axes. The following function encapsulate the steps we just did._____no_output_____ <code> def make_joint(norm1, norm2, multinorm): """Make a joint distribution. 
norm1: `norm` object representing the distribution of the first feature norm2: `norm` object representing the distribution of the second feature multinorm: `multivariate_normal` object representing the joint distribution """ pmf1 = make_pmf(norm1) pmf2 = make_pmf(norm2) X, Y = np.meshgrid(pmf1.qs, pmf2.qs) pos = np.dstack((X, Y)) densities = multinorm.pdf(pos) joint = pd.DataFrame(densities, columns=pmf1.qs, index=pmf2.qs) return joint_____no_output_____ </code> The following figure shows a scatter plot of the data along with the contours of the multivariate normal distribution for each species._____no_output_____ <code> scatterplot(df, var1, var2) for species in hypos: norm1 = flipper_map[species] norm2 = culmen_map[species] multinorm = multinorm_map[species] joint = make_joint(norm1, norm2, multinorm) plot_contour(joint, alpha=0.5)_____no_output_____ </code> Because the multivariate normal distribution takes into account the correlations between features, it is a better model for the data. And there is less overlap in the contours of the three distributions, which suggests that they should yield better classifications._____no_output_____## A Less Naive Classifier In a previous section we used `update_penguin` to update a prior `Pmf` based on observed data and a collection of `norm` objects that model the distribution of observations under each hypothesis. Here it is again:_____no_output_____ <code> def update_penguin(prior, data, norm_map): """Update hypothetical species. prior: Pmf data: measurement of a feature norm_map: map from hypothesis to distribution of data returns: posterior Pmf """ hypos = prior.qs likelihood = [norm_map[hypo].pdf(data) for hypo in hypos] posterior = prior * likelihood posterior.normalize() return posterior_____no_output_____ </code> Last time we used this function, the values in `norm_map` were `norm` objects, but it also works if they are `multivariate_normal` objects. We can use it to classify a penguin with flipper length 190 and culmen length 38:_____no_output_____ <code> data = 190, 38 update_penguin(prior, data, multinorm_map)_____no_output_____ </code> A penguin with those measurements is almost certainly an Adélie. As another example, here's an update for a penguin with flipper length 195 and culmen length 48._____no_output_____ <code> data = 195, 48 update_penguin(prior, data, multinorm_map)_____no_output_____ </code> A penguin with those measurements is almost certainly a Chinstrip. Finally, here's an update with flipper length 215 mm and culmen length 48 mm._____no_output_____ <code> data = 215, 48 update_penguin(prior, data, multinorm_map)_____no_output_____ </code> It's a Gentoo! Now let's see if this classifier does any better than the naive Bayesian classifier. I'll apply it to each penguin in the dataset:_____no_output_____ <code> df['Classification'] = np.nan for i, row in df.iterrows(): data = row[varnames] posterior = update_penguin(prior, data, multinorm_map) df.loc[i, 'Classification'] = posterior.idxmax()_____no_output_____ </code> And compute the accuracy:_____no_output_____ <code> accuracy(df)_____no_output_____ </code> It turns out to be only a little better: the accuracy is 95.3%, compared to 94.7% for the naive Bayesian classifier. In one way, that's disappointing. After all that work, it would have been nice to see a bigger improvement. But in another way, it's good news. In general, a naive Bayesian classifier is easier to implement and requires less computation. 
If it works nearly as well as a more complex algorithm, it might be a good choice for practical purposes. But speaking of practical purposes, you might have noticed that this example isn't very useful. If we want to identify the species of a penguin, there are easier ways than measuring its flippers and beak. However, there are valid scientific uses for this type of classification. One of them is the subject of the research paper we started with: [sexual dimorphism](https://en.wikipedia.org/wiki/Sexual_dimorphism), that is, differences in shape between male and female animals. In some species, like angler fish, males and females look very different. In other species, like mockingbirds, they are difficult to tell apart. And dimorphism is worth studying because it provides insight into social behavior, sexual selection, and evolution. One way to quantify the degree of sexual dimorphism in a species is to use a classification algorithm like the one in this chapter. If you can find a set of features that makes it possible to classify individuals by sex with high accuracy, that's evidence of high dimorphism. As an exercise, you can use the dataset from this chapter to classify penguins by sex and see which of the three species is the most dimorphic._____no_output_____## Exercises_____no_output_____**Exercise:** In my example I used culmen length and flipper length because they seemed to provide the most power to distinguish the three species. But maybe we can do better by using more features. Make a naive Bayesian classifier that uses all four measurements in the dataset: culmen length and depth, flipper length, and body mass. Is it more accurate than the model with two features?_____no_output_____ <code> # Solution # Here are the norm maps for the other two features depth_map = make_norm_map(df, 'Culmen Depth (mm)') mass_map = make_norm_map(df, 'Body Mass (g)')_____no_output_____# Solution # And here are sequences for the features and the norm maps varnames4 = ['Culmen Length (mm)', 'Flipper Length (mm)', 'Culmen Depth (mm)', 'Body Mass (g)'] norm_maps4 = [culmen_map, flipper_map, depth_map, mass_map]_____no_output_____# Solution # Now let's classify and compute accuracy. # We can do a little better with all four features, # almost 97% accuracy df['Classification'] = np.nan for i, row in df.iterrows(): data_seq = row[varnames4] posterior = update_naive(prior, data_seq, norm_maps4) df.loc[i, 'Classification'] = posterior.max_prob() accuracy(df)_____no_output_____ </code> **Exercise:** One of the reasons the penguin dataset was collected was to quantify sexual dimorphism in different penguin species, that is, physical differences between male and female penguins. One way to quantify dimorphism is to use measurements to classify penguins by sex. If a species is more dimorphic, we expect to be able to classify them more accurately. As an exercise, pick a species and use a Bayesian classifier (naive or not) to classify the penguins by sex. Which features are most useful? What accuracy can you achieve? Note: One Gentoo penguin has an invalid value for `Sex`. I used the following code to select one species and filter out invalid data._____no_output_____ <code> gentoo = (df['Species2'] == 'Gentoo') subset = df[gentoo].copy()_____no_output_____subset['Sex'].value_counts()_____no_output_____valid = df['Sex'] != '.' 
valid.sum()_____no_output_____subset = df[valid & gentoo].copy()_____no_output_____ </code> OK, you can finish it off from here._____no_output_____ <code> # Solution # Here are the feature distributions grouped by sex plot_cdfs(subset, 'Culmen Length (mm)', by='Sex')_____no_output_____# Solution plot_cdfs(subset, 'Culmen Depth (mm)', by='Sex')_____no_output_____# Solution plot_cdfs(subset, 'Flipper Length (mm)', by='Sex')_____no_output_____# Solution plot_cdfs(subset, 'Body Mass (g)', by='Sex')_____no_output_____# Solution # Here are the norm maps for the features, grouped by sex culmen_map = make_norm_map(subset, 'Culmen Length (mm)', by='Sex') flipper_map = make_norm_map(subset, 'Flipper Length (mm)', by='Sex') depth_map = make_norm_map(subset, 'Culmen Depth (mm)', by='Sex') mass_map = make_norm_map(subset, 'Body Mass (g)', by='Sex')_____no_output_____# Solution # And here are the sequences we need for `update_naive` norm_maps4 = [culmen_map, flipper_map, depth_map, mass_map] varnames4 = ['Culmen Length (mm)', 'Flipper Length (mm)', 'Culmen Depth (mm)', 'Body Mass (g)']_____no_output_____# Solution # Here's the prior hypos = culmen_map.keys() prior = Pmf(1/2, hypos) prior_____no_output_____# Solution # And the update subset['Classification'] = np.nan for i, row in subset.iterrows(): data_seq = row[varnames4] posterior = update_naive(prior, data_seq, norm_maps4) subset.loc[i, 'Classification'] = posterior.max_prob()_____no_output_____# Solution # This function computes accuracy def accuracy_sex(df): """Compute the accuracy of classification. Compares columns Classification and Sex df: DataFrame """ valid = df['Classification'].notna() same = df['Sex'] == df['Classification'] return same.sum() / valid.sum()_____no_output_____# Solution # Using these features we can classify Gentoo penguins by # sex with almost 92% accuracy accuracy_sex(subset)_____no_output_____# Solution # Here's the whole process in a function so we can # classify the other species def classify_by_sex(subset): """Run the whole classification process. subset: DataFrame """ culmen_map = make_norm_map(subset, 'Culmen Length (mm)', by='Sex') flipper_map = make_norm_map(subset, 'Flipper Length (mm)', by='Sex') depth_map = make_norm_map(subset, 'Culmen Depth (mm)', by='Sex') mass_map = make_norm_map(subset, 'Body Mass (g)', by='Sex') norm_maps4 = [culmen_map, flipper_map, depth_map, mass_map] hypos = culmen_map.keys() prior = Pmf(1/2, hypos) subset['Classification'] = np.nan for i, row in subset.iterrows(): data_seq = row[varnames4] posterior = update_naive(prior, data_seq, norm_maps4) subset.loc[i, 'Classification'] = posterior.max_prob() return accuracy_sex(subset)_____no_output_____# Solution # Here's the subset of Adelie penguins # The accuracy is about 88% adelie = df['Species2']=='Adelie' subset = df[adelie].copy() classify_by_sex(subset)_____no_output_____# Solution # And for Chinstrap, accuracy is about 92% chinstrap = df['Species2']=='Chinstrap' subset = df[chinstrap].copy() classify_by_sex(subset)_____no_output_____# Solution # It looks like Gentoo and Chinstrap penguins are about equally # dimorphic, Adelie penguins a little less so. # All of these results are consistent with what's in the paper._____no_output_____ </code>
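Since the exercise allows a classifier that is "naive or not", here is an optional extension (not part of the original solutions): the same sex classification with the multivariate-normal model from earlier in the chapter. It is a sketch that reuses `update_penguin`, `accuracy_sex`, `varnames4`, `gentoo`, and `valid` as defined above.

 <code> 
# Sketch: non-naive (multivariate normal) classifier by sex, for comparison with the naive version above.
from scipy.stats import multivariate_normal

def make_multinorm_map_by_sex(subset, varnames):
    """Map from sex to a multivariate normal fitted on the given features."""
    multinorm_map = {}
    for sex, group in subset.groupby('Sex'):
        features = group[varnames]
        multinorm_map[sex] = multivariate_normal(features.mean(), features.cov())
    return multinorm_map

def classify_by_sex_multinorm(subset, varnames):
    multinorm_map = make_multinorm_map_by_sex(subset, varnames)
    prior = Pmf(1/2, multinorm_map.keys())
    subset['Classification'] = np.nan
    for i, row in subset.iterrows():
        posterior = update_penguin(prior, row[varnames], multinorm_map)
        subset.loc[i, 'Classification'] = posterior.max_prob()
    return accuracy_sex(subset)

# Gentoo penguins again, with the invalid Sex value filtered out as above
classify_by_sex_multinorm(df[gentoo & valid].copy(), varnames4)
 </code> 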
{ "repository": "alik604/ThinkBayes2", "path": "soln/chap12.ipynb", "matched_keywords": [ "evolution" ], "stars": null, "size": 442216, "hexsha": "486836cd6d9679067e1550ea2521bc74b5e46bf3", "max_line_length": 59880, "avg_line_length": 136.2341343192, "alphanum_fraction": 0.8734351539 }
# Notebook from WisePanda007/douban_sentiment Path: RNN-LSTM.ipynb ### 待改进部分 1 过拟合 调参 2 数据不均衡:(1)使用下采样(2)使用auc分数_____no_output_____ <code> import numpy as np import pandas as pd import pymongo import tensorflow as tf import os import time day=time.strftime("%Y-%m-%d", time.localtime())_____no_output_____#从数据库中读取数据 client=pymongo.MongoClient('localhost',27017)#连接数据库 db1=client['douban']#创建新数据库 pred_comments=db1['pred_comments'] data = pd.DataFrame(list(pred_comments.find())) data.head(2)_____no_output_____word_vectors=np.load('sample_Tencent_AILab_ChineseEmbedding.npy',allow_pickle=True).item() #转换词向量 def convert_to_vec(words_list): words_vec=[] for i in words_list: if i in word_vectors: words_vec.append(np.array(word_vectors[i])) return np.array(words_vec) #获取每个评论词向量的数量 def get_seq_length(data): return len(data) # 转换评星 def convert_stars(star): if int(star)>3: return 1 else: return 0 data['word_vec']=data.processed_comment.apply(convert_to_vec) data['pos1neg0']=data.star.apply(convert_stars) data['seq_length']=data.word_vec.apply(get_seq_length) data=data[[list(i)!=[] for i in data['word_vec']]]#删除词向量为[]的文本 data.head(2)_____no_output_____data.describe()_____no_output_____num2=np.mean(data[:].processed_comment.apply(lambda x:len(x))) num1=np.mean(data[:].word_vec.apply(lambda x:len(x))) print('平均每个评论有',num2,'个词') print('平均每个评论有',num1,'个 wordvec')平均每个评论有 17.11447859982942 个词 平均每个评论有 16.523874734224655 个 wordvec #下采样 negdata=data[~data['pos1neg0'].isin([1])] posdata=data[~data['pos1neg0'].isin([0])] xiacaiyangpos=posdata.sample(frac=0.25,replace=False) newdata=pd.concat( [negdata,xiacaiyangpos], axis=0 ) data=newdata_____no_output_____X=np.array(data['word_vec']) num=30#要保留的维度,不足补0 X_new=np.zeros((len(X),num,200)) for i,j in enumerate(X): if j.shape[0]>=num: X_new[i]=X[i][:num] else: X_new[i]=np.concatenate([X[i],np.zeros(((num-j.shape[0]),200))]) X=X_new y=np.array(data['pos1neg0']) seq=np.array(data['seq_length']) # for i,j in enumerate(seq): # if j>num: # seq[i]=num print('数据总数量:',y.shape[0]) print('负类所占比例:',list(y).count(0)/(list(y).count(1)+list(y).count(0))) from sklearn.model_selection import train_test_split X_train,X_test,y_train,y_test,seq_train,seq_test=train_test_split(X,np.array(y),seq,test_size=0.2) print('训练集数量:',y_train.shape[0]) print('测试集数量:',y_test.shape[0]) print('训练集负类所占比例:',list(y_train).count(0)/(list(y_train).count(1)+list(y_train).count(0))) print('测试集负类所占比例:',list(y_test).count(0)/(list(y_test).count(1)+list(y_test).count(0)))数据总数量: 30485 负类所占比例: 0.42306052156798424 训练集数量: 24388 测试集数量: 6097 训练集负类所占比例: 0.4230769230769231 测试集负类所占比例: 0.42299491553222895 def shuffle_batch(X, y, seq, batch_size): rnd_idx = np.random.permutation(len(X)) n_batches = len(X) // batch_size for batch_idx in np.array_split(rnd_idx, n_batches): X_batch, y_batch,seq_batch = X[batch_idx], y[batch_idx],seq[batch_idx] yield X_batch, y_batch,seq_batch_____no_output_____tf.reset_default_graph() n_steps = num n_inputs=200 n_neurons = 64 n_outputs = 2 X = tf.placeholder(tf.float32, [None, None,n_inputs],name='X') y = tf.placeholder(tf.int32, [None],name='y') # seq_length = tf.placeholder(tf.int32, [None], name="seq_length") with tf.name_scope('RNN'): lstmCell = tf.contrib.rnn.BasicLSTMCell(num_units=n_neurons) # 配置dropout参数,以此避免过拟合 droplstmCell = tf.contrib.rnn.DropoutWrapper(cell=lstmCell, output_keep_prob=0.75) outputs, states = tf.nn.dynamic_rnn(droplstmCell, X, dtype=tf.float32)#,sequence_length=seq_length) with tf.name_scope('Loss'): logits = tf.layers.dense(outputs[:,-1,:],n_outputs) xentropy = 
tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y,logits=logits) loss = tf.reduce_mean(xentropy) loss_summary = tf.summary.scalar('log_loss', loss)#使用tensorboard learning_rate = 0.001 with tf.name_scope('Train'): optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate) training_op = optimizer.minimize(loss) with tf.name_scope("Eval"): correct = tf.nn.in_top_k(logits, y, 1) accuracy = tf.reduce_mean(tf.cast(correct, tf.float32)) accuracy_summary = tf.summary.scalar('accuracy', accuracy) init = tf.global_variables_initializer() saver = tf.train.Saver() file_writer = tf.summary.FileWriter('douban_log/'+day, tf.get_default_graph()) WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0. For more information, please see: * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md * https://github.com/tensorflow/addons If you depend on functionality not listed there, please file an issue. WARNING:tensorflow:From <ipython-input-10-d5bdfe3cfdc3>:13: BasicLSTMCell.__init__ (from tensorflow.python.ops.rnn_cell_impl) is deprecated and will be removed in a future version. Instructions for updating: This class is equivalent as tf.keras.layers.LSTMCell, and will be replaced by that in Tensorflow 2.0. WARNING:tensorflow:From <ipython-input-10-d5bdfe3cfdc3>:16: dynamic_rnn (from tensorflow.python.ops.rnn) is deprecated and will be removed in a future version. Instructions for updating: Please use `keras.layers.RNN(cell)`, which is equivalent to this API WARNING:tensorflow:From D:\Program\anaconda3\lib\site-packages\tensorflow\python\ops\tensor_array_ops.py:162: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. Instructions for updating: Colocations handled automatically by placer. WARNING:tensorflow:From D:\Program\anaconda3\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py:1259: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version. Instructions for updating: Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`. WARNING:tensorflow:From <ipython-input-10-d5bdfe3cfdc3>:19: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version. Instructions for updating: Use keras.layers.dense instead. #定义mini-batch大小与循环轮次 n_epochs = 200 batch_size = 100 n_batches = int(np.ceil(X_train.shape[0] / batch_size)) #保存检查点 checkpoint_path = "tmp_model/"+day+"/my_douban_rnn_model.ckpt" checkpoint_epoch_path = checkpoint_path + ".epoch" final_model_path = "model/"+day+"/my_douban_rnn_model" #如果出现50次loss大于之前最好的损失,提前停止 best_loss = np.infty epochs_without_progress = 0 max_epochs_without_progress = 20 with tf.Session() as sess: if os.path.isfile(checkpoint_epoch_path): # if the checkpoint file exists, restore the model and load the epoch number with open(checkpoint_epoch_path, "rb") as f: start_epoch = int(f.read()) print("Training was interrupted. 
Continuing at epoch", start_epoch) saver.restore(sess, checkpoint_path) else: start_epoch = 0 sess.run(init) for epoch in range(start_epoch, n_epochs): for X_batch, y_batch,seq_batch in shuffle_batch(X_train, y_train, seq, batch_size): sess.run(training_op, feed_dict={X: X_batch, y: y_batch})#, seq_length:seq_batch}) accuracy_val, loss_val, accuracy_summary_str, loss_summary_str = sess.run( [accuracy, loss, accuracy_summary, loss_summary], feed_dict={X: X_batch, y: y_batch}#, seq_length:seq_batch} ) file_writer.add_summary(accuracy_summary_str, epoch) file_writer.add_summary(loss_summary_str, epoch) acc_test = accuracy.eval(feed_dict={X: X_test, y: y_test})#,seq_length:seq_test}) print("Epoch:", epoch,"\tValidation accuracy: {:.3f}%".format(accuracy_val * 100),"\tLoss: {:.5f}".format(loss_val), "\tTest accuracy: {:.3f}%".format(acc_test * 100)) saver.save(sess, checkpoint_path) with open(checkpoint_epoch_path, "wb") as f: f.write(b"%d" % (epoch + 1)) if loss_val < best_loss: saver.save(sess, final_model_path) best_loss = loss_val else: epochs_without_progress += 1 if epochs_without_progress > max_epochs_without_progress: print("Early stopping") breakEpoch: 0 Validation accuracy: 73.000% Loss: 0.56698 Test accuracy: 77.612% Epoch: 1 Validation accuracy: 76.000% Loss: 0.46372 Test accuracy: 75.513% Epoch: 2 Validation accuracy: 79.000% Loss: 0.44231 Test accuracy: 79.482% Epoch: 3 Validation accuracy: 81.000% Loss: 0.43146 Test accuracy: 78.760% Epoch: 4 Validation accuracy: 76.000% Loss: 0.46080 Test accuracy: 79.449% Epoch: 5 Validation accuracy: 82.000% Loss: 0.36151 Test accuracy: 80.384% Epoch: 6 Validation accuracy: 80.000% Loss: 0.53343 Test accuracy: 81.450% Epoch: 7 Validation accuracy: 83.000% Loss: 0.44729 Test accuracy: 80.417% Epoch: 8 Validation accuracy: 88.000% Loss: 0.25400 Test accuracy: 80.761% Epoch: 9 Validation accuracy: 89.000% Loss: 0.32133 Test accuracy: 80.433% Epoch: 10 Validation accuracy: 92.000% Loss: 0.23793 Test accuracy: 80.892% Epoch: 11 Validation accuracy: 89.000% Loss: 0.22683 Test accuracy: 80.663% Epoch: 12 Validation accuracy: 84.000% Loss: 0.33554 Test accuracy: 79.957% Epoch: 13 Validation accuracy: 87.000% Loss: 0.33907 Test accuracy: 80.335% with tf.Session() as sess: saver.restore(sess, final_model_path) accuracy_val = accuracy.eval(feed_dict={X: X_test, y: y_test}) accuracy_val_____no_output_____with tf.Session() as sess: saver.restore(sess, final_model_path) logits1 = logits.eval(feed_dict={X: X_train, y: y_train}) logits2 = logits.eval(feed_dict={X: X_test, y: y_test}) from sklearn.metrics import confusion_matrix y_train_pre=[0 if i[0]>i[1] else 1 for i in logits1] y_test_pre=[0 if i[0]>i[1] else 1 for i in logits2] print('训练集: \n',confusion_matrix(y_train,y_train_pre)) print('测试集: \n',confusion_matrix(y_test,y_test_pre))_____no_output_____final_model_path = "model/my_douban_rnn_model" with tf.Session() as sess: saver.restore(sess, final_model_path) output = outputs.eval(feed_dict={X: X_train, y: y_train}) # states = states.eval(feed_dict={X: X_train, y: y_train})_____no_output_____states_____no_output_____ </code>
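One quick way to act on the class-imbalance item in the to-do list at the top of this notebook is to report AUC alongside accuracy. The sketch below assumes the test-set logits `logits2` and labels `y_test` produced in the cells above.

 <code> 
# AUC is insensitive to the positive/negative class ratio, unlike raw accuracy.
from sklearn.metrics import roc_auc_score

# Softmax over the two output logits to get P(class = 1) for each test review
exp_logits = np.exp(logits2 - logits2.max(axis=1, keepdims=True))
prob_pos = exp_logits[:, 1] / exp_logits.sum(axis=1)
print('Test AUC:', roc_auc_score(y_test, prob_pos))
 </code> 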
{ "repository": "WisePanda007/douban_sentiment", "path": "RNN-LSTM.ipynb", "matched_keywords": [ "STAR" ], "stars": 5, "size": 22784, "hexsha": "486874407690037776bdeed98ebce5c35d91d8dd", "max_line_length": 248, "avg_line_length": 35.5444617785, "alphanum_fraction": 0.5061885534 }
# Notebook from manaminer/NLP-YELP Path: NLP on Dataset From YELP.ipynb # Importing Libraries & Dataset_____no_output_____ <code> import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns sns.set_style('white') %matplotlib inline_____no_output_____yelp = pd.read_csv('yelp.csv')_____no_output_____yelp.head()_____no_output_____yelp.info()<class 'pandas.core.frame.DataFrame'> RangeIndex: 10000 entries, 0 to 9999 Data columns (total 10 columns): business_id 10000 non-null object date 10000 non-null object review_id 10000 non-null object stars 10000 non-null int64 text 10000 non-null object type 10000 non-null object user_id 10000 non-null object cool 10000 non-null int64 useful 10000 non-null int64 funny 10000 non-null int64 dtypes: int64(4), object(6) memory usage: 781.3+ KB yelp.describe()_____no_output_____yelp['text length'] = yelp['text'].apply(len)_____no_output_____ </code> # EDA_____no_output_____ <code> grid = sns.FacetGrid(yelp , col = 'stars') grid.map(plt.hist , 'text length' , bins = 60)_____no_output_____plt.figure(figsize = (8,8)) sns.boxplot(x='stars' , y= 'text length' , data = yelp) _____no_output_____sns.countplot(x = 'stars' , data = yelp)_____no_output_____stars = yelp.groupby('stars').mean() stars_____no_output_____stars.corr()_____no_output_____sns.heatmap(stars.corr(), cmap = 'coolwarm' , annot=True )_____no_output_____ </code> ### We will create DF of 'yelp' dataframe but only for 1 and 5 star reviews_____no_output_____ <code> yelp_class = yelp[(yelp['stars'] == 1)|(yelp['stars'] == 5)]_____no_output_____yelp_class.info()<class 'pandas.core.frame.DataFrame'> Int64Index: 4086 entries, 0 to 9999 Data columns (total 11 columns): business_id 4086 non-null object date 4086 non-null object review_id 4086 non-null object stars 4086 non-null int64 text 4086 non-null object type 4086 non-null object user_id 4086 non-null object cool 4086 non-null int64 useful 4086 non-null int64 funny 4086 non-null int64 text length 4086 non-null int64 dtypes: int64(5), object(6) memory usage: 383.1+ KB </code> #### X will be 'text' column of yelp_class and y will be 'stars' column of yelp_class_____no_output_____ <code> X = yelp_class['text'] y = yelp_class['stars']_____no_output_____from sklearn.feature_extraction.text import CountVectorizer cv = CountVectorizer() X = cv.fit_transform(X)_____no_output_____ </code> # Train Test Split_____no_output_____ <code> from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y,test_size=0.3,random_state=101)_____no_output_____ </code> # Training the Model_____no_output_____ <code> from sklearn.naive_bayes import MultinomialNB nb = MultinomialNB()_____no_output_____nb.fit(X_train , y_train)_____no_output_____y_pred = nb.predict(X_test)_____no_output_____ </code> # Model Evaluation_____no_output_____ <code> from sklearn.metrics import confusion_matrix,classification_report print(confusion_matrix(y_test,y_pred)) print('\n') print(classification_report(y_test,y_pred))[[159 69] [ 22 976]] precision recall f1-score support 1 0.88 0.70 0.78 228 5 0.93 0.98 0.96 998 avg / total 0.92 0.93 0.92 1226 </code> #### Now we will include TF-IDF to this process using a pipeline._____no_output_____# Text Processing & Pipeline_____no_output_____ <code> from sklearn.feature_extraction.text import TfidfTransformer from sklearn.pipeline import Pipeline_____no_output_____pl = Pipeline([('bow' , CountVectorizer()), ('tfidf', TfidfTransformer()), ('model' , 
MultinomialNB())])_____no_output_____X = yelp_class['text'] y = yelp_class['stars'] X_train, X_test, y_train, y_test = train_test_split(X, y,test_size=0.3,random_state=101)_____no_output_____pl.fit(X_train , y_train)_____no_output_____y_pred = pl.predict(X_test)_____no_output_____print(confusion_matrix(y_test,y_pred)) print('\n') print(classification_report(y_test,y_pred))[[ 0 228] [ 0 998]] precision recall f1-score support 1 0.00 0.00 0.00 228 5 0.81 1.00 0.90 998 avg / total 0.66 0.81 0.73 1226 </code> #### It's obvious that adding the TfidfTransformer to the model affected the results negatively for this project, though it may still be helpful for other projects._____no_output_____
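As a hedged follow-up that is not part of the original notebook, one way to test whether TF-IDF itself is the problem, rather than its default settings, is to grid-search a few pipeline parameters. The snippet below reuses `yelp_class` from above; the parameter grid and scoring choice are assumptions for illustration.

 <code>
# Sketch: search a small grid over the TF-IDF/Naive Bayes pipeline to see whether
# the collapse to the majority class persists under different settings.
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.naive_bayes import MultinomialNB

X = yelp_class['text']
y = yelp_class['stars']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=101)

pipe = Pipeline([('bow', CountVectorizer()),
                 ('tfidf', TfidfTransformer()),
                 ('model', MultinomialNB())])

param_grid = {
    'tfidf__use_idf': [True, False],    # with and without the IDF re-weighting
    'model__alpha': [0.01, 0.1, 1.0],   # Naive Bayes smoothing strength
}

search = GridSearchCV(pipe, param_grid, cv=5, scoring='f1_macro')
search.fit(X_train, y_train)
print(search.best_params_)
print(search.score(X_test, y_test))
 </code>

One plausible explanation for the collapse above is that MultinomialNB is designed for count-like features, so the re-scaled TF-IDF values interact poorly with the default `alpha=1.0` smoothing on this imbalanced 1-star/5-star split; the grid above makes that easy to check.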
{ "repository": "manaminer/NLP-YELP", "path": "NLP on Dataset From YELP.ipynb", "matched_keywords": [ "STAR" ], "stars": null, "size": 73228, "hexsha": "48689ee7ed34859109cd415c20e7bb2db797f13c", "max_line_length": 17442, "avg_line_length": 74.3431472081, "alphanum_fraction": 0.7694597695 }
# Notebook from yaosichao0915/DeepImmuno Path: reproduce/fig/.ipynb_checkpoints/supp4-checkpoint.ipynb <code> %matplotlib inline import matplotlib.pyplot as plt import pandas as pd import numpy as np import matplotlib as mpl import pickle import itertools_____no_output_____import tensorflow as tf import tensorflow.keras as keras from tensorflow.keras import layers_____no_output_____mpl.rcParams['pdf.fonttype'] = 42 mpl.rcParams['ps.fonttype'] = 42 mpl.rcParams['font.family'] = 'Arial'_____no_output_____def seperateCNN(): input1 = keras.Input(shape=(10, 12, 1)) input2 = keras.Input(shape=(46, 12, 1)) x = layers.Conv2D(filters=16, kernel_size=(2, 12))(input1) # 9 x = layers.BatchNormalization()(x) x = keras.activations.relu(x) x = layers.Conv2D(filters=32, kernel_size=(2, 1))(x) # 8 x = layers.BatchNormalization()(x) x = keras.activations.relu(x) x = layers.MaxPool2D(pool_size=(2, 1), strides=(2, 1))(x) # 4 x = layers.Flatten()(x) x = keras.Model(inputs=input1, outputs=x) y = layers.Conv2D(filters=16, kernel_size=(15, 12))(input2) # 32 y = layers.BatchNormalization()(y) y = keras.activations.relu(y) y = layers.MaxPool2D(pool_size=(2, 1), strides=(2, 1))(y) # 16 y = layers.Conv2D(filters=32,kernel_size=(9,1))(y) # 8 y = layers.BatchNormalization()(y) y = keras.activations.relu(y) y = layers.MaxPool2D(pool_size=(2, 1),strides=(2,1))(y) # 4 y = layers.Flatten()(y) y = keras.Model(inputs=input2,outputs=y) combined = layers.concatenate([x.output,y.output]) z = layers.Dense(128,activation='relu')(combined) z = layers.Dropout(0.2)(z) z = layers.Dense(1,activation='sigmoid')(z) model = keras.Model(inputs=[input1,input2],outputs=z) return model def pull_peptide_aaindex(dataset): result = np.empty([len(dataset),10,12,1]) for i in range(len(dataset)): result[i,:,:,:] = dataset[i][0] return result def pull_hla_aaindex(dataset): result = np.empty([len(dataset),46,12,1]) for i in range(len(dataset)): result[i,:,:,:] = dataset[i][1] return result def pull_label_aaindex(dataset): col = [item[2] for item in dataset] result = [0 if item == 'Negative' else 1 for item in col] result = np.expand_dims(np.array(result),axis=1) return result def pull_label_aaindex(dataset): result = np.empty([len(dataset),1]) for i in range(len(dataset)): result[i,:] = dataset[i][2] return result def aaindex(peptide,after_pca): amino = 'ARNDCQEGHILKMFPSTWYV-' matrix = np.transpose(after_pca) # [12,21] encoded = np.empty([len(peptide), 12]) # (seq_len,12) for i in range(len(peptide)): query = peptide[i] if query == 'X': query = '-' query = query.upper() encoded[i, :] = matrix[:, amino.index(query)] return encoded def peptide_data_aaindex(peptide,after_pca): # return numpy array [10,12,1] length = len(peptide) if length == 10: encode = aaindex(peptide,after_pca) elif length == 9: peptide = peptide[:5] + '-' + peptide[5:] encode = aaindex(peptide,after_pca) encode = encode.reshape(encode.shape[0], encode.shape[1], -1) return encode def dict_inventory(inventory): dicA, dicB, dicC = {}, {}, {} dic = {'A': dicA, 'B': dicB, 'C': dicC} for hla in inventory: type_ = hla[4] # A,B,C first2 = hla[6:8] # 01 last2 = hla[8:] # 01 try: dic[type_][first2].append(last2) except KeyError: dic[type_][first2] = [] dic[type_][first2].append(last2) return dic def rescue_unknown_hla(hla, dic_inventory): type_ = hla[4] first2 = hla[6:8] last2 = hla[8:] big_category = dic_inventory[type_] #print(hla) if not big_category.get(first2) == None: small_category = big_category.get(first2) distance = [abs(int(last2) - int(i)) for i in small_category] optimal = 
min(zip(small_category, distance), key=lambda x: x[1])[0] return 'HLA-' + str(type_) + '*' + str(first2) + str(optimal) else: small_category = list(big_category.keys()) distance = [abs(int(first2) - int(i)) for i in small_category] optimal = min(zip(small_category, distance), key=lambda x: x[1])[0] return 'HLA-' + str(type_) + '*' + str(optimal) + str(big_category[optimal][0]) def hla_data_aaindex(hla_dic,hla_type,after_pca): # return numpy array [34,12,1] try: seq = hla_dic[hla_type] except KeyError: hla_type = rescue_unknown_hla(hla_type,dic_inventory) seq = hla_dic[hla_type] encode = aaindex(seq,after_pca) encode = encode.reshape(encode.shape[0], encode.shape[1], -1) return encode def construct_aaindex(ori,hla_dic,after_pca): series = [] for i in range(ori.shape[0]): peptide = ori['peptide'].iloc[i] hla_type = ori['HLA'].iloc[i] immuno = np.array(ori['immunogenicity'].iloc[i]).reshape(1,-1) # [1,1] encode_pep = peptide_data_aaindex(peptide,after_pca) # [10,12] encode_hla = hla_data_aaindex(hla_dic,hla_type,after_pca) # [46,12] series.append((encode_pep, encode_hla, immuno)) return series def hla_df_to_dic(hla): dic = {} for i in range(hla.shape[0]): col1 = hla['HLA'].iloc[i] # HLA allele col2 = hla['pseudo'].iloc[i] # pseudo sequence dic[col1] = col2 return dic def retain_910(ori): cond = [] for i in range(ori.shape[0]): peptide = ori['peptide'].iloc[i] if len(peptide) == 9 or len(peptide) == 10: cond.append(True) else: cond.append(False) data = ori.loc[cond] data = data.set_index(pd.Index(np.arange(data.shape[0]))) return data _____no_output_____def read_fasta_and_chop_to_N(path,N): with open(path,'r') as f: lis = f.readlines()[1:] lis = [raw.rstrip('\n') for raw in lis] seq = ''.join(lis) bucket = [] for i in range(0,len(seq)-N+1,1): frag = seq[i:i+N] bucket.append(frag) return seq,bucket_____no_output_____def set_query_df(frag): from itertools import product hla = ['HLA-A*0101','HLA-A*0201','HLA-A*0301','HLA-A*1101','HLA-A*2402','HLA-B*0702','HLA-B*0801','HLA-B*1501','HLA-B*4001','HLA-C*0702'] combine = list(product(frag,hla)) col1 = [item[0] for item in combine] # peptide col2 = [item[1] for item in combine] # hla col3 = [0 for item in combine] # immunogenicity df = pd.DataFrame({'peptide':col1,'HLA':col2,'immunogenicity':col3}) return df_____no_output_____def get_score(ori): dataset = construct_aaindex(ori,hla_dic,after_pca) input1 = pull_peptide_aaindex(dataset) input2 = pull_hla_aaindex(dataset) result = cnn_model.predict([input1,input2]) ori['result'] = result[:,0] return ori_____no_output_____def prepare_plot_each_region(score_df,count,h=10): # how many hla you query from itertools import repeat # x coordinate x = [] for i in range(count): x.extend(list(repeat(i,h))) # y coordinate y = score_df['result'].values # color coordiate tmp = list(repeat([0,1,2,3,4,5,6,7,8,9],count)) c = [j for i in tmp for j in i] # # plot # fig,ax = plt.subplots() # ax.scatter(x=x,y=y,c=c,cmap='tab10',alpha=1,s=5) # plt.show() return x,y,c_____no_output_____def prepare_plot_each_region_mean(score_df,count,h=10): lis = np.split(score_df['result'].values,count) y = np.array([item.mean() for item in lis]) # fig,ax = plt.subplots() # ax.bar(x=np.arange(count),height=y) # plt.show() return y_____no_output_____def wrapper(frag,count): orf_score_df = set_query_df(frag) orf_score_df = get_score(orf_score_df) x,y,c = prepare_plot_each_region(orf_score_df,count) y = prepare_plot_each_region_mean(orf_score_df,count) return y_____no_output_____''' orf1ab: polypeptide, nsp, replicase..., length 7096, 9mer: 7088, 
10mer: 7087 orf2: spike, length 1273, 9mer: 1265, 10mer: 1264 orf3a: accessory, length 275, 9mer: 267, 10mer: 266 orf4: envelope, length 75, 9mer: 67, 10mer: 66 orf5: membrane, length 222, 9mer: 214, 10mer: 213 orf6: accessory, length 61, 9mer: 53, 10mer: 52 orf7a: accessory, length 121, 9mer 113, 10mer: 112 orf7b: accessory, length 43, 9mer 35 (missing in nature immunology paper), 10mer: 34 orf8: accessory, length 121, 9mer 113, 10mer: 112 orf9: nucleocapside glycoprotein, length 419, 9mer 411, 10mer 410 orf10: accessory, length 38, 9mer: 30, 10mer: 29 '''_____no_output_____# set up the model and necessaray files for getting score of each SARS-CoV-2 region cnn_model = seperateCNN() cnn_model.load_weights('../data/models/cnn_model_331_3_7/') after_pca = np.loadtxt('../data/after_pca.txt') hla = pd.read_csv('../data/hla2paratopeTable_aligned.txt',sep='\t') hla_dic = hla_df_to_dic(hla) inventory = list(hla_dic.keys()) dic_inventory = dict_inventory(inventory)_____no_output_____# first consider 9-mer orf1ab_seq,orf1ab_frag = read_fasta_and_chop_to_N('../data/covid/ORF1ab.fa',9) orf2_seq, orf2_frag = read_fasta_and_chop_to_N('../data/covid/ORF2-spike.fa', 9) orf3a_seq, orf3a_frag = read_fasta_and_chop_to_N('../data/covid/ORF3a-accessory.fa', 9) orf4, orf4_frag = read_fasta_and_chop_to_N('../data/covid/ORF4-env.fa', 9) orf5, orf5_frag = read_fasta_and_chop_to_N('../data/covid/ORF5-mem.fa', 9) orf6, orf6_frag = read_fasta_and_chop_to_N('../data/covid/ORF6-accessory.fa', 9) orf7a, orf7a_frag = read_fasta_and_chop_to_N('../data/covid/ORF7a-accessory.fa', 9) orf7b,orf7b_frag = read_fasta_and_chop_to_N('../data/covid/ORF7b-accessory.fa', 9) orf8,orf8_frag = read_fasta_and_chop_to_N('../data/covid/ORF8-accessory.fa', 9) orf9,orf9_frag = read_fasta_and_chop_to_N('../data/covid/ORF9-nuc.fa', 9) orf10,orf10_frag = read_fasta_and_chop_to_N('../data/covid/ORF10-accessory.fa', 9) y1 = wrapper(orf1ab_frag,7088) y2 = wrapper(orf2_frag,1265) y3 = wrapper(orf3a_frag,267) y4 = wrapper(orf4_frag,67) y5 = wrapper(orf5_frag,214) y6 = wrapper(orf6_frag,53) y7 = wrapper(orf7a_frag,113) y7b = wrapper(orf7b_frag,35) y8 = wrapper(orf8_frag,113) y9 = wrapper(orf9_frag,411) y10 = wrapper(orf10_frag,30)_____no_output_____fig,ax = plt.subplots() bp = ax.boxplot([y1,y2,y3,y4,y5,y6,y7,y8,y9,y10],positions=[0,1,2,3,4,5,6,7,8,9],patch_artist=True,widths=0.8) # bp is a dictionary for box in bp['boxes']: # box is matplotlib.lines.Line2d object box.set(facecolor='#087E8B',alpha=0.6,linewidth=1) for whisker in bp['whiskers']: whisker.set(linewidth=1) for median in bp['medians']: median.set(color='black',linewidth=1) for flier in bp['fliers']: flier.set(markersize=1.5) ax.set_xticks(np.arange(10)) ax.set_xticklabels(['ORF1','ORF2','ORF3','ORF4','ORF5','ORF6','ORF7','ORF8','ORF9','ORF10']) ax.set_ylabel('Average immunogenicity score')_____no_output_____# let's inspect 10mer orf1ab_seq,orf1ab_frag = read_fasta_and_chop_to_N('../data/covid/ORF1ab.fa',10) orf2_seq, orf2_frag = read_fasta_and_chop_to_N('../data/covid/ORF2-spike.fa', 10) orf3a_seq, orf3a_frag = read_fasta_and_chop_to_N('../data/covid/ORF3a-accessory.fa', 10) orf4, orf4_frag = read_fasta_and_chop_to_N('../data/covid/ORF4-env.fa', 10) orf5, orf5_frag = read_fasta_and_chop_to_N('../data/covid/ORF5-mem.fa', 10) orf6, orf6_frag = read_fasta_and_chop_to_N('../data/covid/ORF6-accessory.fa', 10) orf7a, orf7a_frag = read_fasta_and_chop_to_N('../data/covid/ORF7a-accessory.fa', 10) orf7b,orf7b_frag = read_fasta_and_chop_to_N('../data/covid/ORF7b-accessory.fa', 10) orf8,orf8_frag = 
read_fasta_and_chop_to_N('../data/covid/ORF8-accessory.fa', 10) orf9,orf9_frag = read_fasta_and_chop_to_N('../data/covid/ORF9-nuc.fa', 10) orf10,orf10_frag = read_fasta_and_chop_to_N('../data/covid/ORF10-accessory.fa', 10) y1 = wrapper(orf1ab_frag,7087) y2 = wrapper(orf2_frag,1264) y3 = wrapper(orf3a_frag,266) y4 = wrapper(orf4_frag,66) y5 = wrapper(orf5_frag,213) y6 = wrapper(orf6_frag,52) y7a = wrapper(orf7a_frag,112) y7b = wrapper(orf7b_frag,34) y8 = wrapper(orf8_frag,112) y9 = wrapper(orf9_frag,410) y10 = wrapper(orf10_frag,29)_____no_output_____fig, ax = plt.subplots() bp = ax.boxplot([y1, y2, y3, y4, y5, y6, y7a, y8, y9, y10], positions=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9], patch_artist=True, widths=0.8) # bp is a dictionary for box in bp['boxes']: # box is matplotlib.lines.Line2d object box.set(facecolor='#087E8B', alpha=0.6, linewidth=1) for whisker in bp['whiskers']: whisker.set(linewidth=1) for median in bp['medians']: median.set(color='black', linewidth=1) for flier in bp['fliers']: flier.set(markersize=1.5) ax.set_xticks(np.arange(10)) ax.set_xticklabels(['ORF1', 'ORF2', 'ORF3', 'ORF4', 'ORF5', 'ORF6', 'ORF7', 'ORF8', 'ORF9', 'ORF10']) ax.set_ylabel('Average immunogenicity score')_____no_output_____ </code>
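For readers who want to score a single peptide-HLA pair outside the ORF-wide loops above, the small helper below (an illustrative addition, not part of the original notebook) reuses the encoding functions, `hla_dic`, `after_pca`, and the loaded `cnn_model` defined earlier. The example peptide and allele in the final comment are arbitrary placeholders.

 <code>
# Score one 9- or 10-mer peptide against one HLA allele with the helpers above.
import numpy as np
import pandas as pd

def score_single(peptide, hla_type):
    """Return the CNN immunogenicity score for a single peptide/HLA pair."""
    ori = pd.DataFrame({'peptide': [peptide],
                        'HLA': [hla_type],
                        'immunogenicity': [0]})           # dummy label, not used for scoring
    dataset = construct_aaindex(ori, hla_dic, after_pca)  # AAindex/PCA encoding as above
    input1 = pull_peptide_aaindex(dataset)                # shape [1, 10, 12, 1]
    input2 = pull_hla_aaindex(dataset)                    # shape [1, 46, 12, 1]
    return float(cnn_model.predict([input1, input2])[0, 0])

# Example call (placeholder peptide and allele, not values from the paper):
# score_single('KLGGALQAK', 'HLA-A*0301')
 </code>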
{ "repository": "yaosichao0915/DeepImmuno", "path": "reproduce/fig/.ipynb_checkpoints/supp4-checkpoint.ipynb", "matched_keywords": [ "immunology" ], "stars": 20, "size": 39295, "hexsha": "486a1b112cf0d79aefe66d2c7b30f76fbaf56610", "max_line_length": 10516, "avg_line_length": 69.6719858156, "alphanum_fraction": 0.7519786232 }
# Notebook from aman983/QCourse_Project-2021-2022 Path: Notebooks/Notebook-3.ipynb <table width="100%"><tr><td style="color:#bbbbbb;background-color:#ffffff;font-size:11px;font-style:italic;text-align:right;">This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. </td></tr></table> $ \newcommand{\bra}[1]{\langle #1|} $ $ \newcommand{\ket}[1]{|#1\rangle} $ $ \newcommand{\braket}[2]{\langle #1|#2\rangle} $ $ \newcommand{\dot}[2]{ #1 \cdot #2} $ $ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $ $ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $ $ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $ $ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $ $ \newcommand{\mypar}[1]{\left( #1 \right)} $ $ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $ $ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $ $ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $ $ \newcommand{\onehalf}{\frac{1}{2}} $ $ \newcommand{\donehalf}{\dfrac{1}{2}} $ $ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $ $ \newcommand{\vzero}{\myvector{1\\0}} $ $ \newcommand{\vone}{\myvector{0\\1}} $ $ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $ $ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $ $ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $ $ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $ $ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $ $ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $ $ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $ $ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $ $ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} #1 \mspace{-1.5mu} \rfloor } $_____no_output_____<h3>Quantum version of Bit Game</h3> There are two player in this game Alice and Bob. They are given a Quantum bit (Qubit) to manuplate. - Initally the Qubit is set to spin up or zero state $ \ket{0}$. 
- Alice applies the Gate to the Qubit - Bob applies the Gate to the Qubit - Alice again applies the Gate to the Qubit $ \newcommand{\bra}[1]{\langle #1|} $ $ \newcommand{\ket}[1]{|#1\rangle} $ $$ \ket{0} \xrightarrow{\mbox{Alice move}} A_1\ket{0} \xrightarrow{\mbox{Bob move}} B_1 A_1 \ket{0} \xrightarrow{\mbox{Alice move}} A_2 B_1 A_1\ket{0} \xrightarrow{\mbox{Measure}} \ket{0} or \ket{1} $$ Where$ A_1$,$B_1$,$A_2$ are the gates applied by Alice and Bob_____no_output_____$ $_____no_output_____<b>Lets say that alice can now apply Hadamard gate to the Qubit while Bob can only choose from Not & Identity </b> - Alice can choose from ($I,X,H$) - Bob can choose form ($I,X$) If the Qubit remains in the state $\ket{0}$ then Alice wins If the Qubit remains in the state $\ket{1}$ then Bob wins - Intial state of the Qubit : $\ket{\psi} = \myvector{1 \\ 0}$ - Now Alice applies Hadamard to the Qubit $\ket{\psi} = \hadamard\myvector{1 \\ 0} = \myvector{\frac{1}{\sqrt2}\\ \frac{1}{\sqrt2}}$ - Now Bob can apply either choose $I$ or $X$ gate - If Bob chooses $I$ then state $\ket{\psi} =\mymatrix {}{1 & 0 \\ 0 & 1 } \myvector{\frac{1}{\sqrt2}\\ \frac{1}{\sqrt2}} = \myvector{\frac{1}{\sqrt2}\\ \frac{1}{\sqrt2}}$ - If Bob chooses $X$ then state $\ket{\psi} =\mymatrix {}{0& 1 \\ 1 & 0 } \myvector{\frac{1}{\sqrt2}\\ \frac{1}{\sqrt2}} = \myvector{\frac{1}{\sqrt2}\\ \frac{1}{\sqrt2}}$ - Now Alice again applies Hadamard to the Qubit $\ket{\psi} = \hadamard\myvector{\frac{1}{\sqrt2}\\ \frac{1}{\sqrt2}} = \myvector{1 \\ 0}$_____no_output_____<h3>Task 1 </h3> From the information provided above construct the Circuit in Qiskit that executes the above scenario. _____no_output_____ <code> from qiskit import * q = QuantumRegister(1,'q') c = ClassicalRegister(1,'c') qc = QuantumCircuit(q,c) # Your code here_____no_output_____ </code> <a href="Task Answers 2.ipynb#Task 1">click for our solution</a>_____no_output_____We observed that when Alice used Quantum moves and Bob used Classical moves, Alice always wins. now Bob decides to go Quantum. Lets say that Bob can now apply Z gate to the Qubit while Alice can only choose from Not , Identity & Hadamard. - Alice can choose from ($I,X,H$) - Bob can choose form ($I,X,Z$) If the Qubit remains in the state $\ket{0}$ then Alice wins If the Qubit remains in the state $\ket{1}$ then Bob wins - Intial state of the Qubit : $\ket{\psi} = \myvector{1 \\ 0}$ - Now Alice applies Hadamard to the Qubit $\ket{\psi} = \hadamard\myvector{1 \\ 0} = \myvector{\frac{1}{\sqrt2}\\ \frac{1}{\sqrt2}}$ - Bob chooses $Z$ then state $\ket{\psi} =\mymatrix {}{1 & 0 \\ 0 & -1 } \myvector{\frac{1}{\sqrt2}\\ \frac{1}{\sqrt2}} = \myvector{\frac{1}{\sqrt2}\\ - \frac{1}{\sqrt2}}$ - Now Alice applies Hadamard to the Qubit $\ket{\psi} = \hadamard\myvector{\frac{1}{\sqrt2}\\ - \frac{1}{\sqrt2}} = \myvector{0 \\ 1}$_____no_output_____<h3>Task 2 </h3> From the information provided above construct the Circuit in Qiskit that executes the above scenario. _____no_output_____ <code> from qiskit import * q = QuantumRegister(1,'q') c = ClassicalRegister(1,'c') qc = QuantumCircuit(q,c) # Your code here_____no_output_____ </code> <a href="Task Answers 2.ipynb#Task 2">click for our solution</a>_____no_output_____<h3>Quantum version of prisoner's dillema</h3>_____no_output_____Quantum Game Theory is a field that was first introduced by D. Meyer in 1999 but it wasn't until later that year that a formal quantum game theory protocol was invented by Eisert, Wilkens & Lewenstein and hence is called the "EWL quantization protocol". 
EWL protocol is essential for playing Quantum games. EWL quantization protocol: <br><b>1. Initialization the circuit</b> : we start with a quantum register with 2 qubits being in state $\ket{00}$ and with a classical register with 2 bits being in state (00) to store the output. $$ \ket{\psi} = \ket{00}$$ <br><b>2. Entanglement</b>: In order for EWL protocol to work we need to have the two qubits maximally entangled with each other before the players play the game. We can achieve this by appling an operator. For simplicity lets call this operator ($O$)</br> $$ \ket{\psi} = O\ket{00}$$ <br><b>3. Strategy</b>: Each player is given two qubit and they can apply Unitary operations($U$) on their qubits (while in classical game they can only apply $X or I$). Now lets say players have applied their operator $U_1$ and $U_2$ respectively. Now the state is:</br> $$ \ket{\psi} = U_1 U_2 O\ket{00}$$ <br><b>4. Untanglement</b>: The qubit are passed thru an Untanglement operator $O^{\dagger}$. Now the state is: $$ \ket{\psi} = O^{\dagger} U_1 U_2 O\ket{00}$$ <br><b>5. Measure</b>: Now we measure the qubits and store the outcomes into our classical register which has 2 bits</br> <br><b>6. Output</b>: The output is the combination of the two bits in the classical register.</br>_____no_output_____<h3> The entanglement operator $O$</h3> The Entanglement and Untanglement operator $O $ and $ O^{\dagger}$ must follow following rules: - The operator makes maximum entanglement between two qubits before they are given to players. - when both the player are playing with the classical stratergy such as $I$ or $X$. Then the outcome of the game should be same as the outcome of the classical version game. This leaves us with many operators but we choose this operator: $$O = \frac{1}{\sqrt{2}} (I^{\otimes{2}} + i X^{\otimes{2}})$$ $$ I^{\otimes{2}} = \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 } $$ $$ iX^{\otimes{2}} = \mymatrix{cccc}{0 & 0 & 0 & i \\ 0 & 0 & i & 0 \\ 0 & i & 0 & 0 \\ i & 0 & 0 & 0 } $$ $$ O = \frac{1}{\sqrt{2}}\mymatrix{cccc}{1 & 0 & 0 & i \\ 0 & 1 & i & 0 \\ 0 & i & 1 & 0 \\ i & 0 & 0 & 1 } $$ where $I$ is identity operator and $X$ is pauli-X gate. The Operator $O$ satasfies the above two conditions. <br>After applying the operator $O$ on the qubits the state is:</br> \begin{align*} \ket{\psi} &= O\ket{00} \\ &= \frac{1}{\sqrt{2}}(\ket{00} + i\ket{11}) \end{align*}_____no_output_____<b>Step 1</b> : Initialize the circuit with quantum register containing 2 qubits and the classical register containing 2 bits. <br>Initial state is :</br> $$ \ket{\psi} = \ket{00}$$_____no_output_____ <code> from qiskit import * from qiskit.quantum_info import Operator from qiskit.visualization import plot_histogram import numpy as np q = QuantumRegister(2,'q') c = ClassicalRegister(2,'c') qc = QuantumCircuit(2,2) qc.draw(output = 'mpl')_____no_output_____ </code> Create the $O$ operator for Entanglement between two qubits_____no_output_____ <code> I = np.identity(4) X = np.matrix([[0,0,0,1], [0,0,1,0], [0,1,0,0], [1,0,0,0]]) O = Operator(1/np.sqrt(2) * (I + 1j * X)) print("The identity operator is --->",I) print() print("The pauli-x operator is --->",X) print() print("The O operator is --->",O)_____no_output_____ </code> <b>Step 2</b>: Apply the operator $O$. 
The state of the qubit is : $$ \ket{\psi} = O\ket{00} $$ <br>$$ \ket{\psi} = \frac{1}{\sqrt{2}}(\ket{00} + i\ket{11})$$</br>_____no_output_____ <code> qc = QuantumCircuit(q,c) qc.append(O,[0,1]) qc.draw(output = 'mpl')_____no_output_____ </code> <b>Step 3</b>: Let the players apply the Unitary operator on their individual qubits. For Example player 1 applies the pauli-x and player 2 applies Hadamard on their respective qubits. _____no_output_____ <code> qc = QuantumCircuit(q,c) qc.append(O,[0,1]) qc.x(q[0]) qc.h(q[1]) qc.draw(output = 'mpl')_____no_output_____ </code> <b>Step 4</b>: Now we apply the Unentanglement operator $O^{\dagger}$ _____no_output_____ <code> O_dg = Operator(1 / np.sqrt(2) * (I - 1j * X)) qc = QuantumCircuit(2,2) qc.append(O,[0,1]) qc.x(0) qc.h(1) qc.append(O_dg,[0,1]) display(qc.draw(output = 'mpl')) print("The Odg operator is --->",O_dg)_____no_output_____ </code> <b>Step 5</b>: Now we measure the qubit and store the outcomes in the classical register._____no_output_____ <code> qc = QuantumCircuit(2,2) qc.append(O,[0,1]) qc.barrier() qc.x(q[0]) qc.s(q[1]) qc.barrier() qc.append(O_dg,[0,1]) qc.barrier() qc.measure(q,c) display(qc.draw(output = 'mpl')) job = execute(qc,Aer.get_backend('qasm_simulator'),shots = 1000) counts = job.result().get_counts(qc) print(counts) plot_histogram(counts)_____no_output_____ </code> If the outcome is: - 00 then both of them have cooperated - 01 then player 2 has defected - 10 then player 1 has defected - 11 then both of them have defected The plot $01$ shows the number of times player 1 won against player 2 similarly plot $10$ shows then number of time player 2 won against player 1 of them defected._____no_output_____<h3>Task 3</h3> when both the player are playing with the classical stratergy such as 𝐼 or 𝑋 . Then verify that the outcome of the game is same as the outcome of the classical version game._____no_output_____ <code> qc = QuantumCircuit(2,2) qc.append(O,[0,1]) qc.barrier() # your code here_____no_output_____ </code> <a href="Task Answers 2.ipynb#Task 3">click for our solution</a>_____no_output_____<h3>Task 4</h3> Find the counter strategy that Bob can use against Alice if she applies $H$ on her qubit. Also observe what happens when: - Alice applies $H$, Bob applies $I$. - Alice applies $I$, Bob applies $H$. - if both of them apply $H$ _____no_output_____ <code> %run Game.py qc = QuantumCircuit(2,2) qc.append(O,[0,1]) # Apply the operator O qc.barrier() # Your code here_____no_output_____ </code> <a href="Task Answers 2.ipynb#Task 4">click for our solution</a>_____no_output_____$ \begin{array}{lc|cc} & & Bob&Bob&Bob&Bob \\ & & \mathbf{I} & \mathbf{X} & \mathbf{H} & \mathbf{Z} \\ \hline Alice& \mathbf{I} & \mbox{(3,3)}& \mbox{(0,5)} & \mbox{(0.5 , 3)}& \mbox{(1,1)} \\ Alice& \mathbf{X} & \mbox{(5,0)}& \mbox{(1,1)} & \mbox{(0.5 , 3)}& \mbox{(0 , 5)} \\ Alice& \mathbf{H} & \mbox{(3 , 0.5)}& \mbox{(3 , 0.5)} & \mbox{(2.25 , 2.25)} & \mbox{(1.5 , 4)}\\ Alice& \mathbf{Z} & \mbox{(1,1)}& \mbox{(5,0)} & \mbox{(4 , 1.5)} & \mbox{(3,3)}\\ \end{array} $ more payoff = more desirable In classical version of the game the Nash Equlibrium is (1,1) ($X,X$) but when the game is extended to the classical version both the players tend to use the $Z$ strategy because if we compare the strategy $Z$ with other strategyes payoff then the payoff of the player playing the $Z$ are always high than a player who plays other strategy excluding $Z$. 
Now the Nash Equlibrium strategy is changes from ($X,X$) to ($Z,Z$)._____no_output_____<h3>Quantum vs Classical</h3> Lets say if i play classical game with the probabilistic apporach such that i play 60% $X$ and 40% $I$. we still have limited number of gates while in Quantum games a player can apply any unitary operation on the qubits (i.e 10% $U_1|$ 20% $ U_2 |$40% $U_3 |$ 30% $U_4$). Quantum games produce a richer game experience because of the avaliability of many strategy that Classical games cannot reproduce._____no_output_____<h3>Quantum player vs Classical player </h3> Alice always plays $X$ or defect but insted of Bob appling $X$ he applies Quantum Strategy to counter to Classical strategy $X$. lets say Bob applies pauli-z gate on his qubit while alice applies pauli-x gate on her qubit._____no_output_____ <code> %run Game.py q = QuantumRegister(2,'q') c = ClassicalRegister(2,'c') qc = QuantumCircuit(q,c) qc.append(O,[0,1]) # Apply the operator O qc.barrier() qc.x(q[0]) # Alice's qubit qc.z(q[1]) # Bob's qubit qc.barrier() qc.append(O_dg,[0,1]) # Apply the operator Odg qc.barrier() qc.measure(q,c) # measure the qubits display(qc.draw(output = 'mpl',reverse_bits = True)) job = execute(qc,Aer.get_backend('qasm_simulator'),shots = 1000) counts = job.result().get_counts(qc) print(counts) Game.result(counts) plot_histogram(counts) _____no_output_____ </code> From the results it is observed that bob wins when alice defects all the time.so therofore quantum strategy dominates the classical strategy. _____no_output_____We just saw that for $X$ the perfect counter strategy is $Z$ but now what is the counter strategy for $Z$. Alice is finding the perfect counter strategy to $Z$._____no_output_____$$ U = \mymatrix{cc}{\cos{\frac{\theta}{2}} & -e^{i\lambda}\sin{\frac{\theta}{2}} \\ e^{i\phi}\sin{\frac{\theta}{2}} & e^{i\lambda + i\phi}\cos{\frac{\theta}{2}}}, $$ $$ Z = \mymatrix{cc}{1&0 \\ 0 & -1} $$ \begin{align*} \ket{\psi} &= O^{\dagger}Z U O\ket{00}\\ &= O^{\dagger} Z U \frac{1}{\sqrt{2}}(\ket{00} + i\ket{11})\\ &= O^{\dagger} U \frac{1}{\sqrt{2}}(\ket{00} - i\ket{11})\\ &= O^{\dagger} \frac{1}{\sqrt{2}} \biggr[\ket{0}\biggr (\cos \biggr(\frac{\theta}{2}\biggr)\ket{0} + e^{i\phi} \sin \biggr(\frac{\theta}{2}\biggr) \ket{1} \biggr) - i \ket{1} \biggr( -e^{i\lambda} \sin \biggr(\frac{\theta}{2}\biggr)\ket{0} + e^{i\phi+i\lambda} \cos \biggr(\frac{\theta}{2}\biggr)\ket{1}\biggr) \biggr] \\ &= O^{\dagger} \frac{1}{\sqrt{2}}\biggr[\cos \biggr(\frac{\theta}{2}\biggr)\ket{00} + e^{i\phi} \sin \biggr(\frac{\theta}{2}\biggr)\ket{01} + ie^{i\lambda} \sin \biggr(\frac{\theta}{2}\biggr)\ket{10} - ie^{i\phi+i\lambda} \cos \biggr(\frac{\theta}{2}\biggr)\ket{11}\biggr] \\ &= \frac{1}{2}\biggr[ \cos \biggr(\frac{\theta}{2}\biggr)\biggr(\ket{00} - i\ket{11}\biggr) + e^{i\phi}\sin \biggr(\frac{\theta}{2}\biggr)\biggr(\ket{01} - i\ket{10}\biggr) + ie^{i\lambda} \sin \biggr(\frac{\theta}{2}\biggr)\biggr(\ket{10} - i\ket{01}\biggr) - ie^{i\phi + i\lambda} \cos \biggr(\frac{\theta}{2}\biggr)\biggr(\ket{11} - i\ket{00} \biggr)\biggr] \end{align*} we want the outcome $\ket{01}$ with maximum probability therefore we want the coefficients of $\ket{01}$ to be maximum. we need to set: $$ \theta = \pm \pi $$ $$ e^{i\phi} = -e^{i\lambda} $$ Now we set the obtained values in $U$ and we get: \begin{align*} U &= \mymatrix{cc}{0&1 \\ -1 & 0}\\ &= Z.X \end{align*}_____no_output_____<h3>Task 5</h3> Lets say that player 1 applies pauli-Z and the player 2 applies pauli-Z and pauli-X. 
Then apply the following gates observe the results. _____no_output_____ <code> from qiskit import * q = QuantumRegister(2) c = ClassicalRegister(2) qc = QuantumCircuit(q,c) qc.append(O,[0,1]) # Apply the operator O qc.barrier() # your code here_____no_output_____ </code> <h3>Quantum version of Minority game </h3>_____no_output_____In the classical version of minority game each player has a ($\frac{1}{8}$) chance of winning but in the quantum version of the game things become inresting. Now to play the 4 player quantum game we need to use $O_4$ operator insted of $O$: $$ O_4 = \frac{1}{\sqrt{2}} (I^{\otimes{4}} + i X^{\otimes{4}}) $$ \begin{align*} \ket{\psi} &= O_4\ket{00} \\ &= \frac{1}{\sqrt{2}}(\ket{0000} + i\ket{1111}) \end{align*}_____no_output_____ <code> I4 = np.identity(16) # 16X16 identity matrix for 4 qubits X4 = np.matrix([[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1], # 16X16 pauli-X matrix for 4 qubits [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0], [0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0], [0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0], [0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0], [0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0], [0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0], [0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0], [0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0], [0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0], [1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]]) O4 = Operator(1/np.sqrt(2) * (I4 + 1j * X4)) # EWL Operator for 4 qubits O4_dg = Operator(1 / np.sqrt(2) * (I4 - 1j * X4)) # O_dg Operator for 4 qubits_____no_output_____ </code> In the classical game, the best that one can do is pick the strategy 0 or 1 randomly with a 50% probability and hope for the best. This is indeed a Nash equilibrium and gives each player a 1/8 chance of winning. However, things get very interesting in the quantum world... It turns out that there are non-trivial pure strategy Nash equilibria in the quantum world. An example is every player deciding to apply the following strategy: $R_z(-\frac{3\pi}{8})$ , $R_y(-\frac{\pi}{2})$ , $R_x(-\frac{\pi}{2})$_____no_output_____ <code> from qiskit import * from math import pi q = QuantumRegister(4) c = ClassicalRegister(4) qc = QuantumCircuit(q,c) qc.append(O4,[0,1,2,3]) # Apply the operator O qc.barrier() qc.rz(-3*pi/8,q[0]) qc.ry(pi/2,q[0]) # Qubit 1 qc.rx(pi/2,q[0]) qc.rz(-3*pi/8,q[1]) qc.ry(pi/2,q[1]) # Qubit 2 qc.rx(pi/2,q[1]) qc.rz(-3*pi/8,q[2]) qc.ry(pi/2,q[2]) # Qubit 3 qc.rx(pi/2,q[2]) qc.rz(-3*pi/8,q[3]) qc.ry(pi/2,q[3]) # Qubit 4 qc.rx(pi/2,q[3]) qc.barrier() qc.append(O4_dg,[0,1,2,3]) # Apply the operator Odg qc.barrier() qc.measure(q,c) # measure the qubits display(qc.draw(output = 'mpl',reverse_bits = True)) job = execute(qc,Aer.get_backend('qasm_simulator'),shots = 1000) counts = job.result().get_counts(qc) print(counts) Game.result_minority_game(counts) #display the result of the game plot_histogram(counts)_____no_output_____ </code> This result is rather remarkable since it shows that every player has a 1/4 chance of winning despite using only pure strategies! This is unachievable in the classical version of the game even when using mixed strategies. And what is even more intriguing is that in the 3-player version of this game 1) there are no pure strategy Nash equilibria and 2) the players each have a 1/4 chance of winning when in a mixed strategy Nash equilibrium, which is also true for the Nash equilibrium solution for the classical version of the game. 
So basically, we can't do any better in the 3-player version of the game by making it quantum, but we can in the 4-player version of the game. Why should this be the case? And in what cases of games can we do better in the quantum version compared to the classical ones? The answer to these questions that no one exactly knows yet... welcome to the wonderful and weird world of quantum game theory! _____no_output_____<h3>Why is Quantum game theory important</h3>_____no_output_____One of the application of Quantum game theory is Quantum biology: Quantum biology is an emerging field; most of the current research is theoretical and subject to questions that require further experimentation. Though the field has only recently received an influx of attention, it has been conceptualized by physicists throughout the 20th century. It has been suggested that quantum biology might play a critical role in the future of the medical world. Early pioneers of quantum physics saw applications of quantum mechanics in biological problems. Erwin Schrödinger's 1944 book What is Life? discussed applications of quantum mechanics in biology. Schrödinger introduced the idea of an "aperiodic crystal" that contained genetic information in its configuration of covalent chemical bonds. He further suggested that mutations are introduced by "quantum leaps". Other pioneers Niels Bohr, Pascual Jordan, and Max Delbruck argued that the quantum idea of complementarity was fundamental to the life sciences. In 1963, Per-Olov Löwdin published proton tunneling as another mechanism for DNA mutation. In his paper, he stated that there is a new field of study called "quantum biology"._____no_output_____Organisms that undergo photosynthesis absorb light energy through the process of electron excitation in antennae. These antennae vary among organisms. For example, bacteria use ring-like antennae, while plants use chlorophyll pigments to absorb photons. Photosynthesis creates Frenkel excitons, which provide a separation of charge that cells convert into usable chemical energy. The energy collected in reaction sites must be transferred quickly before it is lost to fluorescence or thermal vibrational motion. Various structures, such as the FMO complex in green sulfur bacteria, are responsible for transferring energy from antennae to a reaction site. FT electron spectroscopy studies of electron absorption and transfer show an efficiency of above 99%, which cannot be explained by classical mechanical models like the diffusion model. Instead, as early as 1938, scientists theorized that quantum coherence was the mechanism for excitation energy transfer. Scientists have recently looked for experimental evidence of this proposed energy transfer mechanism. A study published in 2007 claimed the identification of electronic quantum coherence at −196 °C (77 K). Another theoretical study from 2010 provided evidence that quantum coherence lives as long as 300 femtoseconds at biologically relevant temperatures (4 °C or 277 K) . In that same year, experiments conducted on photosynthetic cryptophyte algae using two-dimensional photon echo spectroscopy yielded further confirmation for long-term quantum coherence. These studies suggest that, through evolution, nature has developed a way of protecting quantum coherence to enhance the efficiency of photosynthesis. However, critical follow-up studies question the interpretation of these results. 
Single molecule spectroscopy now shows the quantum characteristics of photosynthesis without the interference of static disorder, and some studies use this method to assign reported signatures of electronic quantum coherence to nuclear dynamics occurring in chromophores. A number of proposals emerged trying to explain unexpectedly long coherence. According to one proposal, if each site within the complex feels its own environmental noise, the electron will not remain in any local minimum due to both quantum coherence and thermal environment, but proceed to the reaction site via quantum walks. Another proposal is that the rate of quantum coherence and electron tunneling create an energy sink that moves the electron to the reaction site quickly. Other work suggested that geometric symmetries in the complex may favor efficient energy transfer to the reaction center, mirroring perfect state transfer in quantum networks. Furthermore, experiments with artificial dye molecules cast doubts on the interpretation that quantum effects last any longer than one hundred femtoseconds. Another process in photosynthesis that has almost 100% efficiency is charge transfer, again suggesting that quantum mechanical phenomena are at play. In 1966, a study on the photosynthetic bacteria Chromatium found that at temperatures below 100 K, cytochrome oxidation is temperature-independent, slow (on the order of milliseconds), and very low in activation energy. The authors, Don DeVault and Britton Chase, postulated that these characteristics of electron transfer are indicative of quantum tunneling, whereby electrons penetrate a potential barrier despite possessing less energy than is classically necessary. Seth Lloyd is also notable for his contributions to this area of research. <img src=" ../Notebooks/images/FMO_Complex_Simple_Diagram.jpg" width="30%" align="centre">_____no_output_____<h4>DNA mutation</h4> Deoxyribonucleic acid, DNA, acts as the instructions for making proteins throughout the body. It consists of 4 nucleotides guanine, thymine, cytosine, and adenine. The order of these nucleotides gives the “recipe” for the different proteins. Whenever a cell reproduces, it must copy these strands of DNA. However, sometimes throughout the process of copying the strand of DNA a mutation, or an error in the DNA code, can occur. A theory for the reasoning behind DNA mutation is explained in the Lowdin DNA mutation model. In this model, a nucleotide may change its form through a process of quantum tunneling. Because of this, the changed nucleotide will lose its ability to pair with its original base pair and consequently changing the structure and order of the DNA strand. Exposure to ultraviolet lights and other types of radiation can cause DNA mutation and damage. The radiations also can modify the bonds along the DNA strand in the pyrimidines and cause them to bond with themselves creating a dimer. In many prokaryotes and plants, these bonds are repaired to their original form by a DNA repair enzyme photolyase. As its prefix implies, photolyase is reliant on light in order to repair the strand. Photolyase works with its cofactor FADH, flavin adenine dinucleotide, while repairing the DNA. Photolyase is excited by visible light and transfers an electron to the cofactor FADH-. FADH- now in the possession of an extra electron gives the electron to the dimer to break the bond and repair the DNA. This transfer of the electron is done through the tunneling of the electron from the FADH to the dimer. 
Although the range of the tunneling is much larger than feasible in a vacuum, the tunneling in this scenario is said to be “superexchange-mediated tunneling,” and is possible due to the protein's ability to boost the tunneling rates of the electron. <img src=" ../Notebooks/images/dna mutations.png" width="60%" align="centre">_____no_output_____<h3>Vision</h3> Vision relies on quantized energy in order to convert light signals to an action potential in a process called phototransduction. In phototransduction, a photon interacts with a chromophore in a light receptor. The chromophore absorbs the photon and undergoes photoisomerization. This change in structure induces a change in the structure of the photo receptor and resulting signal transduction pathways lead to a visual signal. However, the photoisomerization reaction occurs at a rapid rate, in under 200 femtoseconds, with high yield. Models suggest the use of quantum effects in shaping the ground state and excited state potentials in order to achieve this efficiency. <h3>Quantum vision implications</h3> Experiments have shown that the sensors in the retina of human eye is sensitive enough to detect a single photon. Single photon detection could lead to multiple different technologies. One area of development is in quantum communication and cryptography. The idea is to use a biometric system to measure the eye using only a small number of points across the retina with random flashes of photons that “read” the retina and identify the individual. This biometric system would only allow a certain individual with a specific retinal map to decode the message. This message can not be decoded by anyone else unless the eavesdropper were to guess the proper map or could read the retina of the intended recipient of the message. _____no_output_____<h3>Magnetoreception</h3> Magnetoreception refers to the ability of animals to navigate using the inclination of the magnetic field of the earth. A possible explanation for magnetoreception is the entangled radical pair mechanism. The radical-pair mechanism is well-established in spin chemistry, and was speculated to apply to magnetoreception in 1978 by Schulten et al.. The ratio between singlet and triplet pairs is changed by the interaction of entangled electron pairs with the magnetic field of the earth. In 2000, cryptochrome was proposed as the "magnetic molecule" that could harbor magnetically sensitive radical-pairs. Cryptochrome, a flavoprotein found in the eyes of European robins and other animal species, is the only protein known to form photoinduced radical-pairs in animals. When it interacts with light particles, cryptochrome goes through a redox reaction, which yields radical pairs both during the photo-reduction and the oxidation. The function of cryptochrome is diverse across species, however, the photoinduction of radical-pairs occurs by exposure to blue light, which excites an electron in a chromophore. Magnetoreception is also possible in the dark, so the mechanism must rely more on the radical pairs generated during light-independent oxidation. Experiments in the lab support the basic theory that radical-pair electrons can be significantly influenced by very weak magnetic fields, i.e. merely the direction of weak magnetic fields can affect radical-pair's reactivity and therefore can "catalyze" the formation of chemical products. 
Whether this mechanism applies to magnetoreception and/or quantum biology, that is, whether earth's magnetic field "catalyzes" the formation of biochemical products by the aid of radical-pairs, is undetermined for two reasons. The first is that radical-pairs may need not be entangled, the key quantum feature of the radical-pair mechanism, to play a part in these processes. There are entangled and non-entangled radical-pairs. However, researchers found evidence for the radical-pair mechanism of magnetoreception when European robins, cockroaches, and garden warblers, could no longer navigate when exposed to a radio frequency that obstructs magnetic fields and radical-pair chemistry. To empirically suggest the involvement of entanglement, an experiment would need to be devised that could disturb entangled radical-pairs without disturbing other radical-pairs, or vice versa, which would first need to be demonstrated in a laboratory setting before being applied to in vivo radical-pairs. <img src=" ../Notebooks/images/Magnetoreception.png" width="80%" align="centre">_____no_output_____<h3>Conclusion</h3> From all the notebooks included in this tutorial we have understood basic knowledge of Classical game theory and Quantum game theory. These notebook merely scratches the very surface of quantum game theory. Thank you for taking the time to go through this tutorial and we hope you enjoyed it, and maybe even learnt a little something :). If you are further instrested in Quantum Game Theory then you can find addtional resources <a href="Additional Resources.ipynb#Task 1">Here</a>._____no_output_____
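As a small appendix to the game-theory notebook above: one practical detail it delegates to the external `Game.py` helper, whose source is not shown here, is turning measurement counts into payoffs. The sketch below is a hypothetical stand-in rather than the actual `Game.result` implementation; it maps each two-bit outcome to the classical prisoner's dilemma payoffs from the table above, following the notebook's own reading of the outcomes.

 <code>
# Hypothetical payoff helper (the notebook's real Game.py is not shown here).
# Outcomes are interpreted exactly as stated in the notebook: '00' both cooperate,
# '01' player 2 defects, '10' player 1 defects, '11' both defect. Which character
# corresponds to which qubit depends on how the counts are read (qiskit bitstrings
# are little-endian), so the mapping should be checked against Game.result.
PAYOFF = {'00': (3, 3),   # mutual cooperation
          '01': (0, 5),   # player 2 defects
          '10': (5, 0),   # player 1 defects
          '11': (1, 1)}   # mutual defection

def expected_payoffs(counts):
    """Return (player 1, player 2) expected payoffs from a qiskit counts dict."""
    shots = sum(counts.values())
    p1 = sum(PAYOFF[k][0] * n for k, n in counts.items()) / shots
    p2 = sum(PAYOFF[k][1] * n for k, n in counts.items()) / shots
    return p1, p2

# Example: expected_payoffs(counts) after any of the two-player runs above.
 </code>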
{ "repository": "aman983/QCourse_Project-2021-2022", "path": "Notebooks/Notebook-3.ipynb", "matched_keywords": [ "evolution", "biology" ], "stars": null, "size": 42025, "hexsha": "486b3207933b8c09bdef041401f6669ec91c63cb", "max_line_length": 1887, "avg_line_length": 45.4816017316, "alphanum_fraction": 0.6001903629 }
# Notebook from michalk8/NeuralEE Path: tests/notebooks/cortex_dataset.ipynb # NeuralEE on CORTEX Dataset_____no_output_____`CORTEX` dataset contains 3005 mouse cortex cells and gold-standard labels for seven distinct cell types. Each cell type corresponds to a cluster to recover._____no_output_____ <code> import random import numpy as np import torch from neuralee.embedding import NeuralEE from neuralee.dataset import CortexDataset from neuralee._aux import scatter %matplotlib inline_____no_output_____ </code> Choose a GPU if a GPU available. It could be defined as follow: ``` device = torch.device('cuda:0') device = torch.device('cuda:1') device = torch.device('cpu') ```_____no_output_____ <code> device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')_____no_output_____ </code> To reproduce the following results, fix the random seed._____no_output_____ <code> torch.manual_seed(1234) random.seed(1234) np.random.seed(1234)_____no_output_____ </code> First, we apply log(1 + x) transformation to each element of the cell-gene expression matrix. Then, We retain top 558 genes ordered by variance as the original paper. Finally, we normalize the expression of each gene by subtracting its mean and dividing its standard deviation._____no_output_____ <code> cortex_dataset = CortexDataset(save_path='../') cortex_dataset.log_shift() cortex_dataset.subsample_genes(558) cortex_dataset.standardscale()File ../expression.bin already downloaded Preprocessing Cortex data Finished preprocessing Cortex data Downsampling from 19972 to 558 genes </code> We apply NeuralEE with different hyper-paramters. `N_small` takes from {1.0, 0.5, 0.25}, while `N_smalls`= 1.0 means not applied with stochastic optimization. `lam` takes from {1, 10}. `perplexity` fixs as 30._____no_output_____ <code> N_smalls = [1.0, 0.5, 0.25] N_str = ["nobatch", "2batches", "4batches"] lams = [1, 10] cortex_dataset.affinity(perplexity=30.0) for i in range(len(N_smalls)): cortex_dataset.affinity_split(N_small=N_smalls[i], perplexity=30.0) for lam in lams: NEE = NeuralEE(cortex_dataset, lam=lam, device=device) results_Neural = NEE.fine_tune() np.save('embedding/CORTEX_' + 'lam' + str(lam) + '_' + N_str[i], results_Neural['X'].numpy()) scatter(results_Neural['X'].numpy(), NEE.labels, cortex_dataset.cell_types)Compute affinity, perplexity=30.0, on entire dataset Compute affinity, perplexity=30.0, N_small=3005, on each batch Neural Elastic Embedding, lambda=1, completed in 4.62s. Neural Elastic Embedding, lambda=10, completed in 2.37s. Compute affinity, perplexity=30.0, N_small=1502, on each batch Neural Elastic Embedding, lambda=1, completed in 4.18s. Neural Elastic Embedding, lambda=10, completed in 4.19s. Compute affinity, perplexity=30.0, N_small=751, on each batch Neural Elastic Embedding, lambda=1, completed in 8.24s. Neural Elastic Embedding, lambda=10, completed in 8.22s. </code>
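The loop above saves each embedding to `embedding/CORTEX_lam<lam>_<batches>.npy` and plots it with `scatter`. As an optional follow-up that is not part of the original notebook, the snippet below quantifies how well each saved embedding separates the seven cell types using a silhouette score. It assumes the saved files exist and that the integer cell-type labels are available as `NEE.labels`, the same array passed to `scatter` above.

 <code>
# Compare the saved NeuralEE embeddings with a simple cluster-separation metric.
import numpy as np
from sklearn.metrics import silhouette_score

labels = np.asarray(NEE.labels).ravel()        # assumes labels are a CPU array-like
for n_str in ["nobatch", "2batches", "4batches"]:
    for lam in [1, 10]:
        emb = np.load('embedding/CORTEX_lam{}_{}.npy'.format(lam, n_str))
        score = silhouette_score(emb, labels)  # higher = better-separated cell types
        print('lam={:>2}, {:>9}: silhouette={:.3f}'.format(lam, n_str, score))
 </code>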
{ "repository": "michalk8/NeuralEE", "path": "tests/notebooks/cortex_dataset.ipynb", "matched_keywords": [ "gene expression" ], "stars": 6, "size": 809444, "hexsha": "486b4c9466e877091e3a68c90c912c2c3456c7a9", "max_line_length": 148148, "avg_line_length": 3237.776, "alphanum_fraction": 0.9616131567 }
# Notebook from jerobado/lightkurve Path: docs/source/tutorials/2-creating-light-curves/2-3-removing-scattered-light-using-regressioncorrector.ipynb # Removing scattered light from *TESS* light curves using linear regression (`RegressionCorrector`)_____no_output_____## Learning Goals By the end of this tutorial, you will: - Be familiar with the Lightkurve [RegressionCorrector](https://docs.lightkurve.org/reference/api/lightkurve.correctors.RegressionCorrector.html?highlight=regressioncorrector). - Understand how to create regressors from a [TargetPixelFile](https://docs.lightkurve.org/reference/targetpixelfile.html?highlight=targetpixelfile) object. - Be able to remove the scattered light background signal from *TESS* data._____no_output_____## Introduction Lightkurve offers several tools to the community for removing instrument noise and systematics from data from the *Kepler*, *K2*, and *TESS* missions. This tutorial will demonstrate the use of Lightkurve's `RegressionCorrector` class to remove the scattered light and spacecraft motion noise from [*TESS* Full Frame Images (FFIs)](https://heasarc.gsfc.nasa.gov/docs/tess/data-products.html#full-frame-images). *TESS* FFIs have an additive scattered light background that has not been removed by the pipeline. This scattered light must be removed by the user. This can be done in a few ways, including a basic median subtraction. In this tutorial, we'll show you how to use Lightkurve's corrector tools to remove the scattered light. _____no_output_____## Imports This tutorial requires the [**Lightkurve**](http://docs.lightkurve.org/) package, and also makes use of **[NumPy](https://numpy.org/)** and **[Matplotlib](https://matplotlib.org/)**._____no_output_____ <code> import lightkurve as lk import numpy as np import matplotlib.pyplot as plt %matplotlib inline_____no_output_____ </code> ---_____no_output_____## 1. Using `RegressionCorrector` on TESSCut FFI Cutouts For this tutorial we will use the *TESS* Sector 15 data of [KIC 8462852](https://en.wikipedia.org/wiki/Tabby%27s_Star) (also known as Boyajian's Star). We'll start by downloading the FFI data using MAST's TESSCut service, querying it through Lightkurve._____no_output_____ <code> target = "KIC 8462852" # Boyajian's Star tpf = lk.search_tesscut(target, sector=15).download(cutout_size=(50, 50))_____no_output_____tpf_____no_output_____ </code> This cutout works the same as any Lightkurve target pixel file (TPF). *TESS* FFI cutouts do not have aperture masks created by the pipeline. Instead, users must create their own apertures. There are many methods we could use to do this, but for now we can create a threshold aperture, using Lightkurve's [create_threshold_mask()](https://docs.lightkurve.org/reference/api/lightkurve.KeplerTargetPixelFile.create_threshold_mask.html?highlight=create_threshold_mask#lightkurve.KeplerTargetPixelFile.create_threshold_mask) method._____no_output_____ <code> aper = tpf.create_threshold_mask()_____no_output_____ </code> Let's plot the aperture to make sure it selected the star in the center and has a reasonable number of pixels._____no_output_____ <code> tpf.plot(aperture_mask=aper);_____no_output_____ </code> Looks good. We can sum up the pixels in this aperture, and create an uncorrected light curve._____no_output_____ <code> uncorrected_lc = tpf.to_lightcurve(aperture_mask=aper)_____no_output_____uncorrected_lc.plot();_____no_output_____ </code> ## 2. 
Creating a `DesignMatrix` from Pixel Regressors_____no_output_____The flux in the aperture appears to be dominated by scattered light. We can tell because *TESS* orbits Earth twice in each sector, thus patterns which appear twice within a sector are typically related to the *TESS* orbit (such as the scattered light effect). To remove this light, we are going to detrend the light curve against some vectors which we think are predictive of this systematic noise. In this case, we can use the **pixels outside the aperture** as vectors that are highly predictive of the systematic noise, that is, we will make the assumption that these pixels do not contain any flux from our target. We can select these pixels by specifying flux outside of the aperture using Python's bitwise invert operator `~` to take the inverse of the aperture mask._____no_output_____ <code> regressors = tpf.flux[:, ~aper]_____no_output_____regressors.shape_____no_output_____ </code> `regressors` is now an array with shape *ntime* x *npixels outside of the aperture*. If we plot the first 30 of these pixels, we can see that they contain mostly scattered light, with some offset terms._____no_output_____ <code> plt.plot(regressors[:, :30]);_____no_output_____ </code> In linear regression problems, it is common to refer to the matrix of regressors as the design matrix (also known as model matrix or regressor matrix). Lightkurve provides a convenient `DesignMatrix` class which is designed to help you work with detrending vectors. The [DesignMatrix](https://docs.lightkurve.org/reference/api/lightkurve.correctors.DesignMatrix.html?highlight=designmatrix#lightkurve.correctors.DesignMatrix) class has several convenience functions, and can be passed into Lightkurve's corrector objects. _____no_output_____ <code> from lightkurve.correctors import DesignMatrix dm = DesignMatrix(regressors, name='regressors')_____no_output_____dm_____no_output_____ </code> As shown above, `dm` is now a design matrix with the same shape as the input pixels. Currently, we have 2,541 pixels that we are using to detrend our light curve against. Rather than using all of the pixels, we can reduce these to their principal components using Principal Component Analysis (PCA). We do this for several reasons: 1. By reducing to a smaller number of vectors, we can remove some of the stochastic noise in our detrending vectors. 2. By reducing to the principal components, we can avoid pixels that have intrinsic variability (for example, from astrophysical long-period variables) that can be confused with the true astrophysical signal of our target. 3. By reducing the number of vectors, our detrending will be faster (although in this case, the detrending will still take seconds). The choice of the number of components is a tricky issue, but in general you should choose a number that is much smaller than the number of vectors._____no_output_____ <code> dm = dm.pca(5)_____no_output_____dm_____no_output_____ </code> Using the `pca()` method, we have now reduced the number of components in our design matrix to five. 
These vectors show a combination of scattered light and spacecraft motion, which makes them suited to detrend our input light curve._____no_output_____ <code> plt.plot(tpf.time.value, dm.values + np.arange(5)*0.2, '.');_____no_output_____ </code> Note: the `DesignMatrix` object provides a convenient `plot()` method to visualize the vectors:_____no_output_____ <code> dm.plot();_____no_output_____ </code> We can now detrend the uncorrected light curve against these vectors. Lightkurve's [RegressionCorrector](https://docs.lightkurve.org/reference/api/lightkurve.correctors.RegressionCorrector.html?highlight=regressioncorrector#lightkurve.correctors.RegressionCorrector) will use linear algebra to find the combination of vectors that makes the input light curve **closest to zero**. To do this, we need one more component; we need an "offset" term, to be able to fit the mean level of the light curve. We can do this by appending a "constant" to our design matrix._____no_output_____ <code> dm = dm.append_constant()_____no_output_____ </code> ## 3. Removing Background Scattered Light Using Linear Regression_____no_output_____Now that we have a design matrix, we only need to pass it into a [lightkurve.Corrector](https://docs.lightkurve.org/reference/api/lightkurve.correctors.corrector.Corrector.html?highlight=lightkurve%20corrector#lightkurve.correctors.corrector.Corrector). To use our design matrix, we can pass it to the [RegressionCorrector](https://docs.lightkurve.org/reference/api/lightkurve.correctors.RegressionCorrector.html?highlight=regressioncorrector), which will detrend the input light curve against the vectors we've built._____no_output_____ <code> from lightkurve.correctors import RegressionCorrector corrector = RegressionCorrector(uncorrected_lc)_____no_output_____corrector_____no_output_____ </code> To correct the light curve, we pass in our design matrix._____no_output_____ <code> corrected_lc = corrector.correct(dm)_____no_output_____ </code> Now we can plot the results:_____no_output_____ <code> ax = uncorrected_lc.plot(label='Original light curve') corrected_lc.plot(ax=ax, label='Corrected light curve');_____no_output_____ </code> As shown above, the scattered light from the background has been removed. If we want to take a more in-depth look at the correction, we can use the [diagnose()](https://docs.lightkurve.org/reference/api/lightkurve.correctors.RegressionCorrector.diagnose.html?highlight=diagnose#lightkurve.correctors.RegressionCorrector.diagnose) method to see what the `RegressionCorrector` found as the best fitting solution._____no_output_____## 4. Diagnosing the Correction_____no_output_____ <code> corrector.diagnose();_____no_output_____ </code> The `RegressionCorrector` has clipped out some outliers during the fit of the trend. You can read more about the outlier removal, how to pass a cadence mask, and error propagation in the [docs](https://docs.lightkurve.org/reference/api/lightkurve.correctors.RegressionCorrector.html?highlight=regressioncorrector)._____no_output_____**Watch Out!** The `RegressionCorrector` assumes that you want to remove the trend and set the light curve to the **mean** level of the **uncorrected light curve**. This isn't true for *TESS* scattered light. *TESS* FFI light curves have **additive background**, and so we want to reduce the flux to the lowest recorded level, assuming that at that point the contribution from scattered light is approximately zero. 
To do this, we will first need to look at the model of the background that `RegressionCorrector` built. We can access that in the `corrector` object._____no_output_____ <code> corrector.model_lc_____no_output_____model = corrector.model_lc model.plot();_____no_output_____ </code> As you can see above, the model drops below zero flux. This is impossible; the scattered light can't be removing flux from our target! To rectify this, we can subtract the model flux value at the 5th percentile._____no_output_____ <code> # Normalize to the 5th percentile of model flux model -= np.percentile(model.flux, 5)_____no_output_____model.plot();_____no_output_____ </code> This looks better. Now we can remove this model from our uncorrected light curve._____no_output_____ <code> corrected_lc = uncorrected_lc - model_____no_output_____ax = uncorrected_lc.plot(label='Original light curve') corrected_lc.plot(ax=ax, label='Corrected light curve');_____no_output_____ </code> This looks great. As a final test, let's investigate how the light curve we obtained using `RegressionCorrector` compares against a light curve obtained using a more basic median background removal method._____no_output_____ <code> bkg = np.median(regressors, axis=1) bkg -= np.percentile(bkg, 5) npix = aper.sum() median_subtracted_lc = uncorrected_lc - npix * bkg ax = median_subtracted_lc.plot(label='Median background subtraction') corrected_lc.plot(ax=ax, label='RegressionCorrector');_____no_output_____ </code> Lastly, let's show how you can do all of the above in a single cell._____no_output_____ <code> # Make an aperture mask and an uncorrected light curve aper = tpf.create_threshold_mask() uncorrected_lc = tpf.to_lightcurve(aperture_mask=aper) # Make a design matrix and pass it to a linear regression corrector dm = DesignMatrix(tpf.flux[:, ~aper], name='regressors').pca(5).append_constant() rc = RegressionCorrector(uncorrected_lc) corrected_ffi_lc = rc.correct(dm) # Optional: Remove the scattered light, allowing for the large offset from scattered light corrected_ffi_lc = uncorrected_lc - rc.model_lc + np.percentile(rc.model_lc.flux, 5)_____no_output_____ax = uncorrected_lc.plot(label='Original light curve') corrected_ffi_lc.plot(ax=ax, label='Corrected light curve');_____no_output_____ </code> ## 5. Using `RegressionCorrector` on *TESS* Two-Minute Cadence Target Pixel Files *TESS* releases high-time resolution TPFs of interesting targets. These higher time resolution TPFs have background removed for users by the pipeline. However, there are still common trends in TPF pixels that are not due to scattered light, but could be from, for example, spacecraft motion. 
`RegressionCorrector` can be used in exactly the same way to remove these common trends._____no_output_____ <code> # Download a 2-minute cadence Target Pixel File (TPF) tpf_2min = lk.search_targetpixelfile(target, author='SPOC', cadence=120, sector=15).download()_____no_output_____tpf_2min_____no_output_____ </code> Note, unlike the FFI data, the TPF has been processed by the SPOC pipeline, and includes an aperture mask._____no_output_____ <code> # Use the pipeline aperture and an uncorrected light curve aper = tpf_2min.pipeline_mask uncorrected_lc = tpf_2min.to_lightcurve() # Make a design matrix dm = DesignMatrix(tpf_2min.flux[:, ~aper], name='pixels').pca(5).append_constant() # Regression Corrector Object reg = RegressionCorrector(uncorrected_lc) corrected_lc = reg.correct(dm)_____no_output_____ax = uncorrected_lc.errorbar(label='Original light curve') corrected_lc.errorbar(ax=ax, label='Corrected light curve');_____no_output_____ </code> As you can see, the corrected light curve has removed long-term trends and some motion noise, for example, see time around 1720 Barycentric *TESS* Julian Date (BTJD). We can use the same `diagnose()` method to understand the model that has been fit and subtracted by `RegressionCorrector`._____no_output_____ <code> reg.diagnose();_____no_output_____ </code> To show the corrected version has improved, we can use the Combined Differential Photometric Precision (CDPP) metric. As shown below, the corrected light curve has a lower CDPP, showing it is less noisy._____no_output_____ <code> uncorrected_lc.estimate_cdpp()_____no_output_____corrected_lc.estimate_cdpp()_____no_output_____ </code> ## 6. Should I use `RegressionCorrector` or `PLDCorrector`? In addition to the corrector demonstrated in this tutorial, Lightkurve has a special case of `RegressionCorrector` called [PLDCorrector](https://docs.lightkurve.org/reference/api/lightkurve.correctors.PLDCorrector.html?highlight=pldcorrector). PLD, or Pixel Level Decorrelation, is a method of removing systematic noise from light curves using linear regression, with a design matrix constructed from a combination of pixel-level light curves. For more information about the `PLDCorrector`, please see the tutorial specifically on removing instrumental noise from *K2* and *TESS* light curves using PLD. For *TESS*, the `PLDCorrector` works in a very similar way to the `RegressionCorrector`. The major difference between them is that `PLDCorrector` constructs its own design matrix, making it a streamlined, versatile tool to apply to any *TESS* or *K2* light curve. Here, we perform PLD and diagnose the correction in just three lines. To make a more direct comparison to the `RegressionCorrector`, we pass in arguments to set the number of components to five (as in section 2), as well as remove the spline fit._____no_output_____ <code> from lightkurve.correctors import PLDCorrector pld = PLDCorrector(tpf) pld_corrected_lc = pld.correct(restore_trend=False, pca_components=5) pld.diagnose();_____no_output_____ </code> And there we go! Now let's compare the performance of these two corrections._____no_output_____ <code> ax = corrected_ffi_lc.normalize().plot(label='RegressionCorrector') pld_corrected_lc.normalize().plot(label='PLDCorrector', ax=ax);_____no_output_____ </code> `PLDCorrector` offers an additional diagnostic plot, named [diagnose_masks](https://docs.lightkurve.org/reference/api/lightkurve.correctors.PLDCorrector.diagnose_masks.html?highlight=diagnose_masks#lightkurve.correctors.PLDCorrector.diagnose_masks). 
This allows you to inspect the pixels that were used to create your design matrix._____no_output_____ <code> pld.diagnose_masks();_____no_output_____ </code> While it is more convenient to apply to light curves and generally works well with default parameters, the `PLDCorrector` is less flexible than the `RegressionCorrector`, which allows you to create your own custom design matrix. However, the `PLDCorrector` also allows you to create "higher order" PLD regressors by taking the products of existing pixel regressors, which improves the performance of corrections to *K2* data (see the paper by [Luger et al. 2016](https://arxiv.org/abs/1607.00524) for more information). When considering which corrector to use, remember that `PLDCorrector` is minimal and designed to be effective at removing both background scattered light from *TESS* and motion noise from *K2*, while `RegressionCorrector` is flexible and gives you more control over the creation of the design matrix and the correction._____no_output_____## About this Notebook **Authors:** Christina Hedges ([email protected]), Nicholas Saunders ([email protected]), Geert Barentsen **Updated On:** 2020-09-28_____no_output_____## Citing Lightkurve and its Dependencies If you use `lightkurve` or its dependencies for published research, please cite the authors. Click the buttons below to copy BibTeX entries to your clipboard._____no_output_____ <code> lk.show_citation_instructions()_____no_output_____ </code> <img style="float: right;" src="https://raw.githubusercontent.com/spacetelescope/notebooks/master/assets/stsci_pri_combo_mark_horizonal_white_bkgd.png" alt="Space Telescope Logo" width="200px"/>_____no_output_____
{ "repository": "jerobado/lightkurve", "path": "docs/source/tutorials/2-creating-light-curves/2-3-removing-scattered-light-using-regressioncorrector.ipynb", "matched_keywords": [ "STAR" ], "stars": 235, "size": 41340, "hexsha": "486bb6f9dc61e4b004c702260df9fc55bdfc4ae4", "max_line_length": 571, "avg_line_length": 29.7410071942, "alphanum_fraction": 0.6161828737 }
# Notebook from ecuriotto/training-data-analyst Path: quests/rl/a2c/a2c_on_gcp.ipynb # Policy Gradients and A2C In the <a href="../dqn/dqns_on_gcp.ipynb">previous notebook</a>, we learned how to use hyperparameter tuning to help DQN agents balance a pole on a cart. In this notebook, we'll explore two other types of algorithms: Policy Gradients and A2C. ## Setup Hypertuning takes some time, and in this case, it can take anywhere between **10 - 30 minutes**. If this hasn't been done already, run the cell below to kick off the training job now. We'll step through what the code is doing while our agents learn._____no_output_____ <code> %%bash
BUCKET=<your-bucket-here> # Change to your bucket name
JOB_NAME=pg_on_gcp_$(date -u +%y%m%d_%H%M%S)
REGION='us-central1' # Change to your bucket region
IMAGE_URI=gcr.io/cloud-training-prod-bucket/pg:latest

gcloud ai-platform jobs submit training $JOB_NAME \
 --staging-bucket=gs://$BUCKET \
 --region=$REGION \
 --master-image-uri=$IMAGE_URI \
 --scale-tier=BASIC_GPU \
 --job-dir=gs://$BUCKET/$JOB_NAME \
 --config=templates/hyperparam.yaml_____no_output_____ </code> Thankfully, we can use the same environment for these algorithms as DQN, so this notebook will focus less on the operational work of feeding our agents the data, and more on the theory behind these algorithms. Let's start by loading our libraries and environment._____no_output_____ <code> import gym
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras import backend as K

CLIP_EDGE = 1e-8

def print_state(state, step, reward=None):
    format_string = 'Step {0} - Cart X: {1:.3f}, Cart V: {2:.3f}, Pole A: {3:.3f}, Pole V:{4:.3f}, Reward:{5}'
    print(format_string.format(step, *tuple(state), reward))

env = gym.make('CartPole-v0')_____no_output_____ </code> ## The Theory Behind Policy Gradients Whereas Q-learning attempts to assign each state a value, Policy Gradients tries to find actions directly, increasing or decreasing the chance to take an action depending on how an episode plays out. To compare, Q-learning has a table that keeps track of the value of each combination of state and action:

|| Meal | Snack | Wait |
|-|-|-|-|
| Hangry | 1 | .5 | -1 |
| Hungry | .5 | 1 | 0 |
| Full | -1 | -.5 | 1.5 |

Instead, for Policy Gradients, we can imagine that we have a similar table, but instead of recording the values, we'll keep track of the probability to take the column action given the row state.

|| Meal | Snack | Wait |
|-|-|-|-|
| Hangry | 70% | 20% | 10% |
| Hungry | 30% | 50% | 20% |
| Full | 5% | 15% | 80% |

With Q learning, whenever we take one step in our environment, we can update the value of the old state based on the value of the new state plus any rewards we picked up based on the [Q equation](https://en.wikipedia.org/wiki/Q-learning): <img style="background-color:white;" src="https://wikimedia.org/api/rest_v1/media/math/render/svg/47fa1e5cf8cf75996a777c11c7b9445dc96d4637"> Could we do the same thing if we have a table of probabilities instead of values? No, because we don't have a way to calculate the value of each state from our table. Instead, we'll use a different <a href="http://incompleteideas.net/papers/sutton-88-with-erratum.pdf">Temporal Difference Learning</a> strategy. Q Learning is an evolution of TD(0), and for Policy Gradients, we'll use TD(1). We'll calculate TD(1) across an entire episode, and use that to indicate whether to increase or decrease the probability corresponding to the action we took. Let's look at a full day of eating. 
| Hour | State | Action | Reward | |-|-|-|-| |9| Hangry | Wait | -.9 | |10| Hangry | Meal | 1.2 | |11| Full | Wait | .5 | |12| Full | Snack | -.6 | |13| Full | Wait | 1 | |14| Full | Wait | .6 | |15| Full | Wait | .2 | |16| Hungry | Wait | 0 | |17| Hungry | Meal | .4 | |18| Full | Wait| .5 | We'll work backwards from the last day, using the same discount, or `gamma`, as we did with DQNs. The `total_rewards` variable is equivalent to the value of state prime. Using the [Bellman Equation](https://en.wikipedia.org/wiki/Bellman_equation), everytime we calculate the value of a state, s<sub>t</sub>, we'll set that as the value of state prime for the state before, s<sub>t-1</sub>. _____no_output_____ <code> test_gamma = .5 # Please change me to be between zero and one episode_rewards = [-.9, 1.2, .5, -.6, 1, .6, .2, 0, .4, .5] def discount_episode(rewards, gamma): discounted_rewards = np.zeros_like(rewards) total_rewards = 0 for t in reversed(range(len(rewards))): total_rewards = rewards[t] + total_rewards * gamma discounted_rewards[t] = total_rewards return discounted_rewards discount_episode(episode_rewards, test_gamma)_____no_output_____ </code> Wherever our discounted reward is positive, we'll increase the probability corresponding to the action we took. Similarly, wherever our discounted reward is negative, we'll decrease the probabilty. However, with this strategy, any actions with a positive reward will have it's probability increase, not necessarily the most optimal action. This puts us in a feedback loop, where we're more likely to pick less optimal actions which could further increase their probability. To counter this, we'll divide the size of our increases by the probability to choose the corresponding action, which will slow the growth of popular actions to give other actions a chance. Here is our update rule for our neural network, where alpha is our learning rate, and pi is our optimal policy, or the probability to take the optimal action, a<sup>*</sup>, given our current state, s. <img src="images/weight_update.png" width="200" height="100"> Doing some fancy calculus, we can combine the numerator and denominator with a log function. Since it's not clear what the optimal action is, we'll instead use our discounted rewards, or G, to increase or decrease the weights of the respective action the agent took. A full breakdown of the math can be found in [this article by Chris Yoon](https://medium.com/@thechrisyoon/deriving-policy-gradients-and-implementing-reinforce-f887949bd63). <img src="images/weight_update_calculus.png" width="300" height="150"> Below is what it looks like in code. `y_true` is the [one-hot encoding](https://en.wikipedia.org/wiki/One-hot) of the action that was taken. `y_pred` is the probabilty to take each action given the state the agent was in._____no_output_____ <code> def custom_loss(y_true, y_pred): y_pred_clipped = K.clip(y_pred, CLIP_EDGE, 1-CLIP_EDGE) log_likelihood = y_true * K.log(y_pred_clipped) return K.sum(-log_likelihood*g)_____no_output_____ </code> We won't have the discounted rewards, or `g`, when our agent is acting in the environment. No problem, we'll have one neural network with two types of pathways. One pathway, `predict`, will be the probability to take an action given an inputed state. It's only used for prediction and is not used for backpropogation. The other pathway, `policy`, will take both a state and a discounted reward, so it can be used for training. The code in its entirety looks like this. 
As with Deep Q Networks, the hidden layers of a Policy Gradient can use a CNN if the input state is pixels, but the last layer is typically a [Dense](https://keras.io/layers/core/) layer with a [Softmax](https://en.wikipedia.org/wiki/Softmax_function) activation function to convert the output into probabilities._____no_output_____ <code> def build_networks( state_shape, action_size, learning_rate, hidden_neurons): """Creates a Policy Gradient Neural Network. Creates a two hidden-layer Policy Gradient Neural Network. The loss function is altered to be a log-likelihood function weighted by the discounted reward, g. Args: space_shape: a tuple of ints representing the observation space. action_size (int): the number of possible actions. learning_rate (float): the nueral network's learning rate. hidden_neurons (int): the number of neurons to use per hidden layer. """ state_input = layers.Input(state_shape, name='frames') g = layers.Input((1,), name='g') hidden_1 = layers.Dense(hidden_neurons, activation='relu')(state_input) hidden_2 = layers.Dense(hidden_neurons, activation='relu')(hidden_1) probabilities = layers.Dense(action_size, activation='softmax')(hidden_2) def custom_loss(y_true, y_pred): y_pred_clipped = K.clip(y_pred, CLIP_EDGE, 1-CLIP_EDGE) log_lik = y_true*K.log(y_pred_clipped) return K.sum(-log_lik*g) policy = models.Model( inputs=[state_input, g], outputs=[probabilities]) optimizer = tf.keras.optimizers.Adam(lr=learning_rate) policy.compile(loss=custom_loss, optimizer=optimizer) predict = models.Model(inputs=[state_input], outputs=[probabilities]) return policy, predict_____no_output_____ </code> Let's get a taste of how these networks function. Run the below cell to build our test networks._____no_output_____ <code> space_shape = env.observation_space.shape action_size = env.action_space.n # Feel free to play with these test_learning_rate = .2 test_hidden_neurons = 10 test_policy, test_predict = build_networks( space_shape, action_size, test_learning_rate, test_hidden_neurons)_____no_output_____ </code> We can't use the policy network until we build our learning function, but we can feed a state to the predict network so we can see our chances to pick our actions._____no_output_____ <code> state = env.reset() test_predict.predict(np.expand_dims(state, axis=0))_____no_output_____ </code> Right now, the numbers should be close to `[.5, .5]`, with a little bit of variance due to the randomization of initializing the weights and the cart's starting position. In order to train, we'll need some memories to train on. The memory buffer here is simpler than DQN, as we don't have to worry about random sampling. We'll clear the buffer every time we train as we'll only hold one episode's worth of memory._____no_output_____ <code> class Memory(): """Sets up a memory replay buffer for Policy Gradient methods. Args: gamma (float): The "discount rate" used to assess TD(1) values. """ def __init__(self, gamma): self.buffer = [] self.gamma = gamma def add(self, experience): """Adds an experience into the memory buffer. Args: experience: a (state, action, reward) tuple. """ self.buffer.append(experience) def sample(self): """Returns the list of episode experiences and clears the buffer. 
Returns: (list): A tuple of lists with structure ( [states], [actions], [rewards] } """ batch = np.array(self.buffer).T.tolist() states_mb = np.array(batch[0], dtype=np.float32) actions_mb = np.array(batch[1], dtype=np.int8) rewards_mb = np.array(batch[2], dtype=np.float32) self.buffer = [] return states_mb, actions_mb, rewards_mb_____no_output_____ </code> Let's make a fake buffer to get a sense of the data we'll be training on. The cell below initializes our memory and runs through one episode of the game by alternating pushing the cart left and right. Try running it to see the data we'll be using for training._____no_output_____ <code> test_memory = Memory(test_gamma) actions = [x % 2 for x in range(200)] state = env.reset() step = 0 episode_reward = 0 done = False while not done and step < len(actions): action = actions[step] # In the future, our agents will define this. state_prime, reward, done, info = env.step(action) episode_reward += reward test_memory.add((state, action, reward)) step += 1 state = state_prime test_memory.sample()_____no_output_____ </code> Ok, time to start putting together the agent! Let's start by giving it the ability to act. Here, we don't need to worry about exploration vs exploitation because we already have a random chance to take each of our actions. As the agent learns, it will naturally shift from exploration to exploitation. How conveient!_____no_output_____ <code> class Partial_Agent(): """Sets up a reinforcement learning agent to play in a game environment.""" def __init__(self, policy, predict, memory, action_size): """Initializes the agent with Policy Gradient networks and memory sub-classes. Args: policy: The policy network created from build_networks(). predict: The predict network created from build_networks(). memory: A Memory class object. action_size (int): The number of possible actions to take. """ self.policy = policy self.predict = predict self.action_size = action_size self.memory = memory def act(self, state): """Selects an action for the agent to take given a game state. Args: state (list of numbers): The state of the environment to act on. Returns: (int) The index of the action to take. """ # If not acting randomly, take action with highest predicted value. state_batch = np.expand_dims(state, axis=0) probabilities = self.predict.predict(state_batch)[0] action = np.random.choice(self.action_size, p=probabilities) return action_____no_output_____ </code> Let's see the act function in action. First, let's build our agent._____no_output_____ <code> test_agent = Partial_Agent(test_policy, test_predict, test_memory, action_size)_____no_output_____ </code> Next, run the below cell a few times to test the `act` method. Is it about a 50/50 chance to push right instead of left?_____no_output_____ <code> action = test_agent.act(state) print("Push Right" if action else "Push Left")Push Right </code> Now for the most important part. We need to give our agent a way to learn! To start, we'll [one-hot encode](https://en.wikipedia.org/wiki/One-hot) our actions. Since the output of our network is a probability for each action, we'll have a 1 corresponding to the action that was taken and 0's for the actions we didn't take. That doesn't give our agent enough information on whether the action that was taken was actually a good idea, so we'll also use our `discount_episode` to calculate the TD(1) value of each step within the episode. 
One thing to note, is that CartPole doesn't have any negative rewards, meaning, even if it does terribly, the agent will still think the run went well. To help counter this, we'll take the mean and standard deviation of our discounted rewards, or `discount_mb`, and use that to find the [Standard Score](https://en.wikipedia.org/wiki/Standard_score) for each discounted reward. With this, steps close to dropping the poll will have a negative reward._____no_output_____ <code> def learn(self, print_variables=False): """Trains a Policy Gradient policy network based on stored experiences.""" state_mb, action_mb, reward_mb = self.memory.sample() # One hot enocde actions actions = np.zeros([len(action_mb), self.action_size]) actions[np.arange(len(action_mb)), action_mb] = 1 if print_variables: print("action_mb:", action_mb) print("actions:", actions) # Apply TD(1) and normalize discount_mb = discount_episode(reward_mb, self.memory.gamma) discount_mb = (discount_mb - np.mean(discount_mb)) / np.std(discount_mb) if print_variables: print("reward_mb:", reward_mb) print("discount_mb:", discount_mb) return self.policy.train_on_batch([state_mb, discount_mb], actions) Partial_Agent.learn = learn test_agent = Partial_Agent(test_policy, test_predict, test_memory, action_size)_____no_output_____ </code> Try adding in some print statements to the code above to get a sense of how the data is transformed before feeding it into the model, then run the below code to see it in action._____no_output_____ <code> state = env.reset() done = False while not done: action = test_agent.act(state) state_prime, reward, done, _ = env.step(action) test_agent.memory.add((state, action, reward)) # New line here state = state_prime test_agent.learn(print_variables=True)action_mb: [0 1 1 0 1 1 1 0 1 0 0 0 0 0 1 0 1 1 0 0 1 0 1 0 0 1 0 1 1 0 1 0 0 1 1 1 0 0 1 1 1 0 1 0 0 0 1 0 1 1 0 1 0 0 0 0 1 0 1 0 0 1 1 1 1] actions: [[1. 0.] [0. 1.] [0. 1.] [1. 0.] [0. 1.] [0. 1.] [0. 1.] [1. 0.] [0. 1.] [1. 0.] [1. 0.] [1. 0.] [1. 0.] [1. 0.] [0. 1.] [1. 0.] [0. 1.] [0. 1.] [1. 0.] [1. 0.] [0. 1.] [1. 0.] [0. 1.] [1. 0.] [1. 0.] [0. 1.] [1. 0.] [0. 1.] [0. 1.] [1. 0.] [0. 1.] [1. 0.] [1. 0.] [0. 1.] [0. 1.] [0. 1.] [1. 0.] [1. 0.] [0. 1.] [0. 1.] [0. 1.] [1. 0.] [0. 1.] [1. 0.] [1. 0.] [1. 0.] [0. 1.] [1. 0.] [0. 1.] [0. 1.] [1. 0.] [0. 1.] [1. 0.] [1. 0.] [1. 0.] [1. 0.] [0. 1.] [1. 0.] [0. 1.] [1. 0.] [1. 0.] [0. 1.] [0. 1.] [0. 1.] [0. 1.]] reward_mb: [1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.] 
discount_mb: [ 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1997064e-01 2.1996979e-01 2.1996894e-01 2.1996723e-01 2.1996383e-01 2.1995701e-01 2.1994337e-01 2.1991611e-01 2.1986157e-01 2.1975248e-01 2.1953431e-01 2.1909796e-01 2.1822527e-01 2.1647990e-01 2.1298915e-01 2.0600766e-01 1.9204469e-01 1.6411872e-01 1.0826679e-01 -3.4370546e-03 -2.2684476e-01 -6.7366016e-01 -1.5672910e+00 -3.3545525e+00 -6.9290757e+00] </code> Finally, it's time to put it all together. Policy Gradient Networks have less hypertuning parameters than DQNs, but since our custom loss constructs a [TensorFlow Graph](https://www.tensorflow.org/api_docs/python/tf/Graph) under the hood, we'll set up lazy execution by wrapping our traing steps in a default graph. By changing `test_gamma`, `test_learning_rate`, and `test_hidden_neurons`, can you help the agent reach a score of 200 within 200 episodes? It takes a little bit of thinking and a little bit of luck. Hover the curser <b title="gamma=.9, learning rate=0.002, neurons=50">on this bold text</b> to see a solution to the challenge._____no_output_____ <code> test_gamma = .5 test_learning_rate = .01 test_hidden_neurons = 100 with tf.Graph().as_default(): test_memory = Memory(test_gamma) test_policy, test_predict = build_networks( space_shape, action_size, test_learning_rate, test_hidden_neurons) test_agent = Partial_Agent(test_policy, test_predict, test_memory, action_size) for episode in range(200): state = env.reset() episode_reward = 0 done = False while not done: action = test_agent.act(state) state_prime, reward, done, info = env.step(action) episode_reward += reward test_agent.memory.add((state, action, reward)) state = state_prime test_agent.learn() print("Episode", episode, "Score =", episode_reward)WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version. Instructions for updating: If using Keras pass *_constraint arguments to layers. 
Episode 0 Score = 11.0 Episode 1 Score = 11.0 Episode 2 Score = 24.0 Episode 3 Score = 27.0 Episode 4 Score = 24.0 Episode 5 Score = 30.0 Episode 6 Score = 24.0 Episode 7 Score = 30.0 Episode 8 Score = 27.0 Episode 9 Score = 33.0 Episode 10 Score = 36.0 Episode 11 Score = 40.0 Episode 12 Score = 34.0 Episode 13 Score = 32.0 Episode 14 Score = 99.0 Episode 15 Score = 61.0 Episode 16 Score = 70.0 Episode 17 Score = 46.0 Episode 18 Score = 90.0 Episode 19 Score = 125.0 Episode 20 Score = 132.0 Episode 21 Score = 154.0 Episode 22 Score = 200.0 Episode 23 Score = 200.0 Episode 24 Score = 200.0 Episode 25 Score = 200.0 Episode 26 Score = 93.0 Episode 27 Score = 188.0 Episode 28 Score = 134.0 Episode 29 Score = 126.0 Episode 30 Score = 157.0 Episode 31 Score = 169.0 Episode 32 Score = 161.0 Episode 33 Score = 128.0 Episode 34 Score = 115.0 Episode 35 Score = 110.0 Episode 36 Score = 167.0 Episode 37 Score = 200.0 Episode 38 Score = 102.0 Episode 39 Score = 109.0 Episode 40 Score = 61.0 Episode 41 Score = 32.0 Episode 42 Score = 14.0 Episode 43 Score = 18.0 Episode 44 Score = 10.0 Episode 45 Score = 11.0 Episode 46 Score = 10.0 Episode 47 Score = 12.0 Episode 48 Score = 17.0 Episode 49 Score = 10.0 Episode 50 Score = 8.0 Episode 51 Score = 9.0 Episode 52 Score = 13.0 Episode 53 Score = 10.0 Episode 54 Score = 9.0 Episode 55 Score = 8.0 Episode 56 Score = 10.0 Episode 57 Score = 9.0 Episode 58 Score = 10.0 Episode 59 Score = 10.0 Episode 60 Score = 10.0 Episode 61 Score = 10.0 Episode 62 Score = 10.0 Episode 63 Score = 9.0 Episode 64 Score = 10.0 Episode 65 Score = 8.0 Episode 66 Score = 10.0 Episode 67 Score = 10.0 Episode 68 Score = 9.0 Episode 69 Score = 9.0 Episode 70 Score = 10.0 Episode 71 Score = 9.0 Episode 72 Score = 9.0 Episode 73 Score = 10.0 Episode 74 Score = 8.0 Episode 75 Score = 11.0 Episode 76 Score = 10.0 Episode 77 Score = 8.0 Episode 78 Score = 10.0 Episode 79 Score = 9.0 Episode 80 Score = 10.0 Episode 81 Score = 9.0 Episode 82 Score = 9.0 Episode 83 Score = 10.0 Episode 84 Score = 8.0 Episode 85 Score = 10.0 Episode 86 Score = 10.0 Episode 87 Score = 9.0 Episode 88 Score = 9.0 Episode 89 Score = 10.0 Episode 90 Score = 9.0 Episode 91 Score = 10.0 Episode 92 Score = 9.0 Episode 93 Score = 9.0 Episode 94 Score = 10.0 Episode 95 Score = 9.0 Episode 96 Score = 11.0 Episode 97 Score = 8.0 Episode 98 Score = 10.0 Episode 99 Score = 8.0 Episode 100 Score = 10.0 Episode 101 Score = 10.0 Episode 102 Score = 9.0 Episode 103 Score = 9.0 Episode 104 Score = 8.0 Episode 105 Score = 10.0 Episode 106 Score = 11.0 Episode 107 Score = 8.0 Episode 108 Score = 10.0 Episode 109 Score = 9.0 Episode 110 Score = 9.0 Episode 111 Score = 9.0 Episode 112 Score = 9.0 Episode 113 Score = 10.0 Episode 114 Score = 11.0 Episode 115 Score = 9.0 Episode 116 Score = 9.0 Episode 117 Score = 8.0 Episode 118 Score = 9.0 Episode 119 Score = 10.0 Episode 120 Score = 10.0 Episode 121 Score = 9.0 Episode 122 Score = 9.0 Episode 123 Score = 9.0 Episode 124 Score = 9.0 Episode 125 Score = 9.0 Episode 126 Score = 8.0 Episode 127 Score = 8.0 Episode 128 Score = 8.0 Episode 129 Score = 8.0 Episode 130 Score = 10.0 Episode 131 Score = 9.0 Episode 132 Score = 9.0 Episode 133 Score = 8.0 Episode 134 Score = 9.0 Episode 135 Score = 10.0 Episode 136 Score = 9.0 Episode 137 Score = 8.0 Episode 138 Score = 8.0 Episode 139 Score = 10.0 Episode 140 Score = 10.0 Episode 141 Score = 10.0 Episode 142 Score = 9.0 Episode 143 Score = 9.0 Episode 144 Score = 10.0 Episode 145 Score = 9.0 Episode 146 Score = 10.0 Episode 147 Score = 9.0 
Episode 148 Score = 9.0 Episode 149 Score = 9.0 Episode 150 Score = 10.0 Episode 151 Score = 9.0 Episode 152 Score = 9.0 Episode 153 Score = 8.0 Episode 154 Score = 9.0 Episode 155 Score = 10.0 Episode 156 Score = 9.0 Episode 157 Score = 8.0 Episode 158 Score = 9.0 Episode 159 Score = 9.0 Episode 160 Score = 10.0 Episode 161 Score = 10.0 Episode 162 Score = 9.0 Episode 163 Score = 10.0 Episode 164 Score = 9.0 Episode 165 Score = 8.0 Episode 166 Score = 10.0 Episode 167 Score = 9.0 Episode 168 Score = 10.0 Episode 169 Score = 10.0 Episode 170 Score = 10.0 Episode 171 Score = 8.0 Episode 172 Score = 10.0 Episode 173 Score = 9.0 Episode 174 Score = 9.0 Episode 175 Score = 9.0 Episode 176 Score = 10.0 Episode 177 Score = 9.0 Episode 178 Score = 9.0 Episode 179 Score = 9.0 Episode 180 Score = 9.0 Episode 181 Score = 9.0 Episode 182 Score = 10.0 Episode 183 Score = 9.0 Episode 184 Score = 9.0 Episode 185 Score = 10.0 Episode 186 Score = 10.0 Episode 187 Score = 8.0 Episode 188 Score = 9.0 Episode 189 Score = 9.0 Episode 190 Score = 8.0 Episode 191 Score = 8.0 Episode 192 Score = 10.0 Episode 193 Score = 9.0 Episode 194 Score = 10.0 Episode 195 Score = 10.0 Episode 196 Score = 9.0 Episode 197 Score = 9.0 Episode 198 Score = 9.0 Episode 199 Score = 10.0 </code> # The Theory Behind Actor - Critic Now that we have the hang of Policy Gradients, let's combine this strategy with Deep Q Agents. We'll have one architecture to rule them all! Below is the setup for our neural networks. There are plenty of ways to go combining the two strategies. We'll be focusing on one varient called A2C, or Advantage Actor Critic. <img src="images/a2c_equation.png" width="300" height="150"> Here's the philosophy: We'll use our critic pathway to estimate the value of a state, or V(s). Given a state-action-new state transition, we can use our critic and the Bellman Equation to calculate the discounted value of the new state, or r + &gamma; * V(s'). Like DQNs, this discounted value is the label the critic will train on. While that is happening, we can subtract V(s) and the discounted value of the new state to get the advantage, or A(s,a). In human terms, how much value was the action the agent took? This is what the actor, or the policy gradient portion or our network, will train on. Too long, didn't read: the critic's job is to learn how to asses the value of a state. The actor's job is to assign probabilities to it's available actions such that it increases its chance to move into a higher valued state. Below is our new `build_networks` function. Each line has been tagged with whether it comes from Deep Q Networks (`# DQN`), Policy Gradients (`# PG`), or is something new (`# New`)._____no_output_____ <code> def build_networks(state_shape, action_size, learning_rate, critic_weight, hidden_neurons, entropy): """Creates Actor Critic Neural Networks. Creates a two hidden-layer Policy Gradient Neural Network. The loss function is altered to be a log-likelihood function weighted by an action's advantage. Args: space_shape: a tuple of ints representing the observation space. action_size (int): the number of possible actions. learning_rate (float): the nueral network's learning rate. critic_weight (float): how much to weigh the critic's training loss. hidden_neurons (int): the number of neurons to use per hidden layer. entropy (float): how much to enourage exploration versus exploitation. 
""" state_input = layers.Input(state_shape, name='frames') advantages = layers.Input((1,), name='advantages') # PG, A instead of G # PG actor_1 = layers.Dense(hidden_neurons, activation='relu')(state_input) actor_2 = layers.Dense(hidden_neurons, activation='relu')(actor_1) probabilities = layers.Dense(action_size, activation='softmax')(actor_2) # DQN critic_1 = layers.Dense(hidden_neurons, activation='relu')(state_input) critic_2 = layers.Dense(hidden_neurons, activation='relu')(critic_1) values = layers.Dense(1, activation='linear')(critic_2) def actor_loss(y_true, y_pred): # PG y_pred_clipped = K.clip(y_pred, CLIP_EDGE, 1-CLIP_EDGE) log_lik = y_true*K.log(y_pred_clipped) entropy_loss = y_pred * K.log(K.clip(y_pred, CLIP_EDGE, 1-CLIP_EDGE)) # New return K.sum(-log_lik * advantages) - (entropy * K.sum(entropy_loss)) # Train both actor and critic at the same time. actor = models.Model( inputs=[state_input, advantages], outputs=[probabilities, values]) actor.compile( loss=[actor_loss, 'mean_squared_error'], # [PG, DQN] loss_weights=[1, critic_weight], # [PG, DQN] optimizer=tf.keras.optimizers.Adam(lr=learning_rate)) critic = models.Model(inputs=[state_input], outputs=[values]) policy = models.Model(inputs=[state_input], outputs=[probabilities]) return actor, critic, policy_____no_output_____ </code> The above is one way to go about combining both of the algorithms. Here, we're combining training of both pwathways into on operation. Keras allows for the [training against multiple outputs](https://keras.io/models/model/). They can even have their own loss functions as we have above. When minimizing the loss, Keras will take the weighted sum of all the losses, with the weights provided in `loss_weights`. The `critic_weight` is now another hyperparameter for us to tune. We could even have completely separate networks for the actor and the critic, and that type of design choice is going to be problem dependent. Having shared nodes and training between the two will be more efficient to train per batch, but more complicated problems could justify keeping the two separate. The loss function we used here is also slightly different than the one for Policy Gradients. Let's take a look._____no_output_____ <code> def actor_loss(y_true, y_pred): # PG y_pred_clipped = K.clip(y_pred, 1e-8, 1-1e-8) log_lik = y_true*K.log(y_pred_clipped) entropy_loss = y_pred * K.log(K.clip(y_pred, 1e-8, 1-1e-8)) # New return K.sum(-log_lik * advantages) - (entropy * K.sum(entropy_loss))_____no_output_____ </code> We've added a new tool called [entropy](https://arxiv.org/pdf/1912.01557.pdf). We're calculating the [log-likelihood](https://en.wikipedia.org/wiki/Likelihood_function#Log-likelihood) again, but instead of comparing the probabilities of our actions versus the action that was taken, we calculating it for the probabilities of our actions against themselves. Certainly a mouthful, but the idea is to encourage exploration: if our probability prediction is very confident (or close to 1), our entropy will be close to 0. Similary, if our probability isn't confident at all (or close to 0), our entropy will again be zero. Anywhere inbetween, our entropy will be non-zero. This encourages exploration versus exploitation, as the entropy will discourage overconfident predictions. Now that the networks are out of the way, let's look at the `Memory`. We could go with Experience Replay, like with DQNs, or we could calculate TD(1) like with Policy Gradients. This time, we'll do something in between. 
We'll give our memory a `batch_size`. Once there are enough experiences in the buffer, we'll use all the experiences to train and then clear the buffer to start fresh. In order to speed up training, instead of recording state_prime, we'll record the value of state prime in `state_prime_values` or `next_values`. This will give us enough information to calculate the discounted values and advantages._____no_output_____ <code> class Memory(): """Sets up a memory replay for actor-critic training. Args: gamma (float): The "discount rate" used to assess state values. batch_size (int): The number of elements to include in the buffer. """ def __init__(self, gamma, batch_size): self.buffer = [] self.gamma = gamma self.batch_size = batch_size def add(self, experience): """Adds an experience into the memory buffer. Args: experience: (state, action, reward, state_prime_value, done) tuple. """ self.buffer.append(experience) def check_full(self): return len(self.buffer) >= self.batch_size def sample(self): """Returns formated experiences and clears the buffer. Returns: (list): A tuple of lists with structure [ [states], [actions], [rewards], [state_prime_values], [dones] ] """ # Columns have different data types, so numpy array would be awkward. batch = np.array(self.buffer).T.tolist() states_mb = np.array(batch[0], dtype=np.float32) actions_mb = np.array(batch[1], dtype=np.int8) rewards_mb = np.array(batch[2], dtype=np.float32) dones_mb = np.array(batch[3], dtype=np.int8) value_mb = np.squeeze(np.array(batch[4], dtype=np.float32)) self.buffer = [] return states_mb, actions_mb, rewards_mb, dones_mb, value_mb_____no_output_____ </code> Ok, time to build out the agent! The `act` method is the exact same as it was for Policy Gradients. Nice! The `learn` method is where things get interesting. We'll find the discounted future state like we did for DQN to train our critic. We'll then subtract the value of the discount state from the value of the current state to find the advantage, which is what the actor will train on._____no_output_____ <code> class Agent(): """Sets up a reinforcement learning agent to play in a game environment.""" def __init__(self, actor, critic, policy, memory, action_size): """Initializes the agent with DQN and memory sub-classes. Args: network: A neural network created from deep_q_network(). memory: A Memory class object. epsilon_decay (float): The rate at which to decay random actions. action_size (int): The number of possible actions to take. """ self.actor = actor self.critic = critic self.policy = policy self.action_size = action_size self.memory = memory def act(self, state): """Selects an action for the agent to take given a game state. Args: state (list of numbers): The state of the environment to act on. traning (bool): True if the agent is training. Returns: (int) The index of the action to take. """ # If not acting randomly, take action with highest predicted value. 
state_batch = np.expand_dims(state, axis=0) probabilities = self.policy.predict(state_batch)[0] action = np.random.choice(self.action_size, p=probabilities) return action def learn(self, print_variables=False): """Trains the Deep Q Network based on stored experiences.""" gamma = self.memory.gamma experiences = self.memory.sample() state_mb, action_mb, reward_mb, dones_mb, next_value = experiences # One hot enocde actions actions = np.zeros([len(action_mb), self.action_size]) actions[np.arange(len(action_mb)), action_mb] = 1 #Apply TD(0) discount_mb = reward_mb + next_value * gamma * (1 - dones_mb) state_values = self.critic.predict([state_mb]) advantages = discount_mb - np.squeeze(state_values) if print_variables: print("discount_mb", discount_mb) print("next_value", next_value) print("state_values", state_values) print("advantages", advantages) else: self.actor.train_on_batch( [state_mb, advantages], [actions, discount_mb])_____no_output_____ </code> Run the below cell to initialize an agent, and the cell after that to see the variables used for training. Since it's early, the critic hasn't learned to estimate the values yet, and the advatanges are mostly positive because of it. Once the crtic has learned how to properly assess states, the actor will start to see negative advantages. Try playing around with the variables to help the agent see this change sooner._____no_output_____ <code> # Change me please. test_gamma = .9 test_batch_size = 32 test_learning_rate = .02 test_hidden_neurons = 50 test_critic_weight = 0.5 test_entropy = 0.0001 test_memory = Memory(test_gamma, test_batch_size) test_actor, test_critic, test_policy = build_networks( space_shape, action_size, test_learning_rate, test_critic_weight, test_hidden_neurons, test_entropy) test_agent = Agent( test_actor, test_critic, test_policy, test_memory, action_size)_____no_output_____state = env.reset() episode_reward = 0 done = False while not done: action = test_agent.act(state) state_prime, reward, done, _ = env.step(action) episode_reward += reward next_value = test_agent.critic.predict([[state_prime]]) test_agent.memory.add((state, action, reward, done, next_value)) state = state_prime test_agent.learn(print_variables=True)discount_mb [1.0772394 1.1522162 1.0811032 1.0144521 1.0888239 1.1633745 1.0974793 1.0297184 1.1090584 1.0408443 1.1215113 1.2000881 1.133828 1.2161343 1.1495053 1.0845523 1.0276814 1. ] next_value [0.08582156 0.16912904 0.09011465 0.01605788 0.09869321 0.18152726 0.10831045 0.03302046 0.12117595 0.04538257 0.13501257 0.22232015 0.14869787 0.24014927 0.16611691 0.09394698 0.0307571 0.12024279] state_values [[0.00508265] [0.08582155] [0.16912904] [0.09011464] [0.01605787] [0.09869322] [0.1815272 ] [0.10831043] [0.03302046] [0.12117594] [0.04538257] [0.13501257] [0.22232018] [0.14869787] [0.24014927] [0.1661169 ] [0.09394696] [0.0307571 ]] advantages [1.0721568 1.0663947 0.9119742 0.92433745 1.0727661 1.0646813 0.91595215 0.92140794 1.0760379 0.9196684 1.0761287 1.0650756 0.91150784 1.0674365 0.909356 0.9184354 0.9337344 0.96924293] </code> Have a set of variables you're happy with? Ok, time to shine! 
Run the below cell to see how the agent trains._____no_output_____ <code> with tf.Graph().as_default(): test_memory = Memory(test_gamma, test_batch_size) test_actor, test_critic, test_policy = build_networks( space_shape, action_size, test_learning_rate, test_critic_weight, test_hidden_neurons, test_entropy) test_agent = Agent( test_actor, test_critic, test_policy, test_memory, action_size) for episode in range(200): state = env.reset() episode_reward = 0 done = False while not done: action = test_agent.act(state) state_prime, reward, done, _ = env.step(action) episode_reward += reward next_value = test_agent.critic.predict([[state_prime]]) test_agent.memory.add((state, action, reward, done, next_value)) #if test_agent.memory.check_full(): #test_agent.learn(print_variables=True) state = state_prime test_agent.learn() print("Episode", episode, "Score =", episode_reward)Episode 0 Score = 13.0 Episode 1 Score = 13.0 Episode 2 Score = 13.0 Episode 3 Score = 11.0 Episode 4 Score = 10.0 Episode 5 Score = 9.0 Episode 6 Score = 11.0 Episode 7 Score = 9.0 Episode 8 Score = 11.0 Episode 9 Score = 10.0 Episode 10 Score = 10.0 Episode 11 Score = 9.0 Episode 12 Score = 9.0 Episode 13 Score = 9.0 Episode 14 Score = 10.0 Episode 15 Score = 10.0 Episode 16 Score = 9.0 Episode 17 Score = 9.0 Episode 18 Score = 10.0 Episode 19 Score = 10.0 Episode 20 Score = 10.0 Episode 21 Score = 10.0 Episode 22 Score = 9.0 Episode 23 Score = 9.0 Episode 24 Score = 10.0 Episode 25 Score = 10.0 Episode 26 Score = 9.0 Episode 27 Score = 8.0 Episode 28 Score = 9.0 Episode 29 Score = 10.0 Episode 30 Score = 10.0 Episode 31 Score = 10.0 Episode 32 Score = 9.0 Episode 33 Score = 9.0 Episode 34 Score = 10.0 Episode 35 Score = 10.0 Episode 36 Score = 10.0 Episode 37 Score = 10.0 Episode 38 Score = 10.0 Episode 39 Score = 11.0 Episode 40 Score = 9.0 Episode 41 Score = 10.0 Episode 42 Score = 10.0 Episode 43 Score = 9.0 Episode 44 Score = 11.0 Episode 45 Score = 10.0 Episode 46 Score = 9.0 Episode 47 Score = 10.0 Episode 48 Score = 10.0 Episode 49 Score = 9.0 Episode 50 Score = 9.0 Episode 51 Score = 9.0 Episode 52 Score = 9.0 Episode 53 Score = 10.0 Episode 54 Score = 9.0 Episode 55 Score = 9.0 Episode 56 Score = 10.0 Episode 57 Score = 9.0 Episode 58 Score = 10.0 Episode 59 Score = 9.0 Episode 60 Score = 10.0 Episode 61 Score = 9.0 Episode 62 Score = 8.0 Episode 63 Score = 10.0 Episode 64 Score = 11.0 Episode 65 Score = 9.0 Episode 66 Score = 9.0 Episode 67 Score = 10.0 Episode 68 Score = 10.0 Episode 69 Score = 8.0 Episode 70 Score = 10.0 Episode 71 Score = 9.0 Episode 72 Score = 9.0 Episode 73 Score = 9.0 Episode 74 Score = 8.0 Episode 75 Score = 9.0 Episode 76 Score = 8.0 Episode 77 Score = 9.0 Episode 78 Score = 10.0 Episode 79 Score = 9.0 Episode 80 Score = 9.0 Episode 81 Score = 10.0 Episode 82 Score = 11.0 Episode 83 Score = 9.0 Episode 84 Score = 8.0 Episode 85 Score = 9.0 Episode 86 Score = 8.0 Episode 87 Score = 10.0 Episode 88 Score = 10.0 Episode 89 Score = 9.0 Episode 90 Score = 10.0 Episode 91 Score = 10.0 Episode 92 Score = 10.0 Episode 93 Score = 9.0 Episode 94 Score = 10.0 Episode 95 Score = 8.0 Episode 96 Score = 9.0 Episode 97 Score = 10.0 Episode 98 Score = 9.0 Episode 99 Score = 10.0 Episode 100 Score = 9.0 Episode 101 Score = 11.0 Episode 102 Score = 9.0 Episode 103 Score = 9.0 Episode 104 Score = 9.0 Episode 105 Score = 10.0 Episode 106 Score = 9.0 Episode 107 Score = 10.0 Episode 108 Score = 9.0 Episode 109 Score = 9.0 Episode 110 Score = 10.0 Episode 111 Score = 10.0 Episode 112 Score = 9.0 Episode 113 
Score = 9.0 Episode 114 Score = 8.0 Episode 115 Score = 9.0 Episode 116 Score = 9.0 Episode 117 Score = 8.0 Episode 118 Score = 10.0 Episode 119 Score = 8.0 Episode 120 Score = 10.0 Episode 121 Score = 10.0 Episode 122 Score = 9.0 Episode 123 Score = 10.0 Episode 124 Score = 8.0 Episode 125 Score = 9.0 Episode 126 Score = 9.0 Episode 127 Score = 10.0 Episode 128 Score = 10.0 Episode 129 Score = 10.0 Episode 130 Score = 10.0 Episode 131 Score = 8.0 Episode 132 Score = 10.0 Episode 133 Score = 9.0 Episode 134 Score = 9.0 Episode 135 Score = 9.0 Episode 136 Score = 11.0 Episode 137 Score = 9.0 Episode 138 Score = 10.0 Episode 139 Score = 10.0 Episode 140 Score = 9.0 Episode 141 Score = 10.0 Episode 142 Score = 8.0 Episode 143 Score = 10.0 Episode 144 Score = 10.0 Episode 145 Score = 8.0 Episode 146 Score = 9.0 Episode 147 Score = 9.0 Episode 148 Score = 10.0 Episode 149 Score = 10.0 Episode 150 Score = 9.0 Episode 151 Score = 10.0 Episode 152 Score = 8.0 Episode 153 Score = 10.0 Episode 154 Score = 10.0 Episode 155 Score = 9.0 Episode 156 Score = 10.0 Episode 157 Score = 9.0 Episode 158 Score = 9.0 Episode 159 Score = 10.0 Episode 160 Score = 9.0 Episode 161 Score = 9.0 Episode 162 Score = 10.0 Episode 163 Score = 10.0 Episode 164 Score = 9.0 Episode 165 Score = 10.0 Episode 166 Score = 10.0 Episode 167 Score = 8.0 Episode 168 Score = 10.0 Episode 169 Score = 10.0 Episode 170 Score = 10.0 Episode 171 Score = 9.0 Episode 172 Score = 9.0 Episode 173 Score = 11.0 Episode 174 Score = 8.0 Episode 175 Score = 10.0 Episode 176 Score = 9.0 Episode 177 Score = 10.0 Episode 178 Score = 11.0 Episode 179 Score = 9.0 Episode 180 Score = 10.0 Episode 181 Score = 9.0 Episode 182 Score = 9.0 Episode 183 Score = 10.0 Episode 184 Score = 10.0 Episode 185 Score = 10.0 Episode 186 Score = 8.0 Episode 187 Score = 9.0 Episode 188 Score = 9.0 Episode 189 Score = 9.0 Episode 190 Score = 9.0 Episode 191 Score = 10.0 Episode 192 Score = 9.0 Episode 193 Score = 9.0 Episode 194 Score = 9.0 Episode 195 Score = 9.0 Episode 196 Score = 8.0 Episode 197 Score = 9.0 Episode 198 Score = 11.0 Episode 199 Score = 9.0 </code> Any luck? No sweat if not! It turns out that by combining the power of both algorithms, we also combined some of their setbacks. For instance, actor-critic can fall into local minimums like Policy Gradients, and has a large number of hyperparameters to tune like DQNs. Time to check how our agents did [in the cloud](https://console.cloud.google.com/ai-platform/jobs)! Any lucky winners? Find it in [your bucket](https://console.cloud.google.com/storage/browser) to watch a recording of it play._____no_output_____Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License._____no_output_____
{ "repository": "ecuriotto/training-data-analyst", "path": "quests/rl/a2c/a2c_on_gcp.ipynb", "matched_keywords": [ "evolution" ], "stars": 4, "size": 65695, "hexsha": "486df6aab51df20493f9f4cbdcc3d6c2935865c1", "max_line_length": 484, "avg_line_length": 41.6582117945, "alphanum_fraction": 0.5599208463 }
# Notebook from suyunu/AL-BNMF Path: Aktif-Ogrenme-BNMF.ipynb # Active Selection of Elements for Bayesian Nonnegative Matrix Factorization <br> <center> Burak Suyunu, Gönül Aycı, A.Taylan Cemgil </center> <center> * Department of Computer Engineering, Boğaziçi University </center> <center> * {burak.suyunu, gonul.ayci, taylan.cemgil}@boun.edu.tr </center> ## Abstract In classical matrix completion problems, the matrix at hand can be divided into two groups of elements: observed and unknown. In the approach taken in this study, the matrices instead consist of three different groups: observed data that is known and accessible at any time at no cost, unknown data that we are trying to predict, and data that is currently unknown but can be queried when desired. Querying an element in the last group for the first time incurs a cost. Based on this observation, we want to estimate the values of the unknown data in the second group with the least error while making as few queries as possible. Our goal is to choose the observations to be queried wisely. In this study, observation-order selection strategies are defined and compared on the MovieLens dataset._____no_output_____## Introduction Nonnegative matrix factorization (NMF) was first proposed by Lee and Seung [1]. Nonnegative matrix factorization decomposes a given nonnegative matrix *X* into factors *T* and *V* that also contain only nonnegative values. The product of the two resulting factors is approximately equal to the matrix being factorized. Factorization-based structures are widely used in recommender systems, and the logic of such a system can be explained as follows. Let a matrix *X* denote our user-movie rating data collected over a certain period of time. The rows of this matrix represent movies, the columns represent users, and the elements represent the ratings that users have given to the movies. If an element of *X* has no value, there is not yet any interaction between that user and that movie, i.e. the user has not rated that movie. Users can be asked interactively for their rating of a movie, but it is not possible for everyone to rate every movie at all times. Therefore, a smart strategy is needed when asking a person for their rating of a movie. In this way, by requesting information from as few people as possible, we aim to recover the user-movie pattern from the collected information and improve our predictions for the data we do not know. 
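To make the factorization idea concrete before moving on, here is a minimal sketch that is not part of the original study: it approximates a small toy rating matrix with scikit-learn's `NMF`, whereas the paper itself uses a Bayesian NMF model with a Gibbs sampler (introduced below). The toy matrix, the rank `I`, and the solver settings are illustrative assumptions only, and note that plain NMF treats the zeros as observed values rather than missing entries, which is exactly the limitation the masked Bayesian model below addresses.

<code>
import numpy as np
from sklearn.decomposition import NMF

# Toy nonnegative rating-like matrix (5 movies x 4 users); zeros stand for "not rated yet".
X = np.array([[5., 4., 0., 1.],
              [4., 0., 0., 1.],
              [1., 1., 0., 5.],
              [1., 0., 5., 4.],
              [0., 1., 5., 4.]])

I = 2  # assumed number of latent components (the rank of the factorization)
model = NMF(n_components=I, init='random', random_state=0, max_iter=500)
T = model.fit_transform(X)   # movies x components, all entries nonnegative
V = model.components_        # components x users, all entries nonnegative

# The product of the two factors approximately reconstructs the original matrix: X ~ T V
print(np.round(T @ V, 2))
</code>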
<img src="senaryo.png" width="400px"> <figcaption>Şekil 1: **Senaryonun işlenişi:** Burada mavi hücreler maskelenmiş, kırmızılar test ve beyazlar ise gözlemlenmiş veriyi göstermektedir. Kırmızının koyuluğu hatanın fazlalığına işaret etmektedir. Bir veri gözlemlendiginde en çok bulunduğu satır ve sütun hakkında bilgi vermesi beklenmektedir. *Senaryo B*’nin test hücreleri hakkında verdiği bilgi *Senaryo A*’dan fazla olduğu görülmektedir.</figcaption> **Senaryo:** Elimizde bir *X* matrisimiz olsun. Verilen *X* matrisine Gibbs örnekleyicisi ile negatif olmayan matris ayrıştırması yöntemi kullanarak yaklaşım yapıyoruz. Matrisimizi bildiğimiz, bilmediğimiz (tahmin etmeye çalıştığımız) ve zaman içinde açarak gözlemleyeceğimiz (maskelenmiş) üç çeşit veriden oluşturuyoruz. Üçüncü kategorideki veriden belli yığınlarda ve minimum sayıda veri açarak maksimum bilgi edinmeyi ve en iyi tahminleme yapmayı hedefliyoruz. Bu gruptaki veriyi nasıl seçecegimiz konusunda tanımladığımız çeşitli gözlem sırası seçme stratejilerimizi karşılaştırıyoruz._____no_output_____ <code> import numpy as np import scipy as sp import math import time from scipy import special from scipy.stats import gamma from scipy.stats import entropy from scipy.integrate import simps import matplotlib.pyplot as plt import matplotlib from sklearn import preprocessing import random import pandas as pd from IPython.html.widgets import * import operatorC:\Users\Burki\Anaconda3\lib\site-packages\IPython\html.py:14: ShimWarning: The `IPython.html` package has been deprecated. You should import from `notebook` instead. `IPython.html.widgets` has moved to `ipywidgets`. "`IPython.html.widgets` has moved to `ipywidgets`.", ShimWarning) </code> ## MovieLens Veri Seti <cite>[MovieLens][1]</cite> veri setinde 700 kullanıcının 9000 filme vermiş olduğu 0,5 ile 5 puan arasında degişen toplam 100.000 oy bulunmaktadır. Modelimizdeki çok terimli dagılıma uygun bir girdi oluşturmak için oy puan aralıgını 0,5-5 ten 1-10 aralığına eşledik. [1]:https://grouplens.org/datasets/movielens/latest/ _____no_output_____ <code> df_MovieLens = pd.read_csv('ratings.csv') df_MovieLens_____no_output_____df_MovieLens.describe()_____no_output_____movieWatchedCount = {} userWatchCount = {}_____no_output_____for m in df_MovieLens.movieId.unique(): movieWatchedCount[m] = len(df_MovieLens[df_MovieLens.movieId == m]) for u in df_MovieLens.userId.unique(): userWatchCount[u] = len(df_MovieLens[df_MovieLens.userId == u])_____no_output_____sorted_movieWatchedCount = sorted(movieWatchedCount.items(), key=operator.itemgetter(1), reverse=True) sorted_userWatchCount = sorted(userWatchCount.items(), key=operator.itemgetter(1), reverse=True)_____no_output_____ </code> ### Veri Matrisinin Oluşturulması Yoğun veya seyrek matris oluşturma isteğinize göre alttaki hücrelerden sadece birini çalıştırın. 
<br> Oluşturduğumuz matrisin satırları filmleri, sütunları kullanıcıları, elemanları ise kullanıcıların filmlere vermiş olduğu puanlamaları temsil etmektedir._____no_output_____#### Yoğun Matrisin Üretilmesi Yoğun veri seti için MovieLens’ten en çok film izlemiş 20 kullanıcı ve en çok izlenen 50 filmi kullandık._____no_output_____ <code> # Number of Rows # nu: 1 -> W movieCount = 50 # Number of Columns # tao: 1 -> K userCount = 20 topMovies = np.asarray(sorted_movieWatchedCount)[:movieCount,0] topUsers = np.asarray(sorted_userWatchCount)[:userCount,0] userMovie = np.zeros((movieCount, userCount)) for i, m in enumerate(topMovies): for j, u in enumerate(topUsers): if len(df_MovieLens[(df_MovieLens.userId == u) & (df_MovieLens.movieId == m)]) != 0: userMovie[i][j] = 2*float(df_MovieLens[(df_MovieLens.userId == u) & (df_MovieLens.movieId == m)]['rating'])_____no_output_____ </code> #### Seyrek Matrisin Üretilmesi Seyrek veri için ise her seferinde rastgele en çok izlenen 1000 filmden 50, en çok film izlemiş 200 kullanıcıdan 20 tanesini kullandık._____no_output_____ <code> # Number of Rows # nu: 1 -> W movieCount = 50 # Number of Columns # tao: 1 -> K userCount = 20 sparsityParameter = 13 movieIndex = np.random.permutation(movieCount*sparsityParameter)[:movieCount] userIndex = np.random.permutation(userCount*sparsityParameter)[:userCount] topMovies = [] topUsers = [] for mI in movieIndex: topMovies.append(sorted_movieWatchedCount[mI][0]) for uI in userIndex: topUsers.append(sorted_userWatchCount[uI][0]) topMovies = np.asarray(topMovies) topUsers = np.asarray(topUsers) userMovie = np.zeros((movieCount, userCount)) for i, m in enumerate(topMovies): for j, u in enumerate(topUsers): if len(df_MovieLens[(df_MovieLens.userId == u) & (df_MovieLens.movieId == m)]) != 0: userMovie[i][j] = 2*float(df_MovieLens[(df_MovieLens.userId == u) & (df_MovieLens.movieId == m)]['rating'])_____no_output_____ </code> ### Maskeleme Eksik veriyi yani $x_{\nu, \tau}$ gözlemlenmemiş değerlerini modellerken *X* matrisi ile aynı boyuta sahip olan $\textit{M} = \left \{ m_{\nu, \tau} \right \}$ *maske matrisi* aşağıdaki gibi tanımlanır: <center> $m_{\nu, \tau} = \left\{\begin{matrix} 0,& x_{\nu, \tau} \text{ gözlemlenmemişse},\\ 1,& x_{\nu, \tau} \text{ gözlemlenmişse}. \end{matrix}\right.$ </center> Bu maskeyi kullanarak olabilirlik fonksiyonunu şu şekilde yazılır: \begin{align*} p(X,S\mid T,V) = \prod_{\nu, \tau}\left ( p\left (x_{\nu, \tau}\mid s_{\nu, 1:I, \tau}\right ) p\left (s_{\nu, 1:I, \tau}\mid t_{\nu, 1:I}, v_{1:I, \tau} \right )\right )^{m_{\nu, \tau}} \end{align*} _____no_output_____### Maskeleme Metodları Bu çalışmamızda veriye *kısmi* ve *tam* olarak iki farklı açıdan yaklaşarak KL ıraksayı ve KOKH metrikleri (ileride açıklanacak) ile karşılaştırmalar yaptık. Kısmi yaklaşımda verimizi %30 test, %69 maskelenmiş, %1 başlangıçta bilinen olarak ayırdık. Maskelenmiş veriyi açtıkça değişen test üzerindeki hatayı ölçtük. 
Tam yaklaşımda ise %99 maskelenmiş, %1 başlangıçta bilinen olarak ayırdık ve hatayı bütün veri üzerinden ölçtük._____no_output_____ <code> def randomMasking(W, K, dataExistance): dataIndices = np.argwhere(dataExistance>0) #test, mask, known mask = np.zeros([W,K]) test = np.copy(dataExistance) np.random.shuffle(dataIndices) for i in range(30*len(dataIndices)//100): test[dataIndices[i][0], dataIndices[i][1]] = 0 for i in range(30*len(dataIndices)//100, 31*len(dataIndices)//100): mask[dataIndices[i][0], dataIndices[i][1]] = 1 return mask, test_____no_output_____ </code> ### GÖZLEM SIRASI SEÇME STRATEJİLERİ Bu çalışmada, maskelenmiş olan veriyi gözlemlemek için beş farklı gözlem sırası seçme stratejisi tanımladık. Amacımız, en hızlı şekilde bütün veriyi ögrenebileceğimiz (yakınsayabilecegimiz) bilgi elde etme stratejisini bulmak ve ögrenme sonucunda test verisi hakkında doğru tahminlemede bulunmak._____no_output_____#### Rastgele Strateji **Tanım**: Rastgele bir pozisyonda yer alan maskelenmiş veri gözlemlenir. <br> Gözlemlenmiş verideki bilgiyi kullanmadan maskelenmiş veriyi açar. Tanımlanan diğer stratejilerin işlevselliğini değerlendirmek için bir alt sınır oluşturur._____no_output_____ <code> def openRandom(W, K, mask, test): openOrder = np.arange(W*K) random.shuffle(openOrder) for i in openOrder: if mask[i//K][i%K] == 0 and test[i//K][i%K] == 1: return i//K, i%K return -1, -1_____no_output_____ </code> #### Satır-Sütun Stratejileri **Tanım**: Satır ve sütunlar en az veri gözlemlenenden en çok veri gözlemlenene dogru sıralanır. Satıra veya sütuna öncelik vererek kesişimlerindeki ilk maskelenmiş veri açılır. <br> Bir veriyi tahmin ederken o veri hakkında en çok bilgi edinebilecegimiz yerler o verinin bulunduğu satır ve sütunundaki diger verilerdir. Düzenli olarak satır ve sütunlardaki gözlemlenmiş veri sayısını artırarak bütün veri hakkındaki tahminimizi iyileştirebiliriz._____no_output_____ <code> def openMaxColMask(W, K, mask, test): colSum = mask.sum(0) rowSum = mask.sum(1) colMins = colSum.argsort() rowMins = rowSum.argsort() for c in colMins: for r in rowMins: if mask[r][c] == 0 and test[r][c] == 1: return r, c return openRandom(W, K, mask, test) _____no_output_____def openMaxRowMask(W, K, mask, test): colSum = mask.sum(0) rowSum = mask.sum(1) colMins = colSum.argsort() rowMins = rowSum.argsort() for r in rowMins: for c in colMins: if mask[r][c] == 0 and test[r][c] == 1: return r, c return openRandom(W, K, mask, test)_____no_output_____ </code> #### Maskelenmiş Verilerin Varyansı Stratejileri **Tanım**: Varyansın en küçük veya en büyük oldugu pozisyondaki veri gözlemlenir.<br> Gibbs örnekleyicisi sonucunda *T* ve *V* matrislerinin yanında bunları oluşturan *Gamma* dağılımı parametreleri de elde ediliyor. Bu parametreleri kullanarak belirlenen sayıda T ve V örnekleyerek tahminler üretilir. Bu tahminlerle, maskelenmiş olan kısımdaki her bir veri tahmininin varyansı hesaplanır. Bir pozisyondaki varyansın büyük olması, o pozisyon için üretilen tahmin değerinin belirsizliğinin fazla oldugunu göstermektedir. 
Küçük varyans ise belirsizliğin az olduğu yerdir._____no_output_____ <code> def openMinVar(W, K, mask, test, xCandidates): var_candidate = np.var(xCandidates, axis=0) ind = var_candidate.flatten().argsort() for i in ind: if mask[i//K][i%K] == 0 and test[i//K][i%K] == 1: return i//K, i%K return -1, -1_____no_output_____def openMaxVar(W, K, mask, test, xCandidates): var_candidate = np.var(xCandidates, axis=0) ind = var_candidate.flatten().argsort()[::-1] for i in ind: if mask[i//K][i%K] == 0 and test[i//K][i%K] == 1: return i//K, i%K return -1, -1_____no_output_____ </code> #### Gözlemlenmiş Satır Varyans Stratejisi **Tanım**: Her satır için o satırdaki açılmış olan veriler üzerinden o satırın varyansı hesaplanır. Satırlar bu varyans hesabına göre büyükten küçüge sıralanır. Sütunlar en az veri gözlemlenenden en çok veri gözlemlenene dogru sıralanır. Satıra öncelik vererek sütunlarla olan kesişimindeki ilk maskelenmiş veri açılır.<br> Veri setimizde satırlar filmlere karşılık gelmektedir. Bu stratejimizde her filme verilen puanın varyansı hesaplanır. Genellikle bir filme benzer puanlar verilmesi beklenir. Burada, varyansı büyük olan filmlerin puan aralığı daha geniştir. Bu filmlere verilen puanların tahmin edilmesi daha zordur._____no_output_____ <code> def openRowVarColMask(W, K, mask, test, xCandidate): mean_candidate = mask*xCandidate rowIndVarSorted = np.argsort(np.nan_to_num(np.nanvar(np.where(mean_candidate!=0,mean_candidate,np.nan), axis=1)))[::-1] colSum = mask.sum(0) colMins = colSum.argsort() for r in rowIndVarSorted: for c in colMins: if mask[r][c] == 0 and test[r][c] == 1: return r, c return openRandom(W, K, mask, test) _____no_output_____ </code> ## Başlatma_____no_output_____### Model seçimi Önerdigimiz yöntemi MovieLens verisi üzerinde deniyoruz. Burada $W = 50$ (satır/film sayısı), $K = 20$ (sütun/kullanıcı sayısı) ve kaynakların sayısı $I = 4$ olarak belirledik. 
Gerçek modelin hiperparametrelerini ise $a^{t} = b^{t} = 1$ ve $a^{v} = b^{v} = 1$ olarak aldık._____no_output_____ <code> # Number of Rows # nu: 1 -> W W = movieCount; # Number of Columns # tao: 1 -> K K = userCount # Number of templates I = 4; # Set prior parameters A_t = np.ones([W,I]) # Shape B_t = np.ones([W,I]) # Scale A_v = np.ones([I,K]) B_v = np.ones([I,K]) # Generate a random template and excitation orgT = T = np.random.gamma(A_t,B_t) orgV = V = np.random.gamma(A_v,B_v) x = userMovie.copy()_____no_output_____#strategyLabels = ["Random", "Row Mask Eq", "Min Var", "Max Var", "Row Var Col Mask"] strategyLabels = ["Rastgele", "Satır-Sütun", "Min Varyans", "Maks Varyans", "Satır Varyans"] strategyColors = ["b","r","y","k","m"] #errorMetricLabels = ["RMSE_Partial", "RMSE_Full", "KL_Partial", "KL_Full"] errorMetricLabels = ["KOKH Kısmi", "KOKH Tam", "KL Iraksayı Kısmi", "KL Iraksayı Tam"] # dataExistance: True if data exist, False othwerwise # mask: True if mask opened, False if masked # test: True if not test data and data exist, False if test data or no data exist # For testing we use dataExistance and test together # For cell opening we use test with mask dataExistance = userMovie > 0 mask, test = randomMasking(W,K,dataExistance) allLikelihood = [] allOpenedCells = [] allDiffRMSEPartial = [] allDiffRMSEFull = [] allDiffKLPartial = [] allDiffKLFull = [] allErrorDiffs = [] allRMSEPartial = [] allRMSEFull = [] allKLPartial = [] allKLFull = [] allError = [] allEstimationVariance = [] allStrategyLabels = [] allStrategyColors = [] KLsampleSize = 1000_____no_output_____ </code> #### Olabilirlik hesabı_____no_output_____ <code> def calculateLikelihood(x, xPredicted, test, dataExistance, W, K): lh = 0 for w in range(W): for k in range(K): lh += (dataExistance[w,k]^test[w,k]) * (x[w,k] * np.log(xPredicted[w,k]) - xPredicted[w,k] - special.gammaln(x[w,k]+1)) return lh_____no_output_____ </code> #### Adayların örneklenmesi_____no_output_____ <code> def sampleCandidates(a_t, b_t, a_v, b_v, sampleSize=100): xCandidates = [] for i in range(sampleSize): T = np.random.gamma(a_t,b_t) V = np.random.gamma(a_v,b_v) xCandidates.append(np.dot(T, V)) return np.asarray(xCandidates)_____no_output_____ </code> #### Varyansın hesaplanması ve örneklenen X adaylarından gelen hata_____no_output_____#### Değerlendirme Metrikleri Yaklaşımımızın performansını değerlendirmek için çeşitli yöntemler bulunmaktadır. Bu çalışmamızda, popüler olan Kullback-Leibler (KL) ıraksayı ve Kök Ortalama Kare Hata (KOKH) olmak üzere iki ayrı metrik kullandık. 
Bu metrikler sırasıyla aşağıdaki gibi tanımlanır: \begin{align*} D_{KL}\left ( X \parallel \widehat{X} \right ) = \sum_{i,j} X\left ( i,j \right ) \log \frac{X\left ( i,j \right )}{\widehat{X}\left ( i,j \right )} \end{align*} \begin{align*} KOKH = \sqrt{\frac{1}{X_{test}}\sum_{i,j}\left ( X(i,j) - \widehat{X}(i,j)\right )^{2}} \end{align*} Burada, $\widehat{X}(i,j)$ önerilen metod tarafından *i* filmi hakkında *j* kullanıcısının verdiği tahmin edilen oylama değerini ve $X_{test}$ ise, toplam test edilen oylama sayısını göstermektedir._____no_output_____ <code> def calculateVarianceAndError(x, test, dataExistance, xCandidates): mean_candidate = np.mean(xCandidates, axis=0) varEst = np.var(xCandidates, axis=0) varEst = 1.0 * (varEst - varEst.min()) / (varEst.max() - varEst.min()) diffMeanEst = abs((dataExistance^test)*mean_candidate - (dataExistance^test)*x) return varEst, diffMeanEst_____no_output_____ </code> #### Adayların dağılıma dönüştürülmesi_____no_output_____ <code> def transformCandidatesToDistributions(candidates, sampleSize, test, dataExistance): candidates = np.round(candidates) candidates = np.minimum(candidates,10*np.ones(candidates.shape)).astype(int) candidates = np.maximum(candidates,np.ones(candidates.shape)).astype(int) candidates = (dataExistance^test) * candidates candidateDistributions = [] for i in range(W): for j in range(K): if candidates[0,i,j] != 0: y = np.bincount(candidates[:,i,j]) c = np.zeros(11) c[:len(y)] += y c = np.maximum(c,0.00000001*np.ones(c.shape)) c /= sampleSize candidateDistributions.append(c[1:]) else: candidateDistributions.append(np.ones(10)) return candidateDistributions_____no_output_____ </code> #### KL ıraksayı hesabı_____no_output_____ <code> def calculateKLdivergence(xCandidates, bestCandidateDistributions, sampleSize, test, dataExistance): xCandidateDistributions = transformCandidatesToDistributions(xCandidates, sampleSize, test, dataExistance) return entropy(np.asarray(xCandidateDistributions).T, np.asarray(bestCandidateDistributions).T).reshape((W,K)) _____no_output_____ </code> ## Gibbs Örnekleyicisi *Monte Carlo* metodları [4, 5] beklentileri tahmin etmek için güçlü hesaplama teknikleridir. *Markov Zinciri Monte Carlo* teknikleri, geçiş çekirdeği $\mathcal{T}$ tarafından tanımlanan bir Markov zincirinden sonraki örnekleri üretir yani $x^{i}$'e şartlanmış $x^{i+1}$ şu şekilde üretilir: \begin{equation} x^{i+1} \sim \mathcal{T}(x\mid x^{i}) \end{equation} İstenen dağılım durgun dağılım olacak şekilde bir $\mathcal{T}$ geçiş çekirdeği tasarlamak. Özellikle kullanışlı ve basit bir prosedür olan Gibbs örnekleyicisi'nde her değişken *tam koşullu dağılımlardan* örneklenir. NOMA modeli için Gibbs örnekleyicisi, \begin{eqnarray} S^{n+1} &\sim & p(S\mid T^{n}, V^{n}, X, \theta), \nonumber \\ T^{n+1} &\sim & p(T\mid V^{n}, S^{n+1}, X, \theta), \\ V^{n+1} &\sim & p(V\mid S^{n+1}, T^{n+1}, X, \theta). 
\nonumber \end{eqnarray} Sabit nokta döngüsü saklı kaynaklar olarak adlandırdığımız $S_{i} = \left \{ s_{\nu,i,\tau} \right \}$ için $(m_{\nu, \tau}=1)$ aşağıdaki gibi bulunur: \begin{eqnarray} q(s_{\nu,1:I,\tau}) &=& \mathcal{M} (s_{\nu,1:I, \tau}; x_{\nu,\tau}, p_{\nu, 1:I, \tau}) \\ p_{\nu, i, \tau} &=& \frac{exp(\left \langle \log t_{\nu,i} \right \rangle + \left \langle \log v_{i,\tau} \right \rangle)}{\sum_{i} exp(\left \langle \log t_{\nu,i} \right \rangle + \left \langle \log v_{i, \tau} \right \rangle)} \\ \end{eqnarray} Şablon $T$ ve katsayı $V$ matrislerinin dağılımları ve onların yeterli istatistikleri Gamma dağılımının özelliklerini takip eder: \begin{eqnarray} q(t_{\nu,i}) &=& \mathcal{G} (t_{\nu,i}; \alpha_{\nu,i}^{t}, \beta_{\nu,i}^{t}) \\ \alpha_{\nu, i}^{t} &=& a^{t} + \sum_{\tau} m_{\nu, \tau} \left \langle s_{\nu, i, \tau} \right \rangle \\ \beta_{\nu, i}^{t} &=& \left ( \frac{a^{t}}{b^{t}} + \sum_{\tau } m_{\nu, \tau} \left \langle v_{i, \tau} \right \rangle \right )^{-1} \\ q(v_{i,\tau}) &=& \mathcal{G} (v_{i,\tau}; \alpha_{i,\tau}^{v}, \beta_{i,\tau}^{v}) \\ \alpha_{i, \tau}^{v} &=& a^{v} + \sum_{\nu } m_{\nu, \tau} \left \langle s_{\nu, i, \tau} \right \rangle \\ \beta_{i, \tau}^{v} &=& \left ( \frac{a^{v}}{b^{v}} + \sum_{\nu } m_{\nu, \tau} \left \langle t_{\nu, i} \right \rangle \right )^{-1} \end{eqnarray} _____no_output_____ <code> def gibbsSampler(x, T, V, maskX, MAXITER, likelihood = None, test = None, dataExistance = None): W = T.shape[0] K = V.shape[1] I = T.shape[1] tt = 0 t00 = time.time() for n in range(MAXITER): Tprev = T.copy() Vprev = V.copy() S = np.ones([I,W,K]) t0 = time.time() # Sample Sources TdotV = np.dot(Tprev, Vprev) p = np.einsum('i...,k...',Tprev, np.transpose(Vprev))/np.array([TdotV]*I) for nu in range(W): for tao in range(K): if maskX[nu,tao] == 0: S[:,nu,tao] = 0 else: S[:,nu,tao] = np.random.multinomial(x[nu,tao], p[:,nu,tao], size=1) sigmaT = np.transpose(np.sum(maskX*S, axis=2)) sigmaV = np.sum(maskX*S, axis=1) # Sample Templates a_t = A_t + sigmaT b_t = 1 / ( np.divide(A_t, B_t) + np.dot(maskX, np.transpose(Vprev)) ) T = np.random.gamma(a_t,b_t) # Sample Excitations a_v = A_v + sigmaV b_v = 1 / ( np.divide(A_v, B_v) + np.dot(np.transpose(Tprev),maskX) ) V = np.random.gamma(a_v,b_v) if likelihood != None: likelihood.append(calculateLikelihood(x, np.dot(T, V), test, dataExistance, W, K)) if likelihood == None: return T, V, a_t, b_t, a_v, b_v else: return T, V, a_t, b_t, a_v, b_v, likelihood_____no_output_____ </code> ## En iyi ayrıştırmanın hesaplanması_____no_output_____ <code> t0 = time.time() T = orgT.copy() V = orgV.copy() maskX = dataExistance.copy() MAXITER = 10000 T, V, a_t, b_t, a_v, b_v = gibbsSampler(x, T, V, maskX, MAXITER) bestCandidates = sampleCandidates(a_t, b_t, a_v, b_v, sampleSize=1000) bestCandidatesMean = np.mean(bestCandidates, axis=0) bestCandidateDistributionsFull = transformCandidatesToDistributions(bestCandidates, KLsampleSize, test&False, dataExistance|True) bestCandidateDistributionsPartial = transformCandidatesToDistributions(bestCandidates, KLsampleSize, test, dataExistance) _____no_output_____ </code> ### İlk-ısınma (Burn-in) periyodu 1000 adımlık bir ilk-ısınma devresi uyguladık._____no_output_____ <code> t0 = time.time() T = orgT.copy() V = orgV.copy() maskX = mask.copy() T, V, _, _, _, _ = gibbsSampler(x, T, V, maskX, MAXITER = 1000) modT = T.copy() modV = V.copy() print(time.time()-t0)1.1921601295471191 </code> ### Gibbs örnekleyicisi ile eğitim (daha çok veri açma)_____no_output_____ <code> for 
cellOpenStrategy in range(5): t0 = time.time() likelihood = [] openedCells = [0] diffErrorPartial = [] diffErrorFull = [] diffKLPartial = [] diffKLFull = [] varEst = [] T = modT.copy() V = modV.copy() maskX = mask.copy() cellOpenCount = 10 extraIter = 20 EPOCH = int((dataExistance.sum()-(dataExistance^test).sum()-mask.sum())//cellOpenCount + extraIter) MAXITER = 20 for nn in range(EPOCH): # Apply gibbs sampler if nn >= (dataExistance.sum()-(dataExistance^test).sum()-mask.sum())//cellOpenCount: MAXITER = 50 T, V, a_t, b_t, a_v, b_v, likelihood = gibbsSampler(x, T, V, maskX, MAXITER, likelihood, test, dataExistance) # Take Mean estimate and calculate diff from X xCandidates = sampleCandidates(a_t, b_t, a_v, b_v, sampleSize=100) ve, de = calculateVarianceAndError(bestCandidatesMean, test, dataExistance, xCandidates) varEst.append(ve) diffErrorPartial.append(de) _, de = calculateVarianceAndError(bestCandidatesMean, test&False, dataExistance|True, xCandidates) diffErrorFull.append(de) de = calculateKLdivergence(xCandidates, bestCandidateDistributionsPartial, KLsampleSize, test, dataExistance) diffKLPartial.append(de) de = calculateKLdivergence(xCandidates, bestCandidateDistributionsFull, KLsampleSize, test&False, dataExistance|True) diffKLFull.append(de) # Apply Cell Opening Strategy for co in range(cellOpenCount): if cellOpenStrategy == 0: row, col = openRandom(W, K, maskX, test) elif cellOpenStrategy == 1: if nn % 2 == 0: row, col = openMaxColMask(W, K, maskX, test) else: row, col = openMaxRowMask(W, K, maskX, test) elif cellOpenStrategy == 2: row, col = openMinVar(W, K, maskX, test, xCandidates) elif cellOpenStrategy == 3: row, col = openMaxVar(W, K, maskX, test, xCandidates) elif cellOpenStrategy == 4: row, col = openRowVarColMask(W, K, maskX, test, np.dot(T,V)) else: row, col = (-1, -1) # Remove mask from (row, col) if not (row == -1 and col == -1): maskX[row][col] = 1 openedCells.append((row,col)) allStrategyLabels.append(cellOpenStrategy) allStrategyColors.append(cellOpenStrategy) allLikelihood.append(likelihood) allOpenedCells.append(openedCells) allEstimationVariance.append(varEst) allDiffRMSEPartial.append(diffErrorPartial) allDiffRMSEFull.append(diffErrorFull) allDiffKLPartial.append(diffKLPartial) allDiffKLFull.append(diffKLFull) rmse = np.zeros(len(diffErrorPartial)) for i in range(len(diffErrorPartial)): # RMSE de2 = diffErrorPartial[i]*diffErrorPartial[i] rmse[i] = np.sqrt(np.nanmean(np.where(de2!=0,de2,np.nan))) allRMSEPartial.append(rmse) rmse = np.zeros(len(diffErrorFull)) for i in range(len(diffErrorFull)): # RMSE de2 = diffErrorFull[i]*diffErrorFull[i] rmse[i] = np.sqrt(np.nanmean(np.where(de2!=0,de2,np.nan))) allRMSEFull.append(rmse) kl = np.zeros(len(diffKLPartial)) for i in range(len(diffKLPartial)): kl[i] = np.mean(diffKLPartial[i]) allKLPartial.append(kl) kl = np.zeros(len(diffKLFull)) for i in range(len(diffKLFull)): kl[i] = np.mean(diffKLFull[i]) allKLFull.append(kl) print(strategyLabels[cellOpenStrategy] + " %0.3fs. de tamamlandı." % (time.time() - t0)) allErrorDiffs.append(allDiffRMSEPartial) allErrorDiffs.append(allDiffRMSEFull) allErrorDiffs.append(allDiffKLPartial) allErrorDiffs.append(allDiffKLFull) allError.append(allRMSEPartial) allError.append(allRMSEFull) allError.append(allKLPartial) allError.append(allKLFull)Rastgele compeleted in 25.083s. Satır-Sütun compeleted in 25.324s. Min Varyans compeleted in 25.153s. Maks Varyans compeleted in 25.120s. 
</code> ## Etkileşimli Hata ve Varyans Isı haritası Satır-sütun stratejisi ve KL ıraksayı hata metriği yoğun matris üzerinde kullanılmış bir deney süreci gösterilmektedir. Çıktıda hata ve varyans ısı haritasını görmekteyiz. Bu grafikte, kırmızı ile hata, mavi ile maskelenmiş verinin varyansı ve beyaz ile ise gözlemlenmiş veri gösterilmektedir. Kırmızı ve mavinin tonları hata ve varyansın şiddetini yansıtmaktadır. Yinelemeler süresince veriler gözlemlendikçe maskelenmiş (mavilikler) verinin kaybolduğunu (beyaza dönmesini), hatanın (kırmızılıkların) ise azaldığını grafikten görmekteyiz. Sağdaki grafikte KL ıraksayı kullanılarak elde edilen hata grafiğini ve ona oturtulan polinomu görmekteyiz._____no_output_____ <code> cmap = matplotlib.colors.LinearSegmentedColormap.from_list('my_colormap', ['blue','white', 'red'], 256) chosenStrategy = 1 chosenErrorMetric = 2 rmseHM = allError[chosenErrorMetric][chosenStrategy] matrixHM = allErrorDiffs[chosenErrorMetric][chosenStrategy] openedCellsHM = allOpenedCells[chosenStrategy] varEstHM = allEstimationVariance[chosenStrategy] xAxis = list(range(len(rmseHM))) xp = np.linspace(0, EPOCH-1, 1000) polDeg = 10 p30 = np.poly1d(np.polyfit(xAxis, rmseHM, polDeg)) vmax = rmseHM.max() #matrixHM = diffErrorHM.copy() for i in range(EPOCH-extraIter): #if openedCellsHM[i] == (-1,-1): # break for oc in openedCellsHM[1+(i*cellOpenCount):]: if oc[0] == -1: break matrixHM[i][oc[0],oc[1]] = -varEstHM[i][oc[0],oc[1]]*vmax def pltErrorVarianceHM(rc): fig = plt.figure(figsize=(8,8)) plt.subplot(1, 2, 1) if rc == 0: plt.title("İlk Durum") elif rc <= (EPOCH-extraIter): plt.title("Yineleme Sayısı: " + str(rc)) else: plt.title("Yineleme Sayısı: " + str(rc)) img = plt.imshow(matrixHM[rc],interpolation='nearest', cmap = cmap, origin='lower', vmax = vmax, vmin = -vmax) plt.colorbar(img,cmap=cmap) plt.subplot(1, 2, 2) plt.plot( range(rc+1), rmseHM[:rc+1]) try: xpRC = list(xp).index(xp[xp>=(rc+1)][0]) except: xpRC = list(xp).index(xp[xp>=(rc)][0]) plt.plot( xp[:xpRC], p30(xp)[:xpRC], "-", color='r', linewidth=2) plt.xlim(0,EPOCH) #plt.ylim(0,rmseHM.max()+1) font = {'size' : 15} #plt.title("KL : " + str(p30(xp)[xpRC-1])) plt.xlabel("Yineleme Sayısı", **font) plt.ylabel(errorMetricLabels[chosenErrorMetric], **font) plt.tight_layout() plt.show() interact(pltErrorVarianceHM, rc = (0, EPOCH-1, 1))_____no_output_____ </code> ## Yoğun ve seyrek veri üzerinde, tanımlanmış beş strateji için 10’lu açarak kısmi KL ıraksayı değişim grafiği Stratejilerin performanslarını, eşik değerine ne kadar hızlı ulaştıklarını karşılaştırarak ölçtük. Eşik değerine ulaşma metriği olarak *eğri altında kalan alanı* kullandık. En iyi strateji, eğri altında kalan alanı en az olandır. Hata fonksiyonlarının asıl davranışını gözlemleyebilmek için fonksiyonlara polinom oturttuk. 
Bu polinomların salınımdan daha az etkilenmesi için ise bütün veriler gözlemlendikten sonra 1000 yinelemeli Gibbs örnekleyicisi çalıştırdık._____no_output_____ <code> xAxis = list(range(len(allKLPartial[0]))) xp = np.linspace(0, EPOCH-1, 1000) polDeg = 15 chosenErrorMetric = 2 errorFunction = allError[chosenErrorMetric].copy() thr = 0 for rmse in errorFunction: thr += np.mean(rmse[-15:-5]) thr /= len(errorFunction) fig = plt.figure(figsize=(10,10)) plt.plot(range(EPOCH - extraIter+15), (EPOCH - extraIter+15)*[thr], "--", color='c', label="Eşik: "+str(thr)[:5], linewidth=2) aucTrapz = [] aucSimps = [] xpRC = list(xp).index(xp[xp>=(EPOCH - extraIter+10)][0]) for i, rmse in enumerate(errorFunction): p30 = np.poly1d(np.polyfit(xAxis, rmse, polDeg)) aucTrapz.append(np.trapz(p30(xp)[:xpRC]-thr, x=xp[:xpRC])) aucSimps.append(np.trapz(p30(xp)[:xpRC], x=xp[:xpRC])) print(np.trapz(p30(xp)[:xpRC]-thr, x=xp[:xpRC])) zz = i if i == 1: zz = 10 plt.plot( xp[:xpRC], p30(xp)[:xpRC], "-", label=strategyLabels[allStrategyLabels[i]], color=strategyColors[allStrategyColors[i]], linewidth=2, zorder=zz) plt.xlim(0,) #plt.ylim(0,) font = {'size' : 18} plt.xlabel("Yineleme Sayısı (" + str(cellOpenCount) + "'lu açma)", **font) plt.ylabel(errorMetricLabels[chosenErrorMetric], **font) plt.legend() plt.show() 51.27309890624195 44.32963005477559 81.05946289006324 54.52214938986193 78.09203997285168 </code> ## Vargılar Bu çalışmamızda negatif olmayan matris ayrışımı için eşlenik Gamma önselleri ile hiyerarşik bir model inceledik ve çıkarımlar için Gibbs örnekleyicisi kullandık. Buradan yola çıkarak aktif eleman seçimi [7] problemine çözüm önerdik. Tanımladığımız beş stratejiyi KL ıraksayı ve KOKH metrikleri üzerinden karşılaştırdık. Yaptığımız deneyler ile satır-sütun stratejisinin etkili bir aktif ögrenme tekniği olduğunu gösterdik. Satır-sütun stratejisinin başarısının gözlemlenmemiş veriyi dengeli bir biçimde açıyor olmasına bağlıyoruz. Ayrıca bu stratejinin verinin içeriğinden bağımsız olarak tanımlanmış olması bu stratejiyi diğer alanlara da uygulanabilir kılıyor._____no_output_____## Kaynaklar [1] D. D. Lee and H. S. Seung, "Learning the parts of objects with nonnegative matrix factorization.", Nature, 401:788–791, 1999. [2] A. T. Cemgil, "Bayesian inference in non-negative matrix factorisation models.", Technical Report CUED/FINFENG/TR.609, University of Cambridge, July 2008. Submitted for publication to Computational Intelligence and Neuroscience [3] D. D. Lee and H. S. Seung, "Algorithms for non-negative matrix factorization.", Advances in neural information processing systems. 2001. [4] J. S. Liu, "Monte Carlo Strategies in Scientific Computing", Springer, New York, NY, USA, 2004. [5] W. R. Gilks, S. Richardson, and D. J. Spiegelhalter, Eds., "Markov Chain Monte Carlo in Practice", CRC Press, London, UK, 1996. [6] Harper, F. Maxwell, and Joseph A. Konstan, "The movielens datasets: History and context.", ACM Transactions on Interactive Intelligent Systems (TiiS) 5.4 (2016): 19. [7] Silva, Jorge, and Lawrence Carin. "Active learning for online bayesian matrix factorization.", Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2012._____no_output_____
{ "repository": "suyunu/AL-BNMF", "path": "Aktif-Ogrenme-BNMF.ipynb", "matched_keywords": [ "neuroscience" ], "stars": null, "size": 259491, "hexsha": "486fb291ae36ad81e29290d1976755b523758fa2", "max_line_length": 129404, "avg_line_length": 125.0559036145, "alphanum_fraction": 0.8260671854 }
# Notebook from ageller/IDEAS_FSS-Vis_2017 Path: FinalStudentProjects/2022spring/ErinCox/FinalProject.ipynb <code> # Import needed libraries. import numpy as np import matplotlib.pyplot as plt import pandas as pd from bokeh.plotting import * from bokeh.layouts import row, column from bokeh.models import ColumnDataSource, Scatter, Select, CustomJS, Dropdown, Div from bokeh.models.widgets import DataTable, TableColumn from bokeh.models.tools import HoverTool #from bokeh.palettes import Viridis256 from bokeh.palettes import Category10 output_notebook() output_file("bokehdiskplot.html", title='Protoplanetary Disk Parameters')_____no_output_____# Variables PlotTitle = 'Protoplanetary Disk Parameters' X1 = 'DiskMass' Y1 = 'mmflux' clouds = ['Lupus','USco','ChamI','rOph','ChamII','Taurus','CrA'] #CLOUD_COLOR = Viridis256[len(clouds)] CLOUD_COLOR = Category10[len(clouds)] XAXIS1 = 'log' YAXIS1 = 'linear' # set the boundaries based on range of data BoundaryDict = { 'DiskMass' : (1,14000), 'mmflux': (0,2), 'submmflux': (0,11), 'dist' : (0,200), 'starmass' : (0.02,3.9), 'acc' : (10e-11,10e-6) } AxisTypeDict = { 'DiskMass' : 'log', 'mmflux' : 'linear', 'submmflux' : 'linear', 'dist' : 'linear', 'starmass' : 'linear', 'acc' : 'log' } list_labels = list(AxisTypeDict)_____no_output_____df = pd.read_csv('data/data.csv',index_col=0)_____no_output_____x = list(AxisTypeDict) x[0]_____no_output_____title = Div(text=f'<h1>{PlotTitle}</h1>', align='center', height_policy='min', margin=(-10,0,-10,0))_____no_output_____def createPlot(source,labels): # define the tools you want to use TOOLS = "pan,wheel_zoom,box_zoom,reset,save,box_select,lasso_select" # create a new plot and renderer #for count,label in enumerate(labels): for label in labels: f = figure(tools=TOOLS, width=350, height=350, title=None, x_axis_type='log', y_axis_type='log', y_range=BoundaryDict[label], x_range=BoundaryDict[label]) #renderer = f.scatter(x=label,y=label, source=source, color='black', alpha=0.5, size=5, marker='circle') #for count, cloud in enumerate(clouds): #renderer = f.scatter(x=X1,y=Y1, source=source, color=CLOUD_COLOR[count], alpha=0.5, size=5, marker='circle') renderer = f.scatter(x=X1,y=Y1, source=source, color='black', alpha=0.5, size=5, marker='circle') f.xaxis.axis_label = 'Disk mass [Earth masses]' f.yaxis.axis_label = '1.3 mm Flux [mJy/beam]' #f.xaxis.axis_label = labels[count] #f.yaxis.axis_label = labels[count] # (optional) define different colors for highlighted and non-highlighted markers renderer.selection_glyph = Scatter(fill_alpha=1, fill_color="firebrick", line_color=None) renderer.nonselection_glyph = Scatter(fill_alpha=0.2, fill_color="gray", line_color=None) return f_____no_output_____ def createPlot(source): # define the tools you want to use TOOLS = "pan,wheel_zoom,box_zoom,reset,save,box_select,lasso_select" # create a new plot and renderer f = figure(tools=TOOLS, width=350, height=350, title=None, x_axis_type='linear', y_axis_type='linear', y_range=(0,15), x_range=(0,10e4)) renderer = f.scatter('x','y', source=source, color='black', alpha=0.5, size=5, marker='circle') f.xaxis.axis_label = 'Disk mass [Earth masses]' f.yaxis.axis_label = '1.3 mm Flux [mJy/beam]' # (optional) define different colors for highlighted and non-highlighted markers renderer.selection_glyph = Scatter(fill_alpha=1, fill_color="firebrick", line_color=None) renderer.nonselection_glyph = Scatter(fill_alpha=0.2, fill_color="gray", line_color=None) return f_____no_output_____def createTable(source, table_labels, w=350, h=300): # create a 
table to hold the selections columns = [] for field in labels: columns.append(TableColumn(field=field, title=labels[field])) t = DataTable(source=source, columns=columns, width=w, height=h) return t_____no_output_____def attachSelectionHandlerJS(source): selectionHandlerJS = CustomJS(args=dict(s1=source), code=""" //get the indices //Bokeh creates "cb_obj", which is the object that the callback is attached to. const indices = cb_obj.indices; //execute a command in this notebook to set the indices IPython.notebook.kernel.execute("indices = " + indices); """) # attach the callback to the data source to be run when the selection indices change source.selected.js_on_change("indices", selectionHandlerJS) _____no_output_____def createDropdowns(source, f, labels): options = list(labels.values()) #print('options',options) keys = list(labels.keys()) bounds = []; for k in labels: #bounds.append([min(source.data[k]), max(source.data[k])]) bounds.append([BoundaryDict[k][0],BoundaryDict[k][1]]) xSelect = Select(title="x axis", value=options[0], options=options) ySelect = Select(title="y axis", value=options[1], options=options) callback = CustomJS(args=dict(source=source, keys=keys, options=options, bounds=bounds, axes={"x":f.xaxis[0], "y":f.yaxis[0]}, ranges={"x":f.x_range, "y":f.y_range}), #axistypes={"x":f.x_axis_type[0], "y":f.y_axis_type[0]}), code=""" //get the value from the dropdown //Note: "this" is like Python's "self"; here it will containt the select element. var val = this.value; //now find the index within the options array so that I can find the correct key to use var index = options.indexOf(val); var key = keys[index]; //check which axis this is var ax = "x"; if (this.title == "y axis") ax = "y"; console.log(this.title, ax) //change the data being plotted source.data[ax] = source.data[key]; source.change.emit(); //change the axis label axes[ax].axis_label = val; //change the bounds ranges[ax].start = bounds[index][0]; ranges[ax].end = bounds[index][1]; //change the x and y axies axistypes[ax] = AxisTypeDict[key]; """) #attach the callback to the Select widgets xSelect.js_on_change("value", callback) ySelect.js_on_change("value", callback) return xSelect, ySelect_____no_output_____data = dict(x=df['DiskMass'],y=df['mmflux'], name = df['Source'], ra = df['RA'],dec = df['Dec'], DiskMass = df['DiskMass'], mmflux = df['mmflux'], submmflux = df['submmflux'], dist = df['dist'], starmass = df['starmass'], classification = df['Disk'], cloud = df['Region'], acc = df['acc']) source = ColumnDataSource(data) f = createPlot(source) labels= dict(DiskMass = "disk mass [Earth masses]", mmflux = "1.3 mm Flux [mJy/beam]", submmflux = "0.89 mm Flux [mJy/beam]", dist = "distance [pc]", starmass = "star mass [Solar masses]", #classification = "Disk Class", #cloud = "Cloud", acc = "Accretion Rate [Solar masses/year]") #imagefile = 'images/J16000236.png' #psize = 100 #img = Image.open(imagefile)#.convert('RGBA') #p = figure(match_aspect=True) #p.image_rgba(image=img) #div_image_html = '<img src="/Users/erincox/opt/anaconda3/lib/python3.8/site-packages/bokeh/server/static/J162309.2-241705.png">' div_image_html = '<img src="bokeh/server/static/J162309.2-241705.png">' div_image = Div(text=div_image_html, width=350, height=350) #curdoc().add_root(div_image) #show(div_image) table_labels = dict(ra = "RA", dec= "Dec",DiskMass = "disk mass [Earth masses]", name = "Source", mmflux = "1.3 mm Flux [mJy/beam]", submmflux = "0.89 mm Flux [mJy/beam]", dist = "distance [pc]", starmass = "star mass [Solar masses]", 
classification = "Disk Class", cloud = "Cloud", acc = "Accretion Rate [Solar masses/year]") t = createTable(source, table_labels, w=900) xSelect, ySelect = createDropdowns(source, f, labels) attachSelectionHandlerJS(source) hover = HoverTool() hover.tooltips =""" <div> <h3><center>@name</center></h3> <div><strong>Source: </strong>@name</div> <div><strong>Cloud: </strong>@cloud</div> <div><strong>Distance: </strong>@dist pc </div> <div><strong>Classification: </strong>@classification</div> <div><strong>Disk Mass: </strong>@DiskMass M_Earth</div> <div><strong>Stellar Mass: </strong>@starmass M\u2609</div> </div> """ f.add_tools(hover) layout = column( #row(div_image,f), row(f), row(xSelect,ySelect), row(t) ) # show the plot show(layout) #curdoc().add_root(layout)_____no_output_____ </code>
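To isolate the dropdown mechanics used above, here is a reduced, standalone sketch: a Select widget whose CustomJS callback copies the chosen column into the 'x' field the glyph is bound to and relabels the axis. The column names and toy values are placeholders rather than the real disk catalogue, and the axis-type switching that the notebook leaves commented out is omitted here too.
<code>
from bokeh.layouts import column
from bokeh.models import ColumnDataSource, CustomJS, Select
from bokeh.plotting import figure, show

# toy stand-in for the disk table; column names are illustrative only
source = ColumnDataSource(data=dict(
    x=[1, 10, 100],              # field the scatter glyph is bound to
    DiskMass=[1, 10, 100],
    dist=[140, 160, 190],
    mmflux=[0.2, 0.8, 1.5]))

p = figure(width=350, height=350)
p.scatter('x', 'mmflux', source=source)
p.xaxis.axis_label = 'DiskMass'

select = Select(title="x axis", value="DiskMass", options=["DiskMass", "dist"])
select.js_on_change("value", CustomJS(
    args=dict(source=source, xaxis=p.xaxis[0]),
    code="""
        // copy the chosen column into the plotted 'x' field and relabel the axis
        source.data['x'] = source.data[this.value];
        source.change.emit();
        xaxis.axis_label = this.value;
    """))

show(column(p, select))
</code>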
{ "repository": "ageller/IDEAS_FSS-Vis_2017", "path": "FinalStudentProjects/2022spring/ErinCox/FinalProject.ipynb", "matched_keywords": [ "STAR" ], "stars": 1, "size": 185287, "hexsha": "48707d717a5a444805f074f931da5973cfb685a4", "max_line_length": 150780, "avg_line_length": 247.0493333333, "alphanum_fraction": 0.6983544447 }
# Notebook from Madmaxcoder2612/Programming-Codes Path: day19_recommenderSystem.ipynb <code> import pandas as pd_____no_output_____import matplotlib.pyplot as plt_____no_output_____import warnings warnings.filterwarnings('ignore')_____no_output_____df = pd.read_csv('ml-100k/u.data',sep='\t', names=['user_id','item_id','rating','ts']) df.head()_____no_output_____df.info()<class 'pandas.core.frame.DataFrame'> RangeIndex: 100000 entries, 0 to 99999 Data columns (total 4 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 user_id 100000 non-null int64 1 item_id 100000 non-null int64 2 rating 100000 non-null int64 3 ts 100000 non-null int64 dtypes: int64(4) memory usage: 3.1 MB cols = "item_id|title| release date | video release date |\ IMDb URL | unknown | Action | Adventure | Animation |\ Children's | Comedy | Crime | Documentary | Drama | Fantasy |\ Film-Noir | Horror | Musical | Mystery | Romance | Sci-Fi |\ Thriller | War | Western".split('|') movies = pd.read_csv('ml-100k/u.item',sep='|',names=cols) movies.head()_____no_output_____movies[['item_id','title']].head()_____no_output_____data = pd.merge(df.drop('ts',axis=1),movies[['item_id','title']], on='item_id') data.head()_____no_output_____data.info()_____no_output_____data.describe()_____no_output_____avg_rates = data.groupby('title')['rating'].mean() avg_rates.head(20)_____no_output_____rate_count = data.groupby('title')['rating'].count() rate_count.head(20)_____no_output_____rate_count.sort_values(ascending=False).head(20)_____no_output_____rate_count = data.groupby('title')['rating'].count() rate_count.head(20)_____no_output_____rate_count.sort_values(ascending=False).head(20)_____no_output_____plt.figure(figsize=(18,5)) rate_count.hist(bins=50) t = plt.xticks(range(0,601,20)) _____no_output_____plt.figure(figsize=(18,5)) avg_rates.hist(bins=50)_____no_output_____plt.scatter(x=avg_rates, y=rate_count) plt.xlabel('Avg Rating') plt.ylabel('Number of People rated') plt.grid()_____no_output_____df_pivot = data.pivot_table(index='user_id',columns='title',values='rating') df_pivot.head()_____no_output_____inp = 'Star Wars (1977)'_____no_output_____df_pivot[inp].head()_____no_output_____sim_inp = df_pivot.corrwith(df_pivot[inp])_____no_output_____sim_inp_____no_output_____sim_df = pd.DataFrame(sim_inp,columns=['Correlation']) sim_df.head()_____no_output_____sim_df.sort_values(by='Correlation',ascending=False)_____no_output_____sim_df['count'] = rate_count_____no_output_____sim_df['avg_rates'] = avg_rates_____no_output_____sim_df.head()_____no_output_____sim_df[(sim_df['count']>100)].sort_values('Correlation',ascending=False).head(10)_____no_output_____recom = sim_df[(sim_df['count']>100)].sort_values('Correlation',ascending=False) recom.drop(inp,axis=0, inplace=True) recommended = recom.index[:3] for r in recommended: print(r)_____no_output_____recom.head(3)_____no_output_____ </code>
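The cells above query a single hard-coded title; a small helper along these lines (not part of the original notebook, reusing df_pivot and rate_count from the cells above) wraps the pivot/corrwith/filter steps so any title can be looked up:
<code>
def recommend_similar(title, df_pivot, rate_count, min_ratings=100, top_n=3):
    """Return the top_n titles whose rating columns correlate best with `title`."""
    # correlation of every movie's rating column with the chosen title
    sims = df_pivot.corrwith(df_pivot[title])
    out = pd.DataFrame(sims, columns=['Correlation'])
    out['count'] = rate_count
    # keep frequently rated movies only, mirroring the >100 filter above,
    # and drop the query title itself before returning recommendations
    out = out[out['count'] > min_ratings].sort_values('Correlation', ascending=False)
    out = out.drop(title, errors='ignore')
    return list(out.index[:top_n])

recommend_similar('Star Wars (1977)', df_pivot, rate_count)
</code>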
{ "repository": "Madmaxcoder2612/Programming-Codes", "path": "day19_recommenderSystem.ipynb", "matched_keywords": [ "STAR" ], "stars": null, "size": 23844, "hexsha": "4870d1cad8937f056a8169a9e31321cb325d0b1b", "max_line_length": 1702, "avg_line_length": 46.4795321637, "alphanum_fraction": 0.6036738802 }
# Notebook from haenvely/deep_learning Path: 16.1 Productionize Embeddings.ipynb <code> import requests from bs4 import BeautifulSoup import os import time try: from urllib.request import urlretrieve except ImportError: from urllib import urlretrieve import xml.sax from sklearn import svm import subprocess import mwparserfromhell import json from collections import Counter from itertools import chain import numpy as np import random from keras.models import Model from keras.layers import Embedding, Input, Reshape from keras.layers.merge import Dot from sklearn.linear_model import LinearRegression from sklearn.neighbors import NearestNeighbors import pickle import gensim from sklearn.decomposition import TruncatedSVD import psycopg2_____no_output_____with open('data/wp_movies_10k.ndjson') as fin: movies = [json.loads(l) for l in fin]_____no_output_____link_counts = Counter() for movie in movies: link_counts.update(movie[2]) top_links = [link for link, c in link_counts.items() if c >= 3] link_to_idx = {link: idx for idx, link in enumerate(top_links)} movie_to_idx = {movie[0]: idx for idx, movie in enumerate(movies)} pairs = [] for movie in movies: pairs.extend((link_to_idx[link], movie_to_idx[movie[0]]) for link in movie[2] if link in link_to_idx) pairs_set = set(pairs) len(pairs), len(top_links), len(movie_to_idx)_____no_output_____def movie_embedding_model(embedding_size=30): link = Input(name='link', shape=(1,)) movie = Input(name='movie', shape=(1,)) link_embedding = Embedding(name='link_embedding', input_dim=len(top_links), output_dim=embedding_size)(link) movie_embedding = Embedding(name='movie_embedding', input_dim=len(movie_to_idx), output_dim=embedding_size)(movie) dot = Dot(name='dot_product', normalize=True, axes=2)([link_embedding, movie_embedding]) merged = Reshape((1,))(dot) model = Model(inputs=[link, movie], outputs=[merged]) model.compile(optimizer='nadam', loss='mse') return model model = movie_embedding_model()_____no_output_____import random random.seed(5) def batchifier(pairs, positive_samples=50, negative_ratio=5): batch_size = positive_samples * (1 + negative_ratio) batch = np.zeros((batch_size, 3)) while True: for idx, (link_id, movie_id) in enumerate(random.sample(pairs, positive_samples)): batch[idx, :] = (link_id, movie_id, 1) idx = positive_samples while idx < batch_size: movie_id = random.randrange(len(movie_to_idx)) link_id = random.randrange(len(top_links)) if not (link_id, movie_id) in pairs_set: batch[idx, :] = (link_id, movie_id, -1) idx += 1 np.random.shuffle(batch) yield {'link': batch[:, 0], 'movie': batch[:, 1]}, batch[:, 2] next(batchifier(pairs, positive_samples=3, negative_ratio=2))_____no_output_____positive_samples_per_batch=256 model.fit_generator( batchifier(pairs, positive_samples=positive_samples_per_batch, negative_ratio=5), epochs=10, steps_per_epoch=len(pairs) // positive_samples_per_batch, verbose=2 )Epoch 1/10 148s - loss: 0.5145 Epoch 2/10 152s - loss: 0.3522 Epoch 3/10 172s - loss: 0.3374 Epoch 4/10 182s - loss: 0.3289 Epoch 5/10 172s - loss: 0.3256 Epoch 6/10 167s - loss: 0.3235 Epoch 7/10 158s - loss: 0.3210 Epoch 8/10 152s - loss: 0.3204 Epoch 9/10 153s - loss: 0.3196 Epoch 10/10 148s - loss: 0.3196 movie = model.get_layer('movie_embedding') movie_weights = movie.get_weights()[0] movie_lengths = np.linalg.norm(movie_weights, axis=1) normalized_movies = (movie_weights.T / movie_lengths).T def similar_movies(movie): dists = np.dot(normalized_movies, normalized_movies[movie_to_idx[movie]]) closest = np.argsort(dists)[-10:] for c in 
reversed(closest): print(c, movies[c][0], dists[c]) similar_movies('Rogue One')29 Rogue One 1.0 101 Prometheus (2012 film) 0.95705 3349 Star Wars: The Force Awakens 0.955909 659 Rise of the Planet of the Apes 0.953989 25 Star Wars sequel trilogy 0.94565 61 Man of Steel (film) 0.943233 19 Interstellar (film) 0.942833 413 Superman Returns 0.940903 221 The Dark Knight Trilogy 0.94027 22 Jurassic World 0.938769 movie = model.get_layer('movie_embedding') movie_weights = movie.get_weights()[0] movie_lengths = np.linalg.norm(movie_weights, axis=1) normalized_movies = (movie_weights.T / movie_lengths).T nbrs = NearestNeighbors(n_neighbors=10, algorithm='ball_tree').fit(normalized_movies) with open('data/movie_model.pkl', 'wb') as fout: pickle.dump({ 'nbrs': nbrs, 'normalized_movies': normalized_movies, 'movie_to_idx': movie_to_idx, }, fout)_____no_output_____with open('data/movie_model.pkl', 'rb') as fin: m = pickle.load(fin) movie_names = [x[0] for x in sorted(movie_to_idx.items(), key=lambda t:t[1])] distances, indices = m['nbrs'].kneighbors( [m['normalized_movies'][m['movie_to_idx']['Rogue One']]]) for idx in indices[0]: print(movie_names[idx])Rogue One Prometheus (2012 film) Star Wars: The Force Awakens Rise of the Planet of the Apes Star Wars sequel trilogy Man of Steel (film) Interstellar (film) Superman Returns The Dark Knight Trilogy Jurassic World DB_NAME = 'douwe' USER = 'djangosite' PWD = 'z0g3h31m!' HOST = '127.0.0.1' connection_str = "dbname='%s' user='%s' password='%s' host='%s'" conn = psycopg2.connect(connection_str % (DB_NAME, USER, PWD, HOST))_____no_output_____with conn.cursor() as cursor: cursor.execute('INSERT INTO movie (movie_name, embedding) VALUES (%s, %s)', (movie_names[0], normalized_movies[0].tolist())) conn.commit()_____no_output_____with conn.cursor() as cursor: cursor.execute('DELETE FROM movie;') conn.commit()_____no_output_____with conn.cursor() as cursor: for movie, embedding in zip(movies, normalized_movies): cursor.execute('INSERT INTO movie (movie_name, embedding)' ' VALUES (%s, %s)', (movie[0], embedding.tolist())) conn.commit()_____no_output_____conn.rollback()_____no_output_____def recommend_movies(conn, q): with conn.cursor() as cursor: cursor.execute('SELECT movie_name, embedding FROM movie' ' WHERE lower(movie_name) LIKE %s' ' LIMIT 1', ('%' + q.lower() + '%',)) if cursor.rowcount == 0: return [] movie_name, embedding = cursor.fetchone() cursor.execute('SELECT movie_name, ' ' cube_distance(cube(embedding), ' ' cube(%s)) as distance ' ' FROM movie' ' ORDER BY distance' ' LIMIT 5', (embedding,)) return list(cursor.fetchall()) recommend_movies(conn, 'The Force Awakens')_____no_output_____with conn.cursor() as cursor: cursor.execute('SELECT movie_name, cube_distance(cube(embedding), cube(%s)) as distance ' ' FROM movie' ' ORDER BY distance' ' LIMIT 5', (emb,)) x = list(cursor) x_____no_output_____movies[0]_____no_output_____MODEL = 'GoogleNews-vectors-negative300.bin' model = gensim.models.KeyedVectors.load_word2vec_format(MODEL, binary=True)_____no_output_____model.most_similar(positive=['espresso'])_____no_output_____def most_similar(norm, positive): vec = norm[model.vocab[positive].index] dists = np.dot(norm, vec) most_extreme = np.argpartition(-dists, 10)[:10] res = ((model.index2word[idx], dists[idx]) for idx in most_extreme) return list(sorted(res, key=lambda t:t[1], reverse=True)) for word, score in most_similar(model.syn0norm, 'espresso'): print(word, score)espresso 1.0 cappuccino 0.688819 mocha 0.668621 coffee 0.661683 latte 0.653675 
caramel_macchiato 0.649127 ristretto 0.648555 espressos 0.643863 macchiato 0.642825 chai_latte 0.630803 svd = TruncatedSVD(n_components=100, random_state=42, n_iter=40) reduced = svd.fit_transform(model.syn0norm)_____no_output_____reduced_lengths = np.linalg.norm(reduced, axis=1) normalized_reduced = (reduced.T / reduced_lengths).T normalized_reduced.shape_____no_output_____for word, score in most_similar(normalized_reduced, 'espresso'): print(word, score)espresso 1.0 cappuccino 0.856463080029 chai_latte 0.835657488972 latte 0.800340435865 macchiato 0.798796776324 espresso_machine 0.791469456128 Lavazza_coffee 0.790783985201 mocha 0.788645681469 espressos 0.78424218748 martini 0.784037414689 for idx in most_extreme: print(model.index2word[idx], dists[idx])espresso 1.0 mocha 0.668621 coffee 0.661683 cappuccino 0.688819 latte 0.653675 caramel_macchiato 0.649127 espressos 0.643863 ristretto 0.648555 macchiato 0.642825 chai_latte 0.630803 </code>
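The recommend_movies() query above depends on PostgreSQL's cube extension for cube() and cube_distance(), but the notebook never shows the one-time schema setup. A plausible sketch follows; the table definition is an assumption (only movie_name and embedding are known from the inserts above), and note that cube is limited to 100 dimensions by default, which fits the 30-dimensional movie embeddings but means the 300-dimensional word2vec vectors would need a reduction like the TruncatedSVD step above before they could be stored the same way.
<code>
# Hypothetical one-time setup for the `movie` table used above.
# Reuses the open psycopg2 connection `conn`; column types are guesses.
with conn.cursor() as cursor:
    cursor.execute('CREATE EXTENSION IF NOT EXISTS cube;')
    cursor.execute('CREATE TABLE IF NOT EXISTS movie ('
                   '  id SERIAL PRIMARY KEY,'
                   '  movie_name TEXT,'
                   '  embedding FLOAT[]'
                   ');')
conn.commit()
</code>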
{ "repository": "haenvely/deep_learning", "path": "16.1 Productionize Embeddings.ipynb", "matched_keywords": [ "STAR" ], "stars": 668, "size": 32501, "hexsha": "4870d246b164de31407aa77c53e37c1c19bc1951", "max_line_length": 240, "avg_line_length": 31.0420248329, "alphanum_fraction": 0.4931540568 }
# Notebook from cdrakesmith/CGATPipelines Path: CGATPipelines/pipeline_docs/pipeline_peakcalling/notebooks/template_peakcalling_filtering_Report.ipynb Peakcalling Bam Stats and Filtering Report - Filtering Stats ============================================================ This notebook is for the analysis of outputs from the peakcalling pipeline There are severals stats that you want collected and graphed (topics covered in this notebook in bold). These are: - **how many reads input** - **how many reads removed at each step (numbers and percentages)** - **how many reads left after filtering** - insert size distribution pre filtering for PE reads - how many reads mapping to each chromosome before filtering? - how many reads mapping to each chromosome after filtering? - X:Y reads ratio - samtools flags - check how many reads are in categories they shouldn't be - picard stats - check how many reads are in categories they shouldn't be This notebook takes the sqlite3 database created by CGAT peakcalling_pipeline.py and uses it for plotting the above statistics It assumes a file directory of: location of database = project_folder/csvdb location of this notebook = project_folder/notebooks.dir/_____no_output_____ <code> import sqlite3 import pandas as pd import numpy as np %matplotlib inline import matplotlib import numpy as np import matplotlib.pyplot as plt import CGATPipelines.Pipeline as P import os import statistics import collections #load R and the R packages required # use these functions to display tables nicely as html from IPython.display import display, HTML plt.style.use('ggplot')_____no_output_____%load_ext rpy2.ipython %R require(ggplot2)_____no_output_____ </code> This is where and when the notebook was run_____no_output_____ <code> !pwd !date_____no_output_____ </code> First lets set the output path for where we want our plots to be saved and the database path and see what tables it contains_____no_output_____ <code> database_path = '../csvdb' output_path = '.'_____no_output_____ </code> This code adds a button to see/hide code in html _____no_output_____ <code> HTML('''<script> code_show=true; function code_toggle() { if (code_show){ $('div.input').hide(); } else { $('div.input').show(); } code_show = !code_show } $( document ).ready(code_toggle); </script> <form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''') _____no_output_____ </code> The code below provides functions for accessing the project database and extract a table names so you can see what tables have been loaded into the database and are available for plotting. It also has a function for geting table from the database and indexing the table with the track name_____no_output_____ <code> def getTableNamesFromDB(database_path): # Create a SQL connection to our SQLite database con = sqlite3.connect(database_path) cur = con.cursor() # the result of a "cursor.execute" can be iterated over by row cur.execute("SELECT name FROM sqlite_master WHERE type='table' ORDER BY name;") available_tables = (cur.fetchall()) #Be sure to close the connection. 
con.close() return available_tables db_tables = getTableNamesFromDB(database_path) print('Tables contained by the database:') for x in db_tables: print('\t\t%s' % x[0]) #This function retrieves a table from sql database and indexes it with track name def getTableFromDB(statement,database_path): '''gets table from sql database depending on statement and set track as index if contains track in column names''' conn = sqlite3.connect(database_path) df = pd.read_sql_query(statement,conn) if 'track' in df.columns: df.index = df['track'] return df_____no_output_____ </code> Number of reads per samples --------------------------- Firstly lets look at the size of our bam files pre and post filtering - hopefully the post-filtering bams will be smaller showing that some filtering has taken place_____no_output_____ <code> #get table of bam file size filtering_stats = getTableFromDB('select * from post_filtering_read_counts;',database_path) filtering_stats.index = filtering_stats['Input_Bam'] filtering_stats.drop('Input_Bam',1,inplace=True) #sort dataframe by values in rows to get order filters were applied #this is based on the number of reads in each row new_cols = filtering_stats.columns[filtering_stats.ix[filtering_stats.last_valid_index()].argsort()] filtering_stats = filtering_stats[new_cols[::-1]] #get number of reads in the bams before and after filtering - smallest_col = last filtering step applied smallest_col = filtering_stats.idxmin(axis=1)[1] #plot bar graph of pre vs post filtering sizes ax = filtering_stats[['pre_filtering',smallest_col]].divide(1000000).plot.bar() ax.set_ylabel('Million Reads (not pairs)') ax.legend(['pre_filtering','post_filtering'], loc=2,bbox_to_anchor=(1.05, 1),borderaxespad=0. ) ax.set_title('number of reads (not pairs) pre and post filtering')_____no_output_____ </code> This should give you a good idea of: 1) whether filtering has been applied - if post and pre filtering differ it has! 2) whether the proportion of filtering corresponds to the initial size of library 3) whether the proportion of filtering is consistent across samples 4) final bam size that is being taken forward to peakcalling - the requirements will differ for different technologies [Encode ATAC-Seq Guidelines](https://www.encodeproject.org/data-standards/atac-seq/): Each replicate should have 25 million non duplicate, non-mitochondrial aligned reads for single-end or 50 million paired-end reads (i.e. there should be 25 million fragments regardless of sequencing type) - Jan 2017_____no_output_____ <code> #get the order of filters applied def get_filter_order(dataframe): '''function to print out the order of filters in dataframe''' print('order of filters applied to bam file:') for x in list(dataframe): if x != 'pre_filtering': print ('\t%s' % x) return list(dataframe) filter_order = get_filter_order(filtering_stats)_____no_output_____print('Table of number of reads remaining at each state of filtering ') display(filtering_stats.T)_____no_output_____ </code> Lets graph the number of reads that remain at each step for each bam file_____no_output_____ <code> #plot how the reads have been filtered ax = filtering_stats.T.divide(1000000).plot(rot=90) ax.set_xlabel('filters') ax.set_ylabel('million reads (not pairs)') ax.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) 
ax.set_title('number of reads remaining at\neach stage of filtering')_____no_output_____ </code> Now lets look at the number of reads filtered at each step side by side - this uses R for plotting_____no_output_____ <code> filtered_df = filtering_stats.copy() filtered_df = filtered_df.divide(1000000) filtered_df['Name'] = filtered_df.index _____no_output_____%%R -i filtered_df -w 600 -h 600 -u px library("reshape2") filtered_df$Name <- factor(filtered_df$Name) df.m = melt(filtered_df) cbPalette <- c("#999999", "#E69F00", "#56B4E9", "#009E73", "#F0E442", "#0072B2", "#D55E00", "#CC79A7") ## pink #CC79A7 # orange #D55E00 # 0072B2 blue # yellow #F0E442 # green #009E73 # light blue g = ggplot(data=df.m, aes(factor(Name), y=value,fill=variable)) + labs(title="Number of individual reads remaining after each filtering step") + geom_bar(stat="identity",position="dodge", width=0.7) + scale_fill_manual(values=cbPalette) + theme_bw() g + scale_y_continuous(name="million reads remaining \n (individual reads not pairs)") + theme(plot.title=element_text(size=16, hjust=0.5, face='bold'), legend.position="top",axis.text=element_text(size=15,face='bold'),axis.text.x=element_text(size=15,face='bold',angle=90),axis.title.x=element_blank(),axis.title.y=element_text(size=10,face='bold'))_____no_output_____ </code> Now have a look at the percentage of reads remaining at each stage of filtering The percentage dataframe is created in python but uses R and ggplot for plotting_____no_output_____ <code> #Make percentage of reads dataframe percentage_filtered_df = filtering_stats.copy() percentage_filtered_df = percentage_filtered_df.div(percentage_filtered_df.pre_filtering, axis='index')*100 percentage_filtered_df = percentage_filtered_df.round(3) percentage_filtered_df['Name']= percentage_filtered_df.index print('Table showing the percentage of reads remaining at each filtering step') percentage_filtered_df.T_____no_output_____%%R -i percentage_filtered_df -w 600 -h 600 -u px library("reshape2") percentage_filtered_df$Name <- factor(percentage_filtered_df$Name) df.m = melt(percentage_filtered_df) cbPalette <- c("#999999", "#E69F00", "#56B4E9", "#009E73", "#F0E442", "#0072B2", "#D55E00", "#CC79A7") ## pink #CC79A7 # orange #D55E00 # 0072B2 blue # yellow #F0E442 # green #009E73 # light blue g = ggplot(data=df.m, aes(factor(Name), y=value,fill=variable)) + labs(title="Percentage of reads remaining after each filtering step") + geom_bar(stat="identity",position="dodge", width=0.7) + scale_fill_manual(values=cbPalette) + theme_bw() g + scale_y_continuous(name="Percentage reads remaining") + theme(plot.title=element_text(size=16, hjust=0.5, face='bold'), legend.position="top",axis.text=element_text(size=15,face='bold'),axis.text.x=element_text(size=15,face='bold',angle=90),axis.title.x=element_blank(),axis.title.y=element_text(size=10,face='bold'))_____no_output_____ </code> Now lets get the number of reads that are filtered out at each stage of the filtering by subtracting the number of reads at the filtering stage of interest from the number of reads in the stage prior to the filter of interst being applied _____no_output_____ <code> #Get number of reads removed by each stage of filtering order_of_filters = get_filter_order(filtering_stats) df_reads_removed = pd.DataFrame(index=filtering_stats.index) for loc in range(len(order_of_filters)): filt = order_of_filters[loc] if filt == 'pre_filtering': df_reads_removed['total_reads'] = filtering_stats['pre_filtering'] else: previous_filter_step = order_of_filters[loc-1] 
#print("calcultation number removed by %s filtering step by doing number of reads in %s - number of reads in %s column \n" % (filt, previous_filter_step, filt)) df_reads_removed['removed_by_%s_filter' % filt] = filtering_stats[previous_filter_step] - filtering_stats[filt] print('\n\nTable shown as million reads removed by each filter:') display(df_reads_removed.T.divide(1000000)) #plot how the reads have been filtered ax = df_reads_removed.divide(1000000).plot(rot=90) ax.set_xlabel('filters') ax.set_ylabel('million reads removed (not pairs)') ax.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) ax.set_title('number of reads removed at each stage of filtering') ax = df_reads_removed.T.divide(1000000).plot(rot=90) ax.set_xlabel('filters') ax.set_ylabel('million reads removed (not pairs)') ax.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) ax.set_title('number of reads removed at each stage of filtering') ax = df_reads_removed.T.divide(1000000).drop('total_reads').plot(rot=90,kind='bar') ax.set_xlabel('filters') ax.set_ylabel('million reads removed (not pairs)') ax.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) ax.set_title('number of reads removed at each stage of filtering') _____no_output_____ </code> Now lets plot the number of reads reamining at each filtering step side by side_____no_output_____ <code> df_reads_removed_mills = df_reads_removed.divide(1000000) df_reads_removed_mills['Name'] = df_reads_removed_mills.index_____no_output_____%%R -i df_reads_removed_mills -w 900 -h 800 -u px library("reshape2") df_reads_removed_mills$Name <- factor(df_reads_removed_mills$Name) df.m = melt(df_reads_removed_mills) cbPalette <- c("#999999", "#E69F00", "#56B4E9", "#009E73", "#F0E442", "#0072B2", "#D55E00", "#CC79A7") ## pink #CC79A7 # orange #D55E00 # 0072B2 blue # yellow #F0E442 # green #009E73 # light blue g = ggplot(data=df.m, aes(factor(Name), y=value,fill=variable)) + labs(title="Number of reads remaining after each filtering step") + geom_bar(stat="identity",position="dodge", width=0.7) + scale_fill_manual(values=cbPalette) + theme_bw() g + scale_y_continuous(name="Number of reads filtered at each step") + theme(plot.title=element_text(size=16, hjust=0.5, face='bold'), legend.position='top',axis.text=element_text(size=15,face='bold'),axis.text.x=element_text(size=15,face='bold',angle=90),axis.title.x=element_blank(),axis.title.y=element_text(size=10,face='bold'))_____no_output_____ </code> Now lets get the percentage of reads removed at each filtering step _____no_output_____ <code> #Get number of reads removed by each stage of filtering percentage_filtered_df = percentage_filtered_df.drop('Name',axis=1) order_of_filters = get_filter_order(percentage_filtered_df) df_percentreads_removed = pd.DataFrame(index=percentage_filtered_df.index) for loc in range(len(order_of_filters)): filt = order_of_filters[loc] if filt == 'pre_filtering': df_percentreads_removed['total_reads'] = percentage_filtered_df['pre_filtering'] else: previous_filter_step = order_of_filters[loc-1] #print("calcultation number removed by %s filtering step by doing number of reads in %s - number of reads in %s column \n" % (filt, previous_filter_step, filt)) df_percentreads_removed['removed_by_%s_filter' % filt] = percentage_filtered_df[previous_filter_step] - percentage_filtered_df[filt] print('\n\nTable shown as million reads removed by each filter:') display(df_percentreads_removed.T) #plot how the reads have been filtered ax = df_percentreads_removed.plot(rot=90) ax.set_xlabel('bam file') 
ax.set_ylabel('percentage reads removed (not pairs)') ax.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) ax.set_title('percentage of reads removed at each stage of filtering') ax = df_percentreads_removed.T.plot(rot=90) ax.set_xlabel('filters') ax.set_ylabel('percentage reads removed (not pairs)') ax.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) ax.set_title('percentage of reads removed at each stage of filtering') ax = df_percentreads_removed.T.drop('total_reads').plot(rot=90,kind='bar') ax.set_xlabel('filters') ax.set_ylabel('percentage reads removed (not pairs)') ax.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) ax.set_title('percentage of reads removed at each stage of filtering') _____no_output_____df_percentreads_removed['Name'] = df_percentreads_removed.index _____no_output_____%%R -i df_percentreads_removed -w 900 -h 800 -u px library("reshape2") df_percentreads_removed$Name <- factor(df_percentreads_removed$Name) df.m = melt(df_percentreads_removed) cbPalette <- c("#999999", "#E69F00", "#56B4E9", "#009E73", "#F0E442", "#0072B2", "#D55E00", "#CC79A7") ## pink #CC79A7 # orange #D55E00 # 0072B2 blue # yellow #F0E442 # green #009E73 # light blue g = ggplot(data=df.m, aes(factor(Name), y=value,fill=variable)) + labs(title="Percentage of reads remaining after each filtering step") + geom_bar(stat="identity",position="dodge", width=0.7) + scale_fill_manual(values=cbPalette) + theme_bw() g + scale_y_continuous(name="Number of reads filtered at each step") + theme(plot.title=element_text(size=16, hjust=0.5, face='bold'), legend.position='top',axis.text=element_text(size=15,face='bold'),axis.text.x=element_text(size=15,face='bold',angle=90),axis.title.x=element_blank(),axis.title.y=element_text(size=10,face='bold'))_____no_output_____ </code> Great thats all the filtering stats done by now you should have a good idea about: * the number of reads present in your origional bam file * the number of reads left after filtering * the proportion of reads filtered by each filter * whether the proportion of reads filtered at each stage looks quite consistent across samples_____no_output_____Calculate the nonredundant fraction (NRF) NRF = Number of distinct uniquely mapping reads (after removing duplicates)/Total number of reads for ChIP-Seq - NRF < 0.5 = concerning - 0.8 > NRF > 0.5 = Acceptable - 0.9 > NRF > 0.8 = compliant - NRF > 0.9 = Ideal for ATAC-Seq - NRF < 0.7 = Concerning - 0.9 > NRF > 0.7 = Acceptable - NRF > 0.9 = Ideal _____no_output_____ <code> filtering_stats['NRF'] = filtering_stats.duplicates/filtering_stats.pre_filtering filtering_stats_____no_output_____ </code>
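As a small extension of the NRF table above, the sketch below labels each sample against the ATAC-Seq thresholds quoted in this section; swap in the ChIP-Seq cut-offs if that is the assay in hand. It assumes the `NRF` column computed in the previous cell.
```
# Classify each sample's NRF using the ATAC-Seq thresholds listed above:
# < 0.7 concerning, 0.7-0.9 acceptable, > 0.9 ideal.
def nrf_quality_atac(nrf):
    if nrf < 0.7:
        return 'concerning'
    elif nrf <= 0.9:
        return 'acceptable'
    return 'ideal'

filtering_stats['NRF_quality'] = filtering_stats['NRF'].apply(nrf_quality_atac)
filtering_stats[['NRF', 'NRF_quality']]
```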
{ "repository": "cdrakesmith/CGATPipelines", "path": "CGATPipelines/pipeline_docs/pipeline_peakcalling/notebooks/template_peakcalling_filtering_Report.ipynb", "matched_keywords": [ "ATAC-seq", "ChIP-seq" ], "stars": 49, "size": 22855, "hexsha": "48710ced4c69e6da27c4dec25ef7cbbacef1ca58", "max_line_length": 361, "avg_line_length": 36.568, "alphanum_fraction": 0.6160577554 }
# Notebook from wikistat/AI-Frameworks Path: IntroductionDeepReinforcementLearning/Deep_Q_Learning_CartPole.ipynb <a href="https://colab.research.google.com/github/wikistat/AI-Frameworks/blob/master/IntroductionDeepReinforcementLearning/Deep_Q_Learning_CartPole.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>_____no_output_____# [IA Frameworks](https://github.com/wikistat/AI-Frameworks) - Introduction to Deep Reinforcement Learning _____no_output_____<center> <a href="http://www.insa-toulouse.fr/" ><img src="http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/logo-insa.jpg" style="float:left; max-width: 120px; display: inline" alt="INSA"/></a> <a href="http://wikistat.fr/" ><img src="http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/wikistat.jpg" width=400, style="max-width: 150px; display: inline" alt="Wikistat"/></a> <a href="http://www.math.univ-toulouse.fr/" ><img src="http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/logo_imt.jpg" width=400, style="float:right; display: inline" alt="IMT"/> </a> </center>_____no_output_____# Part 1b : Deep Q-Network on CartPole The objectives of this notebook are the following : * Discover AI Gym environment CartPole game. * Implement DQN to solve cart pole (a Pacman-like game). * Implement Experience Replay Buffer to improve performance_____no_output_____# Files & Data (Google Colab) If you're running this notebook on Google colab, you do not have access to the `solutions` folder you get by cloning the repository locally. The following lines will allow you to build the folders and the files you need for this TP. **WARNING 1** Do not run this line locally. **WARNING 2** The magic command `%load` does not work on google colab, you will have to copy-paste the solution on the notebook._____no_output_____ <code> ! mkdir solution ! wget -P solution https://github.com/wikistat/AI-Frameworks/raw/master/IntroductionDeepReinforcementLearning/solutions/push_cart_pole.py ! wget -P solution https://github.com/wikistat/AI-Frameworks/raw/master/IntroductionDeepReinforcementLearning/solutions/DNN_class.py ! wget -P solution https://github.com/wikistat/AI-Frameworks/raw/master/IntroductionDeepReinforcementLearning/solutions/DQN_cartpole_class.py ! wget -P solution https://github.com/wikistat/AI-Frameworks/raw/master/IntroductionDeepReinforcementLearning/solutions/play_cartpole_with_dnn.py ! wget -P solution https://github.com/wikistat/AI-Frameworks/raw/master/IntroductionDeepReinforcementLearning/solutions/DQN_cartpole_memory_replay_class.py ! wget -P . https://github.com/wikistat/AI-Frameworks/raw/master/IntroductionDeepReinforcementLearning/experience_replay.py _____no_output_____ </code> # Import librairies_____no_output_____ <code> import numpy as np from datetime import datetime import collections # Tensorflow import tensorflow.keras.models as km import tensorflow.keras.layers as kl import tensorflow.keras.optimizers as ko import tensorflow.keras.backend as K # To plot figures and animations import matplotlib.animation as animation import matplotlib.pyplot as plt from IPython.display import HTML import seaborn as sb sb.set_style("whitegrid") # Gym Library import gym_____no_output_____ </code> The following functions enable us to build a video from a list of images. 
<br> They will be used to build videos of your agent playing._____no_output_____ <code> def update_scene(num, frames, patch): patch.set_data(frames[num]) return patch, def plot_animation(frames, repeat=False, interval=400): plt.close() # or else nbagg sometimes plots in the previous cell fig = plt.figure() patch = plt.imshow(frames[0]) plt.axis('off') return animation.FuncAnimation(fig, update_scene, fargs=(frames, patch), frames=len(frames), repeat=repeat, interval=interval)_____no_output_____ </code> # AI Gym Librairie <a href="https://gym.openai.com/" ><img src="https://gym.openai.com/assets/dist/home/header/home-icon-54c30e2345.svg" style="float:left; max-width: 120px; display: inline" alt="INSA"/></a> <br> In this notebook, we will be using [OpenAI gym](https://gym.openai.com/), a great toolkit for developing and comparing Reinforcement Learning algorithms. <br> It provides many environments for your learning *agents* to interact with._____no_output_____# A simple environment: the Cart-Pole ## Description A pole is attached by an un-actuated joint to a cart, which moves along a frictionless track. The pendulum starts upright, and the goal is to prevent it from falling over by increasing and reducing the cart's velocity. ### Observation Num | Observation | Min | Max ---|---|---|--- 0 | Cart Position | -2.4 | 2.4 1 | Cart Velocity | -Inf | Inf 2 | Pole Angle | ~ -41.8&deg; | ~ 41.8&deg; 3 | Pole Velocity At Tip | -Inf | Inf ### Actions Num | Action --- | --- 0 | Push cart to the left 1 | Push cart to the righ&t Note: The amount the velocity is reduced or increased is not fixed as it depends on the angle the pole is pointing. This is because the center of gravity of the pole increases the amount of energy needed to move the cart underneath it ### Reward Reward is 1 for every step taken, including the termination step ### Starting State All observations are assigned a uniform random value between ±0.05 ### Episode Termination 1. Pole Angle is more than ±12° 2. Cart Position is more than ±2.4 (center of the cart reaches the edge of the display) 3. Episode length is greater than 200 ### Solved Requirements Considered solved when the average reward is greater than or equal to 195.0 over 100 consecutive trials. The description above is part of the official description of this environment. Read full description [here](https://github.com/openai/gym/wiki/CartPole-v0). The following command will load the `CartPole` environment._____no_output_____ <code> env = gym.make("CartPole-v0")_____no_output_____ </code> The `reset` command initializes the environment and returns the first observation which is a 1D array of size 4. _____no_output_____ <code> obs = env.reset() env.observation_space, obs_____no_output_____ </code> **Q:** What are the four values above?_____no_output_____The `render` command allows vizualising the environment which is here a 400X600 RGB image. The `render` command for the `CartPole`environment also opens another window that we will close directly with the `env.close`command. It can produce disturbing behavior._____no_output_____ <code> img = env.render(mode = "rgb_array") env.close() print("Environemnt is a %dx%dx%d images" %img.shape)_____no_output_____ </code> The environment can then easily be displayed with matplotlib function. 
_____no_output_____ <code> plt.imshow(img) _ = plt.axis("off")_____no_output_____ </code> The action space is composed of two actions push to the left (0), push to the right (1)._____no_output_____ <code> env.action_space_____no_output_____ </code> The `step function` enables us to apply one of these actions and return multiple information : * The new observation after applying this action * The reward returned by the environment * A boolean that indicates if the experience is over or not. * Extra information that depends on the environment (CartPole environment does not provide anything). Let's push the cart pole to the left!_____no_output_____ <code> obs, reward, done, info = env.step(0) print("New observation : %s" %str(obs)) print("Reward : %s" %str(reward)) print("Is the experience over? : %s" %str(done)) print("Extra information : %s" %str(info))_____no_output_____img = env.render(mode = "rgb_array") env.close() plt.imshow(img) axs = plt.axis("off")_____no_output_____ </code> **Q** : What can you see? Does the output value seem normal to you?_____no_output_____**Exercise** : Reset the environment, and push the car to the left until the experience is over then display the final environment. **Q** : Why does the environment end? _____no_output_____ <code> # %load solutions/push_cart_pole.py_____no_output_____ </code> # Q network In **Q-learning** all the *Q-Values* are stored in a *Q-table*. The optimal value can be learned by playing the game and updating the Q-table with the following formula. $$target = R(s,a,s')+\gamma \max\limits_{a'}Q_k(s',a')$$ $$Q_{k+1}(s,a)\leftarrow(1-a)Q_k(s,a)+\alpha[target]$$ if the combinations of states and actions are too large, the memory and the computation requirement for the *Q-table* will be too high. Hence, in **Deep Q-learning** we use a function to generate the approximation of the *Q-value* rather than remembering the solutions. <br> As the input of the function, i.e, the *observation*, are vectors of four values, a simple **DNN** will be enough to approximate the q table Later, we will generate targets from experiences and train this **DNN**. $$target = R(s,a,s')+\gamma \max\limits_{a'}Q_k(s',a')$$ $$\theta_{k+1} \leftarrow \theta_k - \alpha\nabla_{\theta}\mathbb{E}_{s\sim'P(s'|s,a)} [(Q_{\theta}(s,a)-target(s'))^2]_{\theta=\theta_k} $$ The `DNN` class below defines the architecture of this *neural network*. **Exercise** The architecture of the *dnn* as been set for you<br> However, the shape of the input, as well as the number of neurons and the activation function of the last layer, are not filled.<br> Fill the gap so that this network can be used to approximate *Q-values*_____no_output_____ <code> class DNN: def __init__(self): self.lr = 0.001 self.model = km.Sequential() self.model.add(kl.Dense(150, input_dim=??, activation="relu")) self.model.add(kl.Dense(120, activation="relu")) self.model.add(kl.Dense(??, activation=??)) self.model.compile(loss='mse', optimizer=ko.Adam(lr=self.lr))_____no_output_____# %load solutions/DNN_class.py_____no_output_____ </code> # DEEP Q Learning on *Cartpole* The objective of this section is to implement a **Deep Q-learning** that will be able to solve the cartpole environment. For that 2 python class will be required: * `DNN`: A class that will enable us to use a function that approximate the Q-values * `DQN`: A class that will enable to train the Qnetowrk All the instructions for this section are in this notebook below. 
However, you will have the possibility to * Work with the scripts DQN_cartpole.py and DQN_cartpole_test.py that can be found in the `IntroductionDeepReinforcementLearning`folder * OR work with the codes in cells of this notebook. _____no_output_____### DQN Class The `DQN` class contains the implementation of the **Deep Q-Learning** algorithm. The code is incomplete and you will have to fill it!. **GENERAL INSTRUCTION**: * Read the init of the `DQN` class. * Various variables are set with their definition, make sure you understand all of them. * The *game environment*, the *memory of the experiences*, and the *DNN Q-network* are initialized. * Read the `train` method. It contains the main code corresponding to the **pseudo code** below. YOU DO NOT HAVE TO MODIFY IT! But make sure you understand it. * The `train` method uses methods that are not implemented. * You will have to complete the code of 4 functions. (read the instruction for each exercise below) * After the cell of the `DQN` class code below there are **test cells** for each of these exercises. <br> This cell should be executed after each exercise. This cell will check that the function you implemented takes input and output in the desired format. <br> DO NOT MODIFY this cell. They will work if you're code is good <br> **Warning** The test cell does not guarantee that your code is correct. It just tests that the inputs and outputs are in a good format. #### Pseudo code *We will consider that we reach the expected *goal* if achieve the max score (200 steps without falling) over ten games.* While you didn't reach the expected *goal* reward or the *max_num_episode* allow to be played: * Start a new episode and while the episode is not done: * At each step: * Run one step of the episode: (**Exercise 1**) * Save experience in memory: (**Exercise 2 & 3**) * If we have stored enough episode on the memory to train the batch: * train model over a batch of targets (**Exercise 4**) * Decrease probability to play random **Exercise 1**: Implement `save_experience`<br> &nbsp;&nbsp;&nbsp;&nbsp; This function saves each experience produce by a step on the `memory`of the class.<br> &nbsp;&nbsp;&nbsp;&nbsp; We do not use the experience replay buffer in this part, so you just have to save the last `batch_size`experience in order to use it at the next train step (https://keras.io/api/layers/) **Exercise 2**: Implement `choose_action`<br> &nbsp;&nbsp;&nbsp;&nbsp; This method chooses an action in *eploration* or *eploitation* mode randomly:<br> **Exercise 3**: Implement `run_one_dtep` <br> &nbsp;&nbsp;&nbsp;&nbsp; This method:<br> &nbsp;&nbsp;&nbsp;&nbsp; -> Choose an action<br> &nbsp;&nbsp;&nbsp;&nbsp; -> Apply the action on the environement.<br> &nbsp;&nbsp;&nbsp;&nbsp; -> return all element of the experience **Exercise 4**: Implement `generate_target_q`<br> This method is used within the `train_one_step` method (which is already implemented).This method:<br> &nbsp;&nbsp;&nbsp;&nbsp; -> Generate a batch of data for training using the `experience_replay` <br> &nbsp;&nbsp;&nbsp;&nbsp; -> Generate the targets from this batch using `generate_target_q` <br> &nbsp;&nbsp;&nbsp;&nbsp; -> Train the model using these targets. 
<br> <br> The `generate_target_q` is not implemented so you have to do it!<br> You have to generate targets according to the formula below <br> $$target = R(s,a,s')+\gamma \max\limits_{a'}Q_k(s',a';\theta) $$ **Tips** when the game is over, the target is equal to only the reward (Q-value of the next action does not exists if the game is over at action *a*.)_____no_output_____ <code> class DQN: """ Implementation of deep q learning algorithm """ def __init__(self): self.prob_random = 1.0 # Probability to play random action self.y = .99 # Discount factor self.batch_size = 64 # How many experiences to use for each training step self.prob_random_end = .01 # Ending chance of random action self.prob_random_decay = .996 # Decrease decay of the prob random self.max_episode = 300 # Max number of episodes you are allowes to played to train the game self.expected_goal = 200 # Expected goal self.dnn = DNN() self.env = gym.make('CartPole-v0') self.memory = [] self.metadata = [] # we will store here info score, at the end of each episode def save_experience(self, experience): #TODO return None def choose_action(self, state, prob_random): #TODO return action def run_one_step(self, state): #TODO return state, action, reward, next_state, done def generate_target_q(self, train_state, train_action, train_reward, train_next_state, train_done): #TODO return target_q def train_one_step(self): batch_data = self.memory train_state = np.array([i[0] for i in batch_data]) train_action = np.array([i[1] for i in batch_data]) train_reward = np.array([i[2] for i in batch_data]) train_next_state = np.array([i[3] for i in batch_data]) train_done = np.array([i[4] for i in batch_data]) # These lines remove useless dimension of the matrix train_state = np.squeeze(train_state) train_next_state = np.squeeze(train_next_state) # Generate target Q target_q = self.generate_target_q( train_state=train_state, train_action=train_action, train_reward=train_reward, train_next_state=train_next_state, train_done=train_done ) loss = self.dnn.model.train_on_batch(train_state, target_q) return loss def train(self): scores = [] for e in range(self.max_episode): # Init New episode state = self.env.reset() state = np.expand_dims(state, axis=0) episode_score = 0 while True: state, action, reward, next_state, done = self.run_one_step(state) self.save_experience(experience=[state, action, reward, next_state, done]) episode_score += reward state = next_state if len(self.memory) >= self.batch_size: self.train_one_step() if self.prob_random > self.prob_random_end: self.prob_random *= self.prob_random_decay if done: now = datetime.now() dt_string = now.strftime("%d/%m/%Y %H:%M:%S") self.metadata.append([now, e, episode_score, self.prob_random]) print( "{} - episode: {}/{}, score: {:.1f} - prob_random {:.3f}".format(dt_string, e, self.max_episode, episode_score, self.prob_random)) break scores.append(episode_score) # Average score of last 100 episode means_last_10_scores = np.mean(scores[-10:]) if means_last_10_scores == self.expected_goal: print('\n Task Completed! \n') break print("Average over last 10 episode: {0:.2f} \n".format(means_last_10_scores)) print("Maximum number of episode played: %d" % self.max_episode)_____no_output_____ </code> **Test `save_experience`** * Append element to the `memory`. 
* Never save more than `batch_size` element, keep the last `batch_size`._____no_output_____ <code> dqn = DQN() dqn.batch_size=2 dqn.save_experience(1) assert dqn.memory == [1] dqn.save_experience(2) assert dqn.memory == [1,2] dqn.save_experience(3) assert dqn.memory == [2,3]_____no_output_____ </code> **Test `choose_action`** This test can't be considered as a real test. <br> Indeed, if the actions are chosen randomly we can't expect fixed results. However, if your function is implemented correctly these test should word most of the time: * if `prob_random` = 1 -> play randomly * Over 100 play, each action should appears various time * If `prob_random` = 0 -> play in exploit mode * The same action is choosen all the time. * If `prob_random` = 0.5 -> play both exploration and exploit mode randomly. * All actions should be seen, but the action chosen in exploit mode is always the same and should be chosen more likely._____no_output_____ <code> dqn = DQN() state = np.expand_dims(dqn.env.reset(), axis=0) # Random action if prob random is equal to one actions = [dqn.choose_action(state=state, prob_random=1) for _ in range(100)] count_action = collections.Counter(actions) print(count_action) assert count_action[0]>35 assert count_action[1]>35 # Best action according to model if prob_random is 0 actions = [dqn.choose_action(state=state, prob_random=0) for _ in range(100)] count_action = collections.Counter(actions) print(count_action) assert(len(set(actions)))==1 main_action = list(set(actions))[0] # actions = [dqn.choose_action(state=state, prob_random=0.5) for _ in range(100)] count_action = collections.Counter(actions) assert(len(set(actions)))==2 print(count_action) assert sorted(count_action.items(), key=lambda x : x[1])[-1][0]==main_action_____no_output_____ </code> **Test `run_one_step`** This method play one step of an episode. The method return all element of an experience, i.e: * A *state*: a vector of size (1,4) * An *action*: an integer * A *reward*: a float * The *nex_state*: a vector of size (1,4) _____no_output_____ <code> dqn = DQN() state = np.expand_dims(dqn.env.reset(), axis=0) state, action, reward, next_state, done = dqn.run_one_step(state) assert state.shape == (1, 4) assert type(action) is int assert type(reward) is float assert next_state.shape == (1, 4) assert type(done) is bool_____no_output_____ </code> **Test `generate_target_q`** This method generates targets of q values. In this test we set the `batch_size`value is equal to 2. Hence the function take as an input: * train_state : An array of size (2,4) * train_action : An array of size (2,1) * train_reward : An array of size (2,1) * train_next_state : An array of size (2,4) * train_done : An array of size (2,1) And return as an output an Array of size (2,2), which is a target for each input of the batch. _____no_output_____ <code> dqn = DQN() dqn.batch_size=2 state = np.expand_dims(dqn.env.reset(), axis=0) target_q = dqn.generate_target_q( train_state = np.vstack([state,state]), train_action = [0,0], train_reward = [1.0,2.0], train_next_state = np.vstack([state,state]), train_done = [1, 1] ) assert target_q.shape == (2,2)_____no_output_____ </code> Here is the solution of the **DQN class**_____no_output_____ <code> # %load solutions/DQN_cartpole_class.py_____no_output_____ </code> Let's now train the model! (The training can be unstable)_____no_output_____ <code> dqn = DQN() dqn.train()_____no_output_____ </code> If you're DQN reached the target goal (or not) we would like to see it playing a game! 
**Exercise** Play a game exploiting the dnn trained with deep q learning and display a video of this game to check how it performs!_____no_output_____ <code> # %load solutions/play_cartpole_with_dnn.py_____no_output_____ </code> The code below enables to display the evolution of the score of each episode play during training._____no_output_____ <code> fig = plt.figure(figsize=(20,6)) ax = fig.add_subplot(1,1,1) ax.plot(list(range(len(dqn.metadata))),[x[2] for x in dqn.metadata]) ax.set_yticks(np.arange(0,210,10)) ax.set_xticks(np.arange(0,175,25)) ax.set_title("Score/Lenght of episode over Iteration withou Memory Replay", fontsize=20) ax.set_xlabel("Number of iteration", fontsize=14) plt.yticks(fontsize=12) plt.xticks(fontsize=12) ax.set_ylabel("Score/Length of episode", fontsize=16)_____no_output_____ </code> You might be lucky but it is highly possible that the training is quite unstable. As see in the course, this might be because the experiences on which the DNN is trained are not i.i.d. Let's try again with and **Experience Replay Buffer**_____no_output_____# DQN with Experience Replay Buffer The **Experience Replay Buffer** is where all the agent's experience will be stored and where *batch* will be generate from to train the *Q network* **Exercise** Let'us implement an `ExperienceReplay` class which will have the following characteristics The `buffer_size` argument represent the number of element that are kept in memory (in the `buffer`). <br> Even if 10Milions of games have been played, the `Experience Replay` will kept only the last `buffer_size` argument in memory. <br> Hence at the beginning the first batch of targets will be composed of randomly played experience. And during training, the probability that batch of targets will be compose of experience playe in exploitation mode will increase. The `add` method will add elements on the `buffer `. The `sample`method will generate a sample of `size`element._____no_output_____ <code> class ExperienceReplay: def __init__(self, buffer_size=50000): """ Data structure used to hold game experiences """ # Buffer will contain [state,action,reward,next_state,done] self.buffer = [] self.buffer_size = buffer_size def add(self, experiences): """ Adds list of experiences to the buffer """ # TODO def sample(self, size): """ Returns a sample of experiences from the buffer """ # TODO_____no_output_____# %load experience_replay.py_____no_output_____ </code> Let's see a simple example on how it works._____no_output_____ <code> # Instanciate an experience replay buffer with buffer_size 10 experience_replay = ExperienceReplay(buffer_size=10) # Add list of 100 integer in the buffer experience_replay.add(list(range(100))) # Check that it keeps only the las 10 element print(experience_replay.buffer) # Randomly sample 5 element from the buffer sample = experience_replay.sample(5) print(sample)_____no_output_____ </code> **Exercise** Now that you have implemented the `ExperienceReplay` class, modify the `DQN`you implemented above, and modify it to use this class as the memory instead of a simple python list and run again the model._____no_output_____ <code> # %load solutions/DQN_cartpole_memory_replay_class.py_____no_output_____ </code> Let's now train the model! 
(That should be much more stable)_____no_output_____ <code> dqn = DQN() dqn.train()_____no_output_____ </code> And once again let's play a game_____no_output_____ <code> state = env.reset() frames = [] num_step=0 done=False while not done: action=np.argmax(dqn.dnn.model.predict(np.expand_dims(state, axis=0)),axis=1)[0] next_state, reward, done, _ = env.step(action) frames.append(env.render(mode = "rgb_array")) state=next_state num_step+=1 HTML(plot_animation(frames).to_html5_video())_____no_output_____ </code> And observe the evolution of the score over iterations_____no_output_____ <code> fig = plt.figure(figsize=(20,6)) ax = fig.add_subplot(1,1,1) ax.plot(list(range(len(dqn.metadata))),[x[2] for x in dqn.metadata]) ax.set_yticks(np.arange(0,210,10)) ax.set_xticks(np.arange(0,175,25)) ax.set_title("Score/Length of episode over iteration with Memory Replay", fontsize=20) ax.set_xlabel("Number of iteration", fontsize=14) plt.yticks(fontsize=12) plt.xticks(fontsize=12) ax.set_ylabel("Score/Length of episode", fontsize=16)_____no_output_____ </code> **Q**: What can you say about the influence of the experience replay buffer over this training?_____no_output_____
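One way to answer this question visually is to overlay the two score curves. The sketch below assumes you kept the agent trained without the buffer under a name such as `dqn_no_replay` (a hypothetical name), while `dqn` holds the agent trained with `ExperienceReplay`; episode scores sit at index 2 of each `metadata` entry, exactly as used in the plots above.
```
# Overlay the training curves of both agents (variable names are assumptions, see above).
fig, ax = plt.subplots(figsize=(20, 6))
for label, agent in [('without replay buffer', dqn_no_replay),
                     ('with replay buffer', dqn)]:
    scores = [entry[2] for entry in agent.metadata]  # episode score per training episode
    ax.plot(range(len(scores)), scores, label=label)
ax.set_xlabel('Number of iteration', fontsize=14)
ax.set_ylabel('Score/Length of episode', fontsize=16)
ax.set_title('Training scores with vs. without the experience replay buffer', fontsize=20)
ax.legend(fontsize=14)
```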
{ "repository": "wikistat/AI-Frameworks", "path": "IntroductionDeepReinforcementLearning/Deep_Q_Learning_CartPole.ipynb", "matched_keywords": [ "evolution" ], "stars": 29, "size": 37749, "hexsha": "487256369f7b6eb5634d3e3d3c1e9cd79b8fd11b", "max_line_length": 372, "avg_line_length": 32.968558952, "alphanum_fraction": 0.5730482927 }
# Notebook from hatrungduc/spark-nlp-workshop Path: tutorials/streamlit_notebooks/healthcare/NER_HUMAN_PHENOTYPE_GENE_CLINICAL.ipynb ![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/healthcare/NER_HUMAN_PHENOTYPE_GENE_CLINICAL.ipynb) _____no_output_____# **Detect genes and human phenotypes**_____no_output_____To run this yourself, you will need to upload your license keys to the notebook. Just Run The Cell Below in order to do that. Also You can open the file explorer on the left side of the screen and upload `license_keys.json` to the folder that opens. Otherwise, you can look at the example outputs at the bottom of the notebook. _____no_output_____## 1. Colab Setup_____no_output_____Import license keys_____no_output_____ <code> import os import json from google.colab import files license_keys = files.upload() with open(list(license_keys.keys())[0]) as f: license_keys = json.load(f) sparknlp_version = license_keys["PUBLIC_VERSION"] jsl_version = license_keys["JSL_VERSION"] print ('SparkNLP Version:', sparknlp_version) print ('SparkNLP-JSL Version:', jsl_version)_____no_output_____ </code> Install dependencies_____no_output_____ <code> %%capture for k,v in license_keys.items(): %set_env $k=$v !wget https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/jsl_colab_setup.sh !bash jsl_colab_setup.sh # Install Spark NLP Display for visualization !pip install --ignore-installed spark-nlp-display_____no_output_____ </code> Import dependencies into Python and start the Spark session_____no_output_____ <code> import pandas as pd from pyspark.ml import Pipeline from pyspark.sql import SparkSession import pyspark.sql.functions as F import sparknlp from sparknlp.annotator import * from sparknlp_jsl.annotator import * from sparknlp.base import * import sparknlp_jsl spark = sparknlp_jsl.start(license_keys['SECRET']) # manually start session # params = {"spark.driver.memory" : "16G", # "spark.kryoserializer.buffer.max" : "2000M", # "spark.driver.maxResultSize" : "2000M"} # spark = sparknlp_jsl.start(license_keys['SECRET'],params=params)_____no_output_____ </code> ## 2. Select the NER model and construct the pipeline_____no_output_____Select the NER model For more details: https://github.com/JohnSnowLabs/spark-nlp-models#pretrained-models---spark-nlp-for-healthcare_____no_output_____ <code> MODEL_NAME = "ner_human_phenotype_gene_clinical"_____no_output_____ </code> Create the pipeline_____no_output_____ <code> document_assembler = DocumentAssembler() \ .setInputCol('text')\ .setOutputCol('document') sentence_detector = SentenceDetector() \ .setInputCols(['document'])\ .setOutputCol('sentence') tokenizer = Tokenizer()\ .setInputCols(['sentence']) \ .setOutputCol('token') word_embeddings = WordEmbeddingsModel.pretrained('embeddings_clinical', 'en', 'clinical/models') \ .setInputCols(['sentence', 'token']) \ .setOutputCol('embeddings') clinical_ner = MedicalNerModel.pretrained(MODEL_NAME, "en", "clinical/models") \ .setInputCols(["sentence", "token", "embeddings"])\ .setOutputCol("ner") ner_converter = NerConverter()\ .setInputCols(['sentence', 'token', 'ner']) \ .setOutputCol('ner_chunk') nlp_pipeline = Pipeline(stages=[ document_assembler, sentence_detector, tokenizer, word_embeddings, clinical_ner, ner_converter])embeddings_clinical download started this may take some time. 
Approximate size to download 1.6 GB [OK!] ner_human_phenotype_gene_clinical download started this may take some time. Approximate size to download 14 MB [OK!] </code> ## 3. Create example inputs_____no_output_____ <code> # Enter examples as strings in this array input_list = [ """Herein, we report a third patient (not related to the previously reported family) with bilateral colobomatous microphthalmia and developmental delay in whom genetic studies identified a homozygous TENM3 splicing mutation c.2968-2AAAAAT (p.Val990Cysfs*13). This report supports the association of TENM3 mutations with colobomatous microphthalmia and expands the phenotypic spectrum associated with mutations in this gene. A third proband was doubly heterozygous for inherited rare variants in additional components of complex I, NDUFAF2 and NDUFB9, confirming that Histiocytoid CM is genetically heterogeneous. Our data indicate that 5-hmC might serve as a metastasis marker for cancer and that the decreased expression of LSH is likely one of the mechanisms of genome instability underlying 5-hmC loss in cancer Forty unique IFNGR1 mutations have been reported and they exert either an autosomal dominant or an autosomal recessive effect. We examined the diagnostic and prognostic value of altered reticulin framework and the immunoprofile of biomarkers including IGF-2, proteins involved in cell proliferation and mitotic spindle regulation (Ki67, p53, BUB1B, HURP, NEK2), DNA damage repair (PBK, -H2AX), telomere regulation (DAX, ATRX), wnt-signaling pathway (beta-catenin) and PI3K signaling pathway (PTEN, phospho-mTOR) in a tissue microarray of 50 adenomas and 43 carcinomas that were characterized for angioinvasion as defined by strict criteria, Weiss score, and mitotic rate-based tumor grade. IGF-2 and proteins involved in cell proliferation and mitotic spindle regulation (Ki67, p53, BUB1B, HURP, NEK2), DNA damage proteins (PBK, -H2AX), regulators of telomeres (DAXX, ATRX), and beta-catenin revealed characteristic expression profiles enabling the distinction of carcinomas from adenomas. Angioinvasion defined as tumor cells invading through a vessel wall and intravascular tumor cells admixed with thrombus proved to be the best prognostic parameter, predicting adverse outcome in the entire cohort as well as within low-grade ACCs. Low mitotic tumor grade, Weiss score, global loss of DAXX expression, and high phospho-mTOR expression correlated with disease-free survival, but Weiss score and biomarkers failed to predict adverse outcome in low-grade disease.""", ]_____no_output_____ </code> ## 4. Use the pipeline to create outputs_____no_output_____ <code> empty_df = spark.createDataFrame([['']]).toDF('text') pipeline_model = nlp_pipeline.fit(empty_df) df = spark.createDataFrame(pd.DataFrame({'text': input_list})) result = pipeline_model.transform(df)_____no_output_____ </code> ## 5. Visualize results_____no_output_____ <code> from sparknlp_display import NerVisualizer NerVisualizer().display( result = result.collect()[0], label_col = 'ner_chunk', document_col = 'document' )_____no_output_____ </code>
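Beyond the HTML visualization, it can be handy to pull the detected chunks into a plain tabular view. The sketch below shows one common way to do this with the `pyspark.sql.functions` module already imported as `F`; depending on the Spark version, the fields produced by `arrays_zip` may be named after the source columns (`result`, `metadata`) rather than `'0'` and `'1'`, so adjust the field accessors if needed.
```
# Flatten the ner_chunk annotations into a (chunk, entity) table.
result.select(
    F.explode(F.arrays_zip(result.ner_chunk.result,
                           result.ner_chunk.metadata)).alias("cols")
).select(
    F.expr("cols['0']").alias("chunk"),
    F.expr("cols['1']['entity']").alias("ner_label")
).show(truncate=False)
```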
{ "repository": "hatrungduc/spark-nlp-workshop", "path": "tutorials/streamlit_notebooks/healthcare/NER_HUMAN_PHENOTYPE_GENE_CLINICAL.ipynb", "matched_keywords": [ "biomarkers" ], "stars": 687, "size": 36313, "hexsha": "4874918b9cd7def3e9a0f8a8c8845d40c7b6ee1d", "max_line_length": 12334, "avg_line_length": 75.3381742739, "alphanum_fraction": 0.671742902 }
# Notebook from markumreed/colab_sklearn Path: recommender_systems_sklearn_movie_data.ipynb <a href="https://colab.research.google.com/github/markumreed/colab_sklearn/blob/main/recommender_systems_sklearn_movie_data.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>_____no_output_____# Recommender Systems: Movie Data_____no_output_____ <code> import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt %matplotlib inline_____no_output_____df = pd.read_csv("movie_data.csv")_____no_output_____df.head()_____no_output_____ </code> ## EDA_____no_output_____ <code> df.columns_____no_output_____df.groupby('title')['rating'].mean().sort_values(ascending=False).head()_____no_output_____df.groupby('title')['rating'].count().sort_values(ascending=False).head()_____no_output_____ratings = pd.DataFrame(df.groupby('title')['rating'].mean())_____no_output_____ratings.head()_____no_output_____ratings['count'] = pd.DataFrame(df.groupby('title')['rating'].count())_____no_output_____ratings.head()_____no_output_____ratings['count'].hist(bins=70, figsize=(10,5));_____no_output_____ratings['rating'].hist(bins=70, figsize=(10,5));_____no_output_____sns.jointplot(x="rating", y="count", data=ratings, alpha=0.4);_____no_output_____movie_mat = df.pivot_table(index="user_id", columns="title", values="rating")_____no_output_____# Most rated movies ratings.sort_values("count", ascending=False).head(10)_____no_output_____starwars_user_rating = movie_mat['Star Wars (1977)'] liar_user_rating = movie_mat['Liar Liar (1997)']_____no_output_____similar_to_starwars = pd.DataFrame(movie_mat.corrwith(starwars_user_rating), columns=["correlation"])/usr/local/lib/python3.7/dist-packages/numpy/lib/function_base.py:2551: RuntimeWarning: Degrees of freedom <= 0 for slice c = cov(x, y, rowvar) /usr/local/lib/python3.7/dist-packages/numpy/lib/function_base.py:2480: RuntimeWarning: divide by zero encountered in true_divide c *= np.true_divide(1, fact) similar_to_liar = pd.DataFrame(movie_mat.corrwith(liar_user_rating), columns=["correlation"])/usr/local/lib/python3.7/dist-packages/numpy/lib/function_base.py:2551: RuntimeWarning: Degrees of freedom <= 0 for slice c = cov(x, y, rowvar) /usr/local/lib/python3.7/dist-packages/numpy/lib/function_base.py:2480: RuntimeWarning: divide by zero encountered in true_divide c *= np.true_divide(1, fact) similar_to_liar.head()_____no_output_____similar_to_starwars.head()_____no_output_____similar_to_starwars.sort_values("correlation", ascending=False).head()_____no_output_____similar_to_liar.sort_values("correlation", ascending=False).head()_____no_output_____corr_starwars = similar_to_starwars.join(ratings["count"])_____no_output_____corr_liar = similar_to_liar.join(ratings['count'])_____no_output_____corr_starwars[corr_starwars['count'] > 100].sort_values('correlation', ascending=False).head()_____no_output_____corr_liar[corr_liar['count'] > 100].sort_values('correlation', ascending=False).head()_____no_output__________no_output_____ </code>
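The Star Wars and Liar Liar examples above follow the same recipe, so a small helper makes it easy to repeat for any title. This is a minimal sketch that reuses the `movie_mat` and `ratings` objects built earlier in the notebook; the `min_ratings` cut-off plays the same role as the `count > 100` filter above.
```
def recommend_similar(title, min_ratings=100, top_n=5):
    """Return the top_n titles whose user ratings correlate most with `title`."""
    user_ratings = movie_mat[title]
    similar = pd.DataFrame(movie_mat.corrwith(user_ratings), columns=['correlation'])
    similar = similar.join(ratings['count'])
    similar = similar[similar['count'] > min_ratings]
    # Drop the movie itself so it doesn't dominate its own recommendations
    similar = similar.drop(index=title, errors='ignore')
    return similar.sort_values('correlation', ascending=False).head(top_n)

recommend_similar('Star Wars (1977)')
```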
{ "repository": "markumreed/colab_sklearn", "path": "recommender_systems_sklearn_movie_data.ipynb", "matched_keywords": [ "STAR" ], "stars": null, "size": 116080, "hexsha": "487570de2d1c79305a8ba2f79b3a51ff2af915fb", "max_line_length": 58238, "avg_line_length": 87.4755086662, "alphanum_fraction": 0.7337525844 }
# Notebook from ingolia/mcb200-2020 Path: 0904_statistics/04_exercise_dinucleotides-updated.ipynb ## Dinucleotides and dipeptides We counted the occurrence of individual nucleotides in the genome and residues in the proteome. In real biological sequences, adjacent positions are rarely independent. We now have most of the tools to measure this directly. There are a couple of small, additional things we need to learn first, though. We know how to get _one_ nucleotide from a string, but we need a way to get two adjacent nucleotides out of a string. We could get each of the two letters separately and add them together. Here is the way we would get the 3rd and 4th letter of the alphabet out of a string that's the whole alphabet ``` alphabet='abcdefghijklmnopqrstuvwxyz' letter_2 = alphabet[2] letter_3 = alphabet[3] letters = letter_2 + letter_3 print(letters) ``` Try this out in the cell below, recalling that Python will start counting from 0._____no_output_____ <code> alphabet='abcdefghijklmnopqrstuvwxyz'_____no_output_____ </code> Alternately, we can **slice** out two letters at once from a string. Square brackets can extract a _range_ of values from a string or a list. To do this, we do `[start:end]` where the start is _included_ and the end is _excluded_. ``` alphabet[2:4] ``` This code goes from index 2 (the letter `c`) to index 3 (the letter `d`) and does not include index 4 (`e`). This can be a bit confusing, but one nice aspect of this is that it's easy to see the length of the slice. For instance, `alphabet[5:7]` is `7 - 5 = 2` nucleotides long. Try out this slicing in the cell below._____no_output_____ <code> alphabet[2:4]_____no_output_____ </code> Whether we index two nucleotides individually or slice a two-nucleotide piece out of our list, we need to loop over each possible starting position. We want to go all the way from `alphabet[0:2]` through `alphabet[24:26]`. The `range()` function allows us to iterate over a range of numbers. Just like in slices, we do `[start:end]` and the start is included while the end is not. Try out the example below to see how this runs. ``` for x in range(3,6): print(x) ```_____no_output_____ <code> for x in range(3,6): print(x)_____no_output_____ </code> Each chromosome is a different length, and so we want to compute the start and end using `len(...)` rather than picking a specific number. In the `alphabet` example, for a 2 letter slice we run from starting position 0 to starting position `len(alphabet) - 2 = 26 - 2 = 24`. However, the end of the range is not included, and so we want to do `range(0, 1 + len(alphabet)-2)`, which should use all twenty-five starting positions from 0 through 24 inclusive. The careful tracking of whether an endpoint is included or excluded, and whether you need to add or subtract 1 from a starting and ending point, shows up all the time in bioinformatics. It's often called a "fencepost" problem, referring to the fact that you need four fenceposts to hold up three sections of fence. It's also called an "off-by-one" problem, because we often find ourselves one position too long or too short. Try out this example below to confirm how we can get every two-letter pair out of a string. 
``` for start in range(0, 1 + len(alphabet)-2): print(str(start) + ' ' + alphabet[start:start+2]) ```_____no_output_____ <code> for start in range(0, 1 + len(alphabet)-2): print(str(start) + ' ' + alphabet[start:start+2])_____no_output_____ </code> Now that we know how to use `range()` and slices, we are really equipped with all the tools we need to count dinucleotides in the yeast genome._____no_output_____### Yeast genome dinucleotides First we need to import the `Bio.SeqIO` module from `biopython` so we can read in our yeast sequences._____no_output_____ <code> import sys !{sys.executable} -m pip install biopython from Bio import SeqIO_____no_output_____ </code> Then we need to import the `pandas` module for our `Series` and `DataFrame` types, and the `matplotlib.pyplot` module to make graphs._____no_output_____ <code> import pandas as pd import matplotlib.pyplot as plt _____no_output_____ </code> Here is a copy of our code to 1. Create `chroms` as an iterator over all the chromosomes 2. Create an empty dictionary to hold single-nucleotide counts 3. Loop over each chromosome 1. Assign the sequence of the chromosome to `chrom_seq` 1. Loop over each position in the chromosome 1. Assign the nucleotide at that position to `nt` 1. Add that nucleotide to the running tally 4. Convert the count dictionary into a `Series` 5. Print the sorted version of our count series 6. Plot a bar graph of our counts_____no_output_____ <code> chroms = SeqIO.parse("../S288C_R64-2-1/S288C_reference_sequence_R64-2-1_20150113.fsa", "fasta") nt_count = {} for chrom in chroms: chrom_seq = str(chrom.seq) for position in range(0, len(chrom_seq)): nt = chrom_seq[position] nt_count[nt] = nt_count.get(nt, 0) + 1 nt_series = pd.Series(nt_count) print(nt_series.sort_index()) nt_series.sort_index().plot(kind='bar')_____no_output_____ </code> ### Dinucleotides Convert this to count every adjacent pair of dinucleotides. You'll need to "slice" these out of the the chromosome sequences._____no_output_____#### Probabilities Convert the counts to probabilities by 1. Using the `.sum()` method to find the total number of dinucleotides counted 2. Dividing the `nt_series` series by this sum to get "normalized" probabilities_____no_output_____#### Marginal probabilities The table of dinucleotide probabilities give the _joint_ distribution. There are two way to compute the _marginal_ probability of an `A`. Compute this both ways and compare it to the value we got from the single-nucleotide counting above._____no_output_____Write a `for` loop to compute all four marginal probabilities. It's probably easiest to create an empty dictionary, then loop over each nucleotide option, compute its marginal probability, and store it in the dictionary. There are many reasonable ways to approach this, though_____no_output_____#### Conditional probabilities Compute the _conditional_ probability of a `C` following a first `A`. Is this higher or lower than the unconditional (marginal) probability of a `C`?_____no_output_____If you want to take this a bit further: write a pair of nested for loops to compute all of the conditional probabilities for the 2nd nucleotide of a dinucleotide, conditional on the identity of the first. What nucleotide combinations have conditional probabilities that are very different from the marginal? 
Another way of looking at this is to compute the ratio `P(MN) / (P(M) * P(N))`, which is the ratio between the observed dinucleotide probability and the expected dinucleotide probability under the assumption of independence._____no_output_____#### Dipeptides If you want to take this a lot further, you can run the same sort of analysis on dipeptides in the yeast proteome. Here's a slightly updated version of our loop to count amino acid frequencies in the yeast proteome, if you want to try it out._____no_output_____ <code> proteins = SeqIO.parse("../S288C_R64-2-1/orf_trans_all_R64-2-1_20150113.fasta", "fasta") aa_count = {} for protein in proteins: protseq = str(protein.seq) for pos in range(0, len(protseq)): aa = protseq[pos] aa_count[aa] = aa_count.get(aa, 0) + 1 print(aa_count)_____no_output_____ </code>
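If you attempt the observed-versus-expected ratio mentioned above, the generic sketch below shows one way to organize that calculation. It assumes you have already built a dictionary of adjacent-pair counts with the slicing loop from earlier in the notebook; `pair_count` is a hypothetical name for that dictionary, and the same code works for dinucleotides or dipeptides.
```
# Observed / expected ratio P(MN) / (P(M) * P(N)) from a dict of pair counts.
pair_series = pd.Series(pair_count)                 # pair_count is an assumed dict, see above
pair_prob = pair_series / pair_series.sum()         # joint probabilities P(MN)
first_marginal = pair_prob.groupby(lambda mn: mn[0]).sum()   # marginal of the 1st letter
second_marginal = pair_prob.groupby(lambda mn: mn[1]).sum()  # marginal of the 2nd letter
ratio = pd.Series({mn: p / (first_marginal[mn[0]] * second_marginal[mn[1]])
                   for mn, p in pair_prob.items()})
print(ratio.sort_values())
```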
{ "repository": "ingolia/mcb200-2020", "path": "0904_statistics/04_exercise_dinucleotides-updated.ipynb", "matched_keywords": [ "BioPython", "bioinformatics" ], "stars": null, "size": 11279, "hexsha": "4875f65dcf909a9d42edbf02e8af70b4915fad23", "max_line_length": 437, "avg_line_length": 31.6825842697, "alphanum_fraction": 0.6021810444 }
# Notebook from czbiohub/scrnaseq-for-the-99-percent Path: notebooks/346_bat_unaligned_kmers_in_human.ipynb # Imports_____no_output_____ <code> import glob import os import pandas as pd import scanpy as sc import seaborn as sns_____no_output_____ </code> ## Def describe_____no_output_____ <code> def describe(df, random=False): print(df.shape) print("--- First 5 entries ---") display(df.head()) if random: print('--- Random subset ---') display(df.sample(5))_____no_output_____ </code> ## Figure folder_____no_output_____ <code> figure_folder = '/home/olga/googledrive/kmer-homology-paper/figures/unaligned_kmers/' ! mkdir -p $figure_folder_____no_output_____ </code> ## Read one2one h5ad_____no_output_____ <code> adata = sc.read( "/home/olga/data_lg/data_sm_copy/immune-evolution/h5ads/human-lemur-mouse-bat/human-lemur-mouse-bat__lung_only.h5ad" ) adata.obs = adata.obs.reset_index().set_index('cell_id') print(adata) adata.obs.head()AnnData object with n_obs × n_vars = 126745 × 10560 obs: 'index', 'age', 'cell_barcode', 'cell_ontology_class', 'cell_ontology_id', 'channel', 'free_annotation', 'individual', 'sample', 'sequencing_run', 'sex', 'species', 'species_batch', 'species_latin', 'tissue', 'narrow_group', 'broad_group', 'compartment_group', 'compartment_narrow', 'channel_cleaned', 'batch', 'n_genes', 'n_counts', 'species_batch_v2', 'compartment_broad', 'compartment_broad_narrow', 'compartment_species', 'compartment_narrow_species', 'common_individual_id' var: 'bat__gene_name', 'mouse_lemur__gene_name-bat', 'mouse__gene_name-bat', 'mouse_lemur__gene_name_x-hlm', 'mouse__gene_name_x-hlm', 'gene_ids-lemur-hlm', 'n_cells-mouse-hlm', 'mouse_lemur__gene_name_y-hlm', 'mouse__gene_name_y-hlm' adata.var_____no_output_____ </code> # Read parquet files_____no_output_____## File paths_____no_output_____ <code> sketch_id = 'alphabet-dayhoff__ksize-51__scaled-10' sig_outdir_base = "/home/olga/data_lg/data_sm_copy/immune-evolution/kmer-signatures" parquets = glob.glob(f'/home/olga/data_lg/data_sm_copy/immune-evolution/kmer-signatures/*/2--single-cell-kmers/{sketch_id}/hash2kmer__unique_kmers_per_celltype.parquet') parquets_____no_output_____ </code> ### Bat_____no_output_____ <code> bat = pd.read_parquet('/home/olga/data_lg/data_sm_copy/immune-evolution/kmer-signatures/3--test-bat/2--single-cell-kmers/alphabet-dayhoff__ksize-51__scaled-10/hash2kmer__unique_kmers_per_celltype.parquet') describe(bat)(69988437, 13) --- First 5 entries --- </code> ### Human_____no_output_____ <code> human = pd.read_parquet('/home/olga/data_lg/data_sm_copy/immune-evolution/kmer-signatures/2--test-human/2--single-cell-kmers/alphabet-dayhoff__ksize-51__scaled-10/hash2kmer__unique_kmers_per_celltype.parquet') describe(human)(33366578, 13) --- First 5 entries --- bat.groupby('broad_group').cell_id.nunique()_____no_output_____human.groupby('broad_group').cell_id.nunique()_____no_output_____bat.groupby('broad_group').apply(lambda x: x.hashval.nunique()/x.cell_id.nunique())_____no_output_____bat_unaligned = bat.query('alignment_status == "unaligned"') bat_unaligned.head()_____no_output_____bat_aligned = bat.query('alignment_status == "aligned"') describe(bat_aligned)(66637862, 13) --- First 5 entries --- bat_aligned.dtypes_____no_output_____1+1_____no_output_____bat_hashval_to_ncells = bat_aligned.groupby(['hashval', 'broad_group']).cell_id.nunique() bat_hashval_to_ncells.name = 'n_cells_bat' bat_hashval_to_ncells.head()_____no_output_____bat_unaligned.dtypes_____no_output_____bat_unaligned_hashes = set(bat_unaligned.hashval) 
len(bat_unaligned_hashes)_____no_output_____human.hashval = human.hashval.astype(str)_____no_output_____human.hashval.dtypes_____no_output_____%%time human_hashvals_bat_unaligned = human.query( "hashval in @bat_unaligned_hashes" ) describe(human_hashvals_bat_unaligned)(8003714, 13) --- First 5 entries --- human_hashvals_bat_unaligned_gene_names = human_hashvals_bat_unaligned.groupby('broad_group').gene_name.value_counts() human_hashvals_bat_unaligned_gene_names.name = 'n_hashes' human_hashvals_bat_unaligned_gene_names = human_hashvals_bat_unaligned_gene_names.reset_index() describe(human_hashvals_bat_unaligned_gene_names)(33379, 3) --- First 5 entries --- </code> ### Read bat unannotated genes_____no_output_____ <code> bat_unannotated = pd.read_csv('/home/olga/data_lg/data_sm_copy/immune-evolution/databases/unannotated-genes-in-bat-compared-to-human-GRCh38/unannotated_gene_in_bat_compared_to_GRCH38p13.csv') bat_unannotated = bat_unannotated.dropna(how='all', axis=1) bat_unannotated = {k: v for k, v in bat_unannotated.iteritems()} # print(bat_unannotated.shape) # bat_unannotated.head() bat_unannotated.keys()_____no_output_____dfs = [] for gene_category, gene_names in bat_unannotated.items(): # print(f'--- {gene_category} ---') df = human_hashvals_bat_unaligned_gene_names.query('gene_name in @gene_names') df['gene_category'] = gene_category dfs.append(df) bat_unaligned_in_human = pd.concat(dfs) describe(bat_unaligned_in_human) (2593, 4) --- First 5 entries --- bat_unaligned_in_human.to_csv( os.path.join(figure_folder, "bat_unaligned_in_human.csv"), index=False )_____no_output_____bat_unaligned_in_human.query('gene_category == "ISG" and broad_group == "Alveolar Epithelial Type 2"')_____no_output_____bat_unaligned_in_human_n_genes = bat_unaligned_in_human.groupby(['gene_category', 'broad_group']).size() bat_unaligned_in_human_n_genes.name = 'n_genes' bat_unaligned_in_human_n_genes = bat_unaligned_in_human_n_genes.reset_index() bat_unaligned_in_human_n_genes = bat_unaligned_in_human_n_genes.sort_values(['gene_category', 'n_genes'], ascending=False) bat_unaligned_in_human_n_genes_____no_output_____ </code> ### Write unannotated genes to csv_____no_output_____### Plot number of unannoted genes per category_____no_output_____ <code> g = sns.catplot( y="broad_group", x="n_genes", col="gene_category", sharex=False, data=bat_unaligned_in_human_n_genes, kind='bar', height=3, palette='Dark2' # sharey=False, ) g.set_titles("{col_name}")_____no_output_____ </code> ## Celltype palette_____no_output_____ <code> figure_folder_____no_output_____celltype_palette = dict( zip( sorted(bat_unaligned_in_human_n_genes.broad_group.unique()), sns.color_palette("Dark2", n_colors=10), ), )_____no_output_____ </code> ### one plot per category_____no_output_____ <code> for gene_group, df in bat_unaligned_in_human_n_genes.groupby("gene_category"): df = df.sort_values("n_genes", ascending=False) g = sns.catplot( y="broad_group", x="n_genes", col="gene_category", sharex=False, data=df, order=df.broad_group, palette=celltype_palette, kind="bar", height=3, aspect=1.5, # sharey=False, ) g.set_titles("{col_name}") pdf = os.path.join(figure_folder, f'barplot__unannotated_genes_found_in_unaligned_kmers__{gene_group}.pdf') g.savefig(pdf) png = pdf.replace('.pdf', '.png') g.savefig(png)_____no_output_____bat_unaligned_in_human_with_hashvals = bat_unaligned_in_human.merge( human_celltype_unique_hashvals_bat_unaligned, on=["broad_group", "gene_name"] ) bat_unaligned_in_human_with_hashvals = bat_unaligned_in_human_with_hashvals.merge( 
bat_with_celltypes_unaligned, on=[ "broad_group", "kmer_in_sequence", "kmer_in_alphabet", "hashval", "sketch_id", 'ksize', "moltype", "scaled", ], suffixes=('_human', '_bat') ) # bat_unaligned_in_human_with_hashvals = bat_unaligned_in_human_with_hashvals.join( # bat_hashval_to_ncells, on=["hashval", "broad_group"] # ) # bat_unaligned_in_human_with_hashvals = bat_unaligned_in_human_with_hashvals.drop( # ["read_name", "species", "cell_id", "alignment_status"], axis=1 # ) describe(bat_unaligned_in_human_with_hashvals)(3170, 20) --- First 5 entries --- n_cells_per_celltype_bat = bat_with_celltypes.groupby('broad_group').cell_id.nunique() n_cells_per_celltype_bat.name = 'n_cells_bat_per_celltype' n_cells_per_celltype_bat_____no_output_____bat_unaligned_in_human_with_hashvals_n_cells_per_gene = ( bat_unaligned_in_human_with_hashvals.groupby( ["broad_group", "gene_category", "gene_name_human"], observed=True ).cell_id_bat.nunique() ) bat_unaligned_in_human_with_hashvals_n_cells_per_gene.name = "n_cells_bat" bat_unaligned_in_human_with_hashvals_n_cells_per_gene = ( bat_unaligned_in_human_with_hashvals_n_cells_per_gene.reset_index() ) bat_unaligned_in_human_with_hashvals_n_cells_per_gene = ( bat_unaligned_in_human_with_hashvals_n_cells_per_gene.sort_values( ["n_cells_bat"], ascending=False ) ) bat_unaligned_in_human_with_hashvals_n_cells_per_gene = ( bat_unaligned_in_human_with_hashvals_n_cells_per_gene.sort_values( [ "broad_group", ], ascending=True, ) ) bat_unaligned_in_human_with_hashvals_n_cells_per_gene = ( bat_unaligned_in_human_with_hashvals_n_cells_per_gene.join( n_cells_per_celltype_bat, on="broad_group" ) ) bat_unaligned_in_human_with_hashvals_n_cells_per_gene[ "bat_percent_cells_per_celltype" ] = 100 * bat_unaligned_in_human_with_hashvals_n_cells_per_gene.n_cells_bat.divide( bat_unaligned_in_human_with_hashvals_n_cells_per_gene.n_cells_bat_per_celltype ) bat_unaligned_in_human_with_hashvals_n_cells_per_gene = ( bat_unaligned_in_human_with_hashvals_n_cells_per_gene.sort_values( ["broad_group", "n_cells_bat"] ) )_____no_output_____bat_unaligned_in_human_with_hashvals_n_cells_per_gene.query('n_cells_bat > 20')_____no_output_____ </code> ### Write genes that are unaligned and the number of cells per celltype_____no_output_____ <code> bat_unaligned_in_human_with_hashvals_n_cells_per_gene.to_csv( os.path.join(figure_folder, "bat_unaligned_in_human__with_n_cells_expressing.csv"), index=False )_____no_output_____bat_unaligned_in_human_with_hashvals_n_cells_per_gene.shape_____no_output_____bat_unaligned_in_human_with_hashvals_n_cells_per_gene.to_csv('')_____no_output_____bat_with_celltypes.groupby('broad_group').cell_id.nunique()_____no_output_____bat_unaligned_in_human_with_hashvals_n_cells_per_gene.query('broad_group == "Alveolar Epithelial Type 2" and gene_category != "unannotated_all"').sort_values('n_cells_bat', ascending=False)_____no_output_____ _____no_output_____human_celltype_unique_hashvals_bat_unaligned_gene_names.loc[human_celltype_unique_hashvals_bat_unaligned_gene_names.gene_name.str.startswith("GZM")]_____no_output_____for name, df in human_celltype_unique_hashvals_bat_unaligned_gene_names.groupby('broad_group'): describe(df)(3934, 3) --- First 5 entries --- human_with_celltypes.groupby('broad_group').apply(lambda x: x.hashval.nunique()/x.cell_id.nunique())_____no_output_____bat_aligned_kmers = bat_celltype_unique_hashvals.groupby('broad_group').alignment_status.value_counts() bat_aligned_kmers = bat_aligned_kmers.unstack() 
bat_aligned_kmers_____no_output_____bat_aligned_kmers_percentage = 100 * bat_aligned_kmers.divide(bat_aligned_kmers.sum(axis=1), axis=0) bat_aligned_kmers_percentage_____no_output_____human_aligned_kmers = human_celltype_unique_hashvals.groupby('broad_group').alignment_status.value_counts() human_aligned_kmers = human_aligned_kmers.unstack() human_aligned_kmers_____no_output_____human_aligned_kmers_percentage = 100 * human_aligned_kmers.divide(human_aligned_kmers.sum(axis=1), axis=0) human_aligned_kmers_percentage_____no_output_____bat_celltype_unique_hashvals.to_parquet( "/home/olga/data_lg/data_sm_copy/immune-evolution/kmer-signatures/3--test-bat/2--single-cell-kmers/alphabet-dayhoff__ksize-51__scaled-10/hash2kmer__unique_kmers_per_celltype.parquet" )_____no_output_____ </code> ## Generalize to a script_____no_output_____ <code> %%file get_unique_kmers_per_celltype.py import argparse import glob import os import pandas as pd import scanpy as sc from joblib import Parallel, delayed from IPython.display import display from tqdm import tqdm SHARED_CELLTYPES = [ "Alveolar Epithelial Type 2", "B cell", "Capillary", "Dendritic", "Fibroblast", "Macrophage", "Monocyte", "Natural Killer T cell", "Smooth Muscle and Myofibroblast", "T cell", ] def describe(df, random=False): print(df.shape) print("--- First 5 entries ---") display(df.head()) if random: print("--- Random subset ---") display(df.sample(5)) def process_hash2kmer(parquet, adata_shared, celltype_col): hash2kmer = pd.read_parquet(parquet) describe(hash2kmer) hash2kmer_with_celltypes = hash2kmer.join( adata_shared.obs[celltype_col], on="cell_id" ) hash2kmer_celltype_unique_hashvals = hash2kmer_with_celltypes.drop_duplicates( [ "kmer_in_sequence", "kmer_in_alphabet", "hashval", "gene_name", "alignment_status", "broad_group", "cell_id", ] ) describe(hash2kmer_celltype_unique_hashvals) parquet_out = parquet.replace(".parquet", "__unique_kmers_per_celltype.parquet") hash2kmer_celltype_unique_hashvals.to_parquet(parquet_out) # Show number of aligned/unaligned k-mers per celltype per_celltype_alignment_status_kmers = hash2kmer_celltype_unique_hashvals.groupby( celltype_col, observed=True ).alignment_status.value_counts() print(per_celltype_alignment_status_kmers) def main(): p = argparse.ArgumentParser() # base directory containing a 2--single-cell-kmers folder which contains sketch id directories with sig2kmer csvs p.add_argument("species_base_dir") p.add_argument( "--kmer-subdir", default="2--single-cell-kmers", type=str, help="Subdirectory containing csvs within each per-sketch id subdirectory", ) p.add_argument( "--h5ad", default="/home/olga/data_lg/data_sm_copy/immune-evolution/h5ads/human-lemur-mouse-bat/human-lemur-mouse-bat__lung_only.h5ad", help=("Location of the AnnData h5ad object of single-cell data"), ) p.add_argument( "--n-jobs", default=3, type=int, help=( "Number of jobs to do in parallel. By default, 3 for the 3 molecule types (DNA, protein, Dayhoff)" ), ) p.add_argument( "--celltype-col", default="broad_group", help=( "Column name endcoding the cell type in the h5ad AnnData object, i.e. an adata.obs column" ), ) args = p.parse_args() adata = sc.read(args.h5ad) adata.obs = adata.obs.reset_index().set_index("cell_id") adata_shared = adata[adata.obs[args.celltype_col].isin(SHARED_CELLTYPES)] parquets = glob.iglob( os.path.join( args.species_base_dir, args.kmer_subdir, "*", # This is the sketch_id, e.g. 
alphabet-DNA__ksize-21__scaled-10 "hash2kmer.parquet", ) ) if args.n_jobs > 1: Parallel(n_jobs=args.n_jobs)( delayed(process_hash2kmer)(parquet, adata_shared, args.celltype_col) for parquet in parquets ) else: for parquet in tqdm(parquets): print("hash2kmer parquet:", parquet) process_hash2kmer(parquet, adata_shared, args.celltype_col) if __name__ == "__main__": main()Overwriting get_unique_kmers_per_celltype.py </code> ## Write out commands_____no_output_____ <code> PYTHON = '/home/olga/miniconda3/envs/immune-evolution/bin/python' PWD = '/home/olga/code/immune-evolution--olgabot/analyze-kmermaid-bladder/notebooks' GET_UNIQUE_KMERS = f"{PWD}/get_unique_kmers_per_celltype.py" template = f"{PYTHON} {GET_UNIQUE_KMERS} " + r"{species_dir}" template_____no_output_____species_globber = os.path.join('/home/olga/data_lg/data_sm_copy/immune-evolution/kmer-signatures/', '*--t*') for species_dir in glob.glob(species_globber): print(template.format(species_dir=species_dir))/home/olga/miniconda3/envs/immune-evolution/bin/python /home/olga/code/immune-evolution--olgabot/analyze-kmermaid-bladder/notebooks/get_unique_kmers_per_celltype.py /home/olga/data_lg/data_sm_copy/immune-evolution/kmer-signatures/2--test-human /home/olga/miniconda3/envs/immune-evolution/bin/python /home/olga/code/immune-evolution--olgabot/analyze-kmermaid-bladder/notebooks/get_unique_kmers_per_celltype.py /home/olga/data_lg/data_sm_copy/immune-evolution/kmer-signatures/3--test-bat /home/olga/miniconda3/envs/immune-evolution/bin/python /home/olga/code/immune-evolution--olgabot/analyze-kmermaid-bladder/notebooks/get_unique_kmers_per_celltype.py /home/olga/data_lg/data_sm_copy/immune-evolution/kmer-signatures/4--test-lemur /home/olga/miniconda3/envs/immune-evolution/bin/python /home/olga/code/immune-evolution--olgabot/analyze-kmermaid-bladder/notebooks/get_unique_kmers_per_celltype.py /home/olga/data_lg/data_sm_copy/immune-evolution/kmer-signatures/1--train-mouse </code> ## Read mouse diagnostic k-mer csvs_____no_output_____ <code> mouse_dir = '/home/olga/data_lg/data_sm_copy/immune-evolution/kmer-signatures/1--train-mouse/' celltype_kmer_subdir = '5--celltype-kmers--merged-celltype-remove-common-kmers--min-kmer-count--5-percent' mouse_celltype_kmer_csvs = glob.glob(os.path.join(mouse_dir, celltype_kmer_subdir, sketch_id, 'csvs', '*csv'))_____no_output_____ </code> # Gene orthology_____no_output_____## Load orthology from MGI_____no_output_____ <code> mgi_orthology = pd.read_csv('http://www.informatics.jax.org/downloads/reports/HOM_MouseHumanSequence.rpt ', sep='\t') describe(mgi_orthology)(40015, 13) --- First 5 entries --- human_orthologous_genes = set(mgi_orthology.loc[mgi_orthology['Common Organism Name'] == 'human'].Symbol) len(human_orthologous_genes)_____no_output_____mouse_orthologous_genes = set(mgi_orthology.loc[mgi_orthology['Common Organism Name'] == 'mouse, laboratory'].Symbol) len(mouse_orthologous_genes)_____no_output_____ </code> ### Get only 1:1 orthologs_____no_output_____ <code> mgi_one2one = mgi_orthology.groupby("HomoloGene ID").filter( lambda x: len(x) == 2 and len(x["Common Organism Name"].unique()) == 2 ) describe(mgi_one2one)(32934, 13) --- First 5 entries --- mgi_one2one_2d = mgi_one2one.pivot(index='HomoloGene ID', values='Symbol', columns='Common Organism Name') describe(mgi_one2one_2d)(16467, 2) --- First 5 entries --- </code> # Bat-mouse_____no_output_____## Read one2one orthologs_____no_output_____ <code> one2one = 
pd.read_csv('/home/olga/data_lg/data_sm_copy/immune-evolution/h5ads/bat/bat_lung__one2one_orthologs_var.csv') one2one.head()_____no_output_____ </code> ## Assign k-mer type based on alignment status and gene name_____no_output_____ <code> PER_SPECIES_ORTHOLOGOUS_GENES = { "bat": set(one2one.bat__gene_name), "human": set(mgi_one2one_2d['human']) & set(one2one.human__gene_name), 'mouse': set(mgi_one2one_2d.query('human in @one2one.human__gene_name')['mouse, laboratory']) } for k, v in PER_SPECIES_ORTHOLOGOUS_GENES.items(): print(k, len(v))bat 11379 human 11366 mouse 11366 </code>
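The heading above promises a k-mer type assigned from alignment status and gene name, but the excerpt stops after building the per-species ortholog sets. A minimal sketch of how that assignment could look, assuming a k-mer table with `gene_name` and `alignment_status` columns; the helper name and the `kmer_type` column are illustrative, not from the original notebook:_____no_output_____ <code> def assign_kmer_type(kmer_df, species, gene_col="gene_name", status_col="alignment_status"):
    # Tag each k-mer row by whether its gene has a 1:1 ortholog in this species,
    # combined with its existing alignment status.
    orthologs = PER_SPECIES_ORTHOLOGOUS_GENES[species]
    out = kmer_df.copy()
    ortholog_label = out[gene_col].isin(orthologs).map(
        {True: "orthologous", False: "non_orthologous"}
    )
    out["kmer_type"] = ortholog_label + "__" + out[status_col].astype(str)
    return out

# e.g. assign_kmer_type(bat_with_celltypes_unaligned, "bat")_____no_output_____ </code>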
{ "repository": "czbiohub/scrnaseq-for-the-99-percent", "path": "notebooks/346_bat_unaligned_kmers_in_human.ipynb", "matched_keywords": [ "Scanpy", "single-cell" ], "stars": 2, "size": 267358, "hexsha": "4876186d004d57930c159ae6fb7104672f1ce536", "max_line_length": 19740, "avg_line_length": 46.6430565248, "alphanum_fraction": 0.5708937081 }
# Notebook from immersinn/rssfeed_link_collector Path: notebooks/explore/Log Investigate 2017-04-24.ipynb <code> main_repo_dir = os.path.abspath(os.path.join('../..')) sys.path.append(os.path.join(main_repo_dir, 'src'))_____no_output_____import datetime_____no_output_____import pandas_____no_output_____import utils_____no_output_____with open(os.path.join(main_repo_dir, "main_run.log"), "r") as f: log = f.readlines()_____no_output_____dates, entries = zip(*[(' '.join(line.split()[:3]), ' '.join(line.split()[3:])) for line in log])_____no_output_____dates = [datetime.datetime.strptime(date, '%m/%d/%Y %I:%M:%S %p') for date in dates]_____no_output_____log = pandas.DataFrame(data = {'entry' : entries}, index=dates)_____no_output_____log['day'] = [ent.strftime('%Y-%m-%d') for ent in log.index]_____no_output_____log = log['2017-03-14':]_____no_output_____log.shape_____no_output_____# Define which entry we're in # There's got to be a better way to do this with an iterator... entry_ind = -1 def update_entry(x): global entry_ind if x['entry'] == "Retrieving contents...": entry_ind += 1 return(entry_ind) # Flag Errors def flag_error(x): text = x.entry.lower() if text.find('warning') > -1 or text.find('error') > -1: return(True) else: return(False) def extract_err_class(error_text): start = error_text.find('<class ') end = error_text.find("'>:") if start > -1 and end > -1: err_class = error_text[start : (end + 2)] else: err_class = '' return(err_class)_____no_output_____log['entry_ind'] = log.apply(lambda x: update_entry(x), axis=1) log['is_error'] = log.apply(lambda x: flag_error(x), axis=1) log['err_class'] = log.entry.apply(lambda x: extract_err_class(x))_____no_output_____log.shape_____no_output_____log.head()_____no_output_____log.tail()_____no_output_____ </code> ## Investiate Errors_____no_output_____ <code> errors = log[log.is_error==True].copy()_____no_output_____errors.shape_____no_output_____errors.head()_____no_output_____errors.tail()_____no_output_____for e in errors.err_class.unique(): print(e)<class 'urllib.error.HTTPError'> <class 'http.client.IncompleteRead'> <class 'mysql.connector.errors.DataError'> <class 'UnboundLocalError'> <class 'urllib.error.URLError'> <class 'KeyError'> <class 'mysql.connector.errors.DatabaseError'> </code> # DB Errors_____no_output_____ <code> ke = errors[errors.err_class=="<class 'mysql.connector.errors.DatabaseError'>"].copy() ke.shape_____no_output_____ke_____no_output_____ke.entry.unique()_____no_output_____ </code> #### Format to different text type?_____no_output_____### Key Errors_____no_output_____ <code> te = errors[errors.err_class == "<class 'KeyError'>"].copy() te.shape_____no_output_____te_____no_output_____te.entry.unique()_____no_output_____ </code> #### Not worth dealing with.._____no_output_____### MySQL Connector Errors_____no_output_____ <code> err01 = "<class 'mysql.connector.errors.DataError'>" err02 = "<class 'mysql.connector.errors.DatabaseError'>"_____no_output_____ </code> #### Data Error_____no_output_____ <code> dee = errors[errors.err_class==err01] dee.shape_____no_output_____dee.tail()_____no_output_____ </code> Only 1, not a big deal..._____no_output_____ <code> dee.entry[-1]_____no_output_____ </code> #### DB Error_____no_output_____ <code> dee = errors[errors.err_class==err02] dee.shape_____no_output_____dee.tail().index_____no_output_____len(dee.entry.unique())_____no_output_____dee.entry.unique()_____no_output_____ </code> Strange, non-unicode string errors again. 
Not worth fixing, perhaps..?_____no_output_____ <code> sec = pandas.tslib.Timedelta('1 second')_____no_output_____def get_err_contexts(errs): contexts = [] for ind in errs.index: i = 1 success = False while not success: sub = list(log[str(ind - i*sec)]['entry']) if len(sub)== 0: i += 1 if i > 3: success=True else: success = True contexts.append({'index' : str(ind), 'context' :sub}) return(contexts)_____no_output_____cons = get_err_contexts(dee)_____no_output_____cons[-5:]_____no_output_____for i in range(-5,0): print(dee[cons[i]['index']]['entry'][0])Error encountered: <class 'mysql.connector.errors.DatabaseError'>: (1366, "1366 (HY000): Incorrect string value: '\\xF0\\x9F\\x90\\xB8 (...' for column 'summary' at row 1", 'HY000') Error encountered: <class 'mysql.connector.errors.DatabaseError'>: (1366, "1366 (HY000): Incorrect string value: '\\xF0\\x9F\\x90\\xB8 (...' for column 'summary' at row 1", 'HY000') Error encountered: <class 'mysql.connector.errors.DatabaseError'>: (1366, "1366 (HY000): Incorrect string value: '\\xF0\\x9F\\x90\\xB8 (...' for column 'summary' at row 1", 'HY000') Error encountered: <class 'mysql.connector.errors.DatabaseError'>: (1366, "1366 (HY000): Incorrect string value: '\\xF0\\x9F\\x90\\xB8 (...' for column 'summary' at row 1", 'HY000') Error encountered: <class 'mysql.connector.errors.DatabaseError'>: (1366, "1366 (HY000): Incorrect string value: '\\xF0\\x9F\\x90\\xB8 (...' for column 'summary' at row 1", 'HY000') feed_data = utils.load_feedlist_data('breitbart_feedlist.xml')_____no_output_____for i,f in enumerate(feed_data): print(str(i) + ' ' + f['Link'])0 http://feeds.feedburner.com/breitbart?format=xml rss_entry = feed_data[0] rss_entry_____no_output_____import scrape_feeds_____no_output_____contents = scrape_feeds.get_feed_contents(rss_entry)_____no_output_____from bs4 import BeautifulSoup as bs_____no_output_____for c in contents: flag = False if len(c['title']) > 200: flag = True if len(c['link']) > 200: flag = True if len(c['summary']) > 5000: flag = True if flag: print(c)_____no_output_____for i in range(len(contents)): print(contents[i]['summary']) print('\n') Advocates who want to see America's immigration laws enforced have found a home in President Donald Trump's administration, the New York Times' Nicholas Kulish reports. Harmeet Dhillon, a San Francisco lawyer representing the UCB College Republicans, held a press conference Monday to discuss the case being brought against UC Berkeley and suggested deploying the National Guard to provide security if Berkeley's mayor could not maintain control of the city. Less than 100 days since leaving office, the former president has already arranged for a big Wall Street payday. Outsourcing firms, which supply major U.S. companies with thousands of foreign workers, are being called-out by the Trump White House. Actress Rachel Bloom has teamed up with "Science Guy" Bill Nye to debut an LGBT sex anthem on the new Netflix series 'Bill Nye Saves the World.' On the Tuesday edition of Breitbart News Daily, broadcast live on SiriusXM Patriot Channel 125 from 6AM to 9AM Eastern, Breitbart editor-in-chief Alex Marlow will continue our discussion of the first 100 days of the Trump administration. He’ll be joined by the renowned conservative writer and historian Patrick J. Buchanan. Buchanan, who was the subject of a recent Politico profile, will weigh in on Trump’s first 100 days. 
As the opening paragraph of Politico’s article details, Buchanan is nothing short of a living legend whose keen insights into the populist nationalist movement predate Trump’s political ascendancy by decades: His first date with his future wife was spent in a New Hampshire motel room drinking Wild Turkey into the wee hours with Hunter S. Thompson. He stood several feet away from Martin Luther King Jr. during the “I Have a Dream” speech. He went to China with Richard M. Nixon and walked away from Watergate unscathed. He survived Iran-Contra, too, and sat alongside Ronald Reagan at the Reykjavík Summit. He invaded America’s living rooms and pioneered the rhetorical combat that would power the cable news age. He defied the establishment by challenging a sitting president of his own party. He captured the fear and frustration of The Washington Examiner's Byron York evaluates President Donald Trump's accomplishment and failures in his first three months in office, walking through the promises made in Trump's "Contract with the American Voter." On Monday’s broadcast of “MSNBC Live,” NBC reporter Ron Allen stated, “Democrats are feeling somewhat desperate, you might even say. They’re demoralized by the results of the election, they see a hero in President Obama.” Allen said, “[H]is supporters, advocates, those who backed President Obama, they really want to hear from him. Democrats are feeling somewhat desperate, you might even say. They’re demoralized by the results of the election, they see a hero in President Obama. And the question is, whether he will play that big role, that big opposition leader role that so many of his supporters want.” (h/t NewsBusters) Follow Ian Hanchett on Twitter @IanHanchett Pop superstar Rihanna is hitting back against snowflake "haters" who criticized her for daring to Photoshop images of Queen Elizabeth II on to photos of herself in fashion-forward get-ups. Elton John has cancelled all of his upcoming concerts in Las Vegas after contracting a rare and "potentially deadly" infection while touring in South America. Former President Barack Obama admitted that if his high school activities were online somewhere, he probably wouldn’t have been president. Rep. Ted Lieu (D-CA) called Attorney General Jeff Sessions "a racist and a liar" on Sunday, repeating discredited "fake news" accusations. President Donald Trump and Vice President Mike Pence have steered a course during the administration’s first 100 days that anti-abortion activists have been dreaming about for decades. With about a week to go until his administration reaches the 100-day mark, President Trump's campaign promises to halt Syrian refugee resettlement and tighten up the refugee vetting process remain largely unfulfilled. Asked if the plan would be revenue neutral, meaning any reductions in expected revenue from tax cuts would be offset from increases in revenue or spending cuts elesewhere, Mnuchin instead said the plan would "pay for itself" through economic growth. Chicago Mayor Rahm Emanuel (D) is pushing more gun control for Federal Firearms License holders (FFLs) in the city. Attorney Harmeet Dhillon, who is representing the Berkeley College Republicans in their free speech case against the University of California, Berkeley over failing to allow Ann Coulter to speak on campus on equal terms, slammed the American Civil Liberties Union (ACLU) for ignoring the case." 
Harmeet Dhillon, a San Francisco lawyer representing the UCB College Republicans, held a press conference today to discuss the case being brought against the college due to their mishandling of an event hosting conservative speaker Ann Coulter. The Berkeley College Republicans and the Young America's Foundation have filed a lawsuit against members of the University of California system for their role in restricting an upcoming speaking event featuring Ann Coulter. Former Fox News host Megyn Kelly is set to make her long-anticipated debut at NBC News in June, according to a report. A group on a Delta flight going from Tampa to Los Angeles was treated to a Kenny G performance Saturday morning. Per ABC Action News, the person sitting next to the award-winning saxophonist was an off-duty flight attendant whose daughter had died of brain cancer. She asked Kenny G to play. The head flight attendant then told passengers that Kenny G would play for them if they donated $1,000 to cancer charity Relay for Life. Passengers rose to the challenge and then some, raising about $2,000. Kenny G lived up to his promise, performing while walking up and down the aisle. Follow Breitbart.tv on Twitter @BreitbartVideo Actress Sarah Wynter has warned that Republicans' push for national reciprocity of concealed carry permits will mean the blind and mentally ill will be carrying guns. If you want to know the likely result of next month's French presidential election run off, just look at how the markets responded. The euro and the French markets both jumped dramatically. An up-and-coming leader of Finland’s only Eurosceptic party has proposed a national referendum on leaving the European Union as well as on abandoning the euro in favor of a national currency. UKIP’s leadership has slammed the press for “trivialising” serious issues by mocking the party's integration policies, with a UKIP Lord claiming the BBC “hadn’t a clue” about some dangerous Islamist doctrines. Britain's governing Conservative Party has hired Barack Obama's former Deputy Chief of Staff to help run its election campaign. WASHINGTON, D.C. -- Sen. Elizabeth Warren’s 20-year tenure in public service is rife with contradictions, as an examination of her current positions juxtaposed with her past behavior reveals. The State of California issued the first tranche of taxable construction bonds last Thursday for the High Speed Rail Project, making it clear that it is determined to go ahead with the unpopular project despite numerous obstacles, including federal funding roadblocks thrown up by President Donald Trump. A viable high-speed rail system that actually serves consumer interest may be built in California after all. Monday on Fox News Channel’s “Fox & Friends,” Attorney General Jeff Sessions revealed he sent a letter to 10 cities functioning as so-called “sanctuary cities,” which may be in violation of federal law. Sessions warned a failure to respond to those letters could result in those cities losing their federal funding. “Last year, the Obama administration sent out notices that people had to comply with this cooperative language in the law that was passed several years ago,” Sessions explained. “And we sent out a letter today to 10 cities that the Inspector General’s office last year said were potentially in violation of the law involving deportation in sanctuary cities. We expect them to respond. 
If they don’t respond, they should not receive the grants because the grants were issued on condition of cooperation.” Sessions went on to add the Department of Justice hoped to get a response certifying those cities were not in violation of the law by June. Follow Jeff Poor on Twitter @jeff_poor A review of the Drug Enforcement Agency’s (DEA) wanted fugitive list revealed that less than 16 percent of DEA fugitives are from the United States. Three Mexican nationals have been charged with transporting and harboring five illegal aliens across the U.S.-Mexico Border. Monday on MSNBC's "Morning Joe," while discussing the Department of Justice saying that New York's police department was "soft on crime" as fake news, Senate Minority Leader Chuck Schumer (D-NY) took a shot at Breitbart President Donald Trump again leveled criticism about NAFTA in an interview with the Associated Press, promising to either renegotiate the trade deal or terminate it. Monday on MSNBC’s “Morning Joe,” Rep. Tom Cole (R-OK) downplayed the possibility of a government shutdown as the deadline to pass a measure to fund the federal government approaches. Cole told host Joe Scarborough with the 60-vote rule in the U.S. Senate, the effort to prevent a shutdown must be bipartisan and therefore trying to impose the majority party’s will for a partisan victory would fail. “You know, I don’t think we’ll have a shutdown,” he said. “There’s certainly a chance we could a have a short-term continuing resolution, but we’re within striking distance of getting this done. I hope we do get it done. If you leave it to the appropriators we will get it done. The real question is whether or not outside groups in the extremes are both caucus. You know, we’ll use the last minute to try and tack things on that make it extremely difficult to have bipartisan cooperation. In the end, to fund the government because of the 60-rule requirement in the United States Senate you really do have to have bipartisan cooperation. So, if anybody doesn’t realize that and tries to, you know, impose a 100 percent partisan victory it’s going to fail.” A new Washington Post-ABC News poll shows that since President Donald Trump took office on January 20 the number of people who believe the economy is improving is at the highest level in 15 years, The Post reported on Sunday. A new poll shows that Americans overwhelmingly support President Donald Trump's "Buy American, Hire American" policy, echoing similar surveys conducted in 2014 by campaign pollster Kellyanne Conway. NEW ORLEANS, Louisiana -- The removal of Civil War-era monuments is underway after almost two years of calls by Mayor Mitch Landrieu (D) to remove four that he deemed "racist". An alleged robber threatened a cab driver at knifepoint in the Bronx, according to surveillance footage released by the New York Police Department. Two people were arrested after a 4-year-old boy in Milwaukee, Wisconsin, died from a possible overdose of opioids on Saturday. Jon Ungoed-Thomas and Mason Boycott-Owen report in The Sunday Times that cereal giant Kellogg’s has been funding studies that undermine official government warnings about the link between sugary cereals and obesity. The left wing mayor of the northern French town of Annezin, Daniel Delomez, has announced his resignation after residents in his town largely voted for anti-mass migration Front National leader Marine Le Pen. 
UKIP have been accused of “full-throated Islamophobia” after ratcheting up their rhetoric on multiculturalism and Islam, praising “superior” British values and describing Islam as “regressive”. Bloomberg Politics reports on the developing political landscape in France after Sunday’s vote, and how the coming choice for that nation is one of Nationalism versus Globalism. In the coming two weeks of the French campaign, Marine Le Pen’s challenge is to break through a wall of voter antipathy that she inherited from her father. Emmanuel Macron’s task is to persuade the French he has the gravitas and experience to be president. The far-right Le Pen and centrist Macron both took just under a quarter of the vote in a contest with 11 candidates. Now they must convince the rest of the population that they have what it takes to lead the country after the May 7 runoff. The next round will see two radically different visions. Macron embraces globalization and European integration, Le Pen channels the forces of discontent that triggered Brexit and brought Donald Trump to power. The runoff will also be unique in that it will be the first contested by neither of the major parties, giving Macron, 39, and Le Pen, 48, space to try to forge alliances that might have seemed unlikely until recently. “Marine Le Pen’s toughest job is to break the traditional glass In an open letter to Donald Trump, climate expert Dr. Duane Thresher has urged the President not to give in to his daughter Ivanka’s misguided views on global warming and her insistence that the U.S. remain in the Paris climate agreement ratified by Barack Obama last August. PARIS (AP) — French authorities have filed preliminary charges against two people of plotting an attack days before a tense presidential election. The Paris prosecutor’s office said Sunday the two men are being kept in custody pending further investigation. They were given preliminary charges Sunday of “association with a terrorist enterprise with plans to prepare one or several attacks,” and weapons and explosives charges. The two suspected Islamic radicals were arrested Tuesday in Marseille and police seized guns and explosives. The target of their potential attack is unclear, though presidential campaign teams were warned about the threat. Meanwhile, prosecutors said that investigators have released three people without charge after they were detained in an attack on Paris’ Champs-Elysees. The attacker was killed but three people in his entourage were detained for two days. Former premier Francois Fillon was one of the early favourites to become president, but in the end he led the French right to a historic defeat, as a fake jobs scandal engulfed his campaign. In a victory speech Sunday evening, Front National anti-mass migration candidate Marine Le Pen called on patriots in France to support her against Emmanuel Macron, who she called the heir of unpopular French President Francois Hollande. Ms Le Pen, who exit polls show finished second in the first round of the French presidential election, told a large crowd gathered in the north of France that the battle was now between supporters of globalism and those against. "Ce n'est pas avec l'héritier de François #Hollande que cette alternance tant attendue viendra." 
#Présidentielle2017 — Marine Le Pen (@MLP_officiel) April 23, 2017 Calling the first round projected result, which will see her face off against Emmanuel Macron, “historic” she thanked her supporters for bringing her to the second round which is set to take place on May 7th. Le Pen outlined her main priority as president as to “defend the French nation” emphasising her previous campaign rhetoric which focused on restoring order to a country plagued by Islamic extremist terrorism and riots in various Paris suburbs. The Front National leader hit out against the French political and media establishment, saying that the system had attempted to “choke political debate”. Her victory ensured that the debate could now The euro surged Monday after moderate Emmanuel Macron won the first round in France's presidential election and looked set to triumph in the run-off against right wing candidate Marine Le Pen next month. </code> Only a single article. No longer on the feed page._____no_output_____### URL Errors_____no_output_____ <code> urle = errors[errors.err_class=="<class 'urllib.error.URLError'>"] urle.shape_____no_output_____urle.tail()_____no_output_____urle.entry.unique()[:10]_____no_output_____roots = urle.entry.apply(lambda x: x[5:22])_____no_output_____len(roots.unique())_____no_output_____for e in urle.entry.unique(): print(e) print('\n')Feed http://feeds.bbci.co.uk/news/world/latin_america/rss.xml: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed http://feeds.bbci.co.uk/news/world/middle_east/rss.xml: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed http://feeds.bbci.co.uk/news/world/us_and_canada/rss.xml: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed http://feeds.bbci.co.uk/news/video_and_audio/news_front_page/rss.xml?edition=uk: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed http://feeds.bbci.co.uk/news/video_and_audio/world/rss.xml: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/editorials/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/earth-news/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/earth-news/earth-sciences/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/earth-news/environment/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/science-news/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/science-news/archaeology-fossils/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/science-news/economics-business/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/science-news/mathematics/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/science-news/sci-other/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/science-news/social-sciences/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/nanotech-news/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/nanotech-news/bio-medicine/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/nanotech-news/nano-materials/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed 
https://phys.org/rss-feed/nanotech-news/nano-physics/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/physics-news/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/physics-news/materials/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/physics-news/physics/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/physics-news/optics-photonics/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/physics-news/plasma/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/physics-news/quantum-physics/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/physics-news/soft-matter/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/physics-news/superconductivity/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/space-news/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/space-news/astronomy/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/space-news/space-exploration/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/technology-news/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/technology-news/business-tech/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/technology-news/computer-sciences/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/technology-news/consumer-gadgets/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/technology-news/energy-green-tech/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/technology-news/engineering/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/technology-news/hardware/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/technology-news/hi-tech/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/technology-news/internet/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/technology-news/other/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/technology-news/robotics/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/technology-news/security/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/technology-news/semiconductors/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/technology-news/software/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/technology-news/telecom/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/biology-news/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/biology-news/biotechnology/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/biology-news/microbiology/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/biology-news/ecology/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed 
https://phys.org/rss-feed/biology-news/evolution/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/biology-news/biology-other/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/biology-news/plants-animals/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/chemistry-news/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/chemistry-news/analytical-chemistry/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/chemistry-news/biochemistry/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/chemistry-news/materials-science/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/chemistry-news/chemistry-other/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) Feed https://phys.org/rss-feed/chemistry-news/polymers/: <class 'urllib.error.URLError'>: (GeneralProxyError(),) feed_data = utils.load_feedlist_data('physorg_feedlist.xml')_____no_output_____for i,f in enumerate(feed_data[:15]): print(str(i) + ' ' + f['Link'])0 https://phys.org/rss-feed/ 1 https://phys.org/rss-feed/editorials/ 2 https://phys.org/rss-feed/earth-news/ 3 https://phys.org/rss-feed/earth-news/earth-sciences/ 4 https://phys.org/rss-feed/earth-news/environment/ 5 https://phys.org/rss-feed/science-news/ 6 https://phys.org/rss-feed/science-news/archaeology-fossils/ 7 https://phys.org/rss-feed/science-news/economics-business/ 8 https://phys.org/rss-feed/science-news/mathematics/ 9 https://phys.org/rss-feed/science-news/sci-other/ 10 https://phys.org/rss-feed/science-news/social-sciences/ 11 https://phys.org/rss-feed/nanotech-news/ 12 https://phys.org/rss-feed/nanotech-news/bio-medicine/ 13 https://phys.org/rss-feed/nanotech-news/nano-materials/ 14 https://phys.org/rss-feed/nanotech-news/nano-physics/ urle.tail(10)_____no_output_____ </code> No issues since beginning of March; this issue seems to be fixed now._____no_output_____### HTTP Errors_____no_output_____ <code> htpe = errors['2017-03-02 00:00:00':].copy() htpe = htpe[htpe.err_class == "<class 'urllib.error.HTTPError'>"] htpe.shape_____no_output_____htpe.tail()_____no_output_____roots = htpe.entry.apply(lambda x: x[5:22])_____no_output_____roots.unique()_____no_output_____len(htpe.entry.unique())_____no_output_____htpe.entry.unique()_____no_output_____ </code> This is strange. Links seem to work. May be a tor thing? Let's see if this continues with change in settings for main...?_____no_output_____#### Incomplete Read Error_____no_output_____ <code> htpce = errors['2017-03-14 00:00:00':].copy() htpce = htpce[htpce.err_class == "<class 'http.client.IncompleteRead'>"] htpce.shape_____no_output_____htpce_____no_output_____ </code> Not a big deal..._____no_output_____
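A footnote on the `mysql.connector.errors.DatabaseError` entries investigated above: the rejected bytes `\xF0\x9F\x90\xB8` are one 4-byte UTF-8 character (an emoji), and MySQL's legacy 3-byte `utf8` charset cannot store it, hence the "Incorrect string value" error on the `summary` column. If it ever becomes worth fixing, two hedged options (the table name below is a placeholder):_____no_output_____ <code> import re

# Option 1: strip characters outside the Basic Multilingual Plane before inserting,
# so 4-byte UTF-8 sequences (emoji and similar) never reach the database.
ASTRAL_CHARS = re.compile(r'[\U00010000-\U0010FFFF]')

def strip_astral(text):
    return ASTRAL_CHARS.sub('', text)

# Option 2: fix it on the database side instead (run once, with the real table name),
# and pass charset='utf8mb4' when creating the mysql.connector connection:
# ALTER TABLE <table> MODIFY summary TEXT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;_____no_output_____ </code>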
{ "repository": "immersinn/rssfeed_link_collector", "path": "notebooks/explore/Log Investigate 2017-04-24.ipynb", "matched_keywords": [ "evolution", "biology", "ecology" ], "stars": null, "size": 103686, "hexsha": "4876b9933b979db59e466fd62797d46cad0145f6", "max_line_length": 1296, "avg_line_length": 38.3312384473, "alphanum_fraction": 0.5137337731 }
# Notebook from michael-swift/seqclone Path: notebooks/SwitchTX_Figure.ipynb <code> import switchy.CloneStats as cs import switchy.util as ut import pandas as pd import numpy as np import sys import os import time import random import copy import math import scanpy as sc %matplotlib inline from matplotlib import pyplot as plt import matplotlib as mpl import seaborn as sns import autoreload import scipy params = { 'font.size': 12, 'axes.titlesize': 12, 'axes.labelsize': 12, 'legend.fontsize': 12, 'xtick.labelsize': 8, 'ytick.labelsize': 10, 'font.family': "Helvetica", 'pdf.fonttype': 42, 'ps.fonttype': 42, 'figure.dpi': 100 } from nheatmap import nhm mpl.rcParams.update(params) sns.set_style("ticks") sns.set_context(context='paper') savefig_args = {"dpi": 300, "bbox_inches": "tight", "pad_inches": 0, "transparent": False} mpl.rc('savefig', dpi=300) output_dir='figures/9.17.20_PaperDraft/' output_suffix = "" output_formats = [".png", ".pdf"] def save_figure(fig, name, output_dir=output_dir, output_suffix=output_suffix, output_formats=output_formats, savefig_args=savefig_args): for output_format in output_formats: fig.savefig(output_dir + "/" + name + output_suffix + output_format, **savefig_args) return None pd.set_option('display.max_rows', 50) pd.set_option('display.max_columns', 20) pd.set_option('display.width', 100) %load_ext autoreload %autoreload 2 cfgFile = '../switchy/Prototyping.ini'The autoreload extension is already loaded. To reload it, use: %reload_ext autoreload # Load SJout Data ab_tx, switch_tx = ut.loadSJoutIGH('../../../SharedData/SJ/CombinedSJouts_chr14_IGH.fthr')filtering SJout to just IGH locus making SJTable human readable parameters, io, config = cs.readConfig(cfgFile) adata, df = cs.prepareData(io['CountsFile'], parameters['datatype'], parameters.getboolean('highly_variable'), int(parameters['n_highly_variable']), parameters.getboolean('onlyClones'), parameters.getboolean('remove_immune_receptors'), parameters.getboolean('normalize'), parameters.getboolean('filterCells'))(1350, 10) (14714, 2) shape of adata after filtering # Filter Sjout to be only cells which pass QC ab_tx = ab_tx[ab_tx.cell.isin(adata.obs.index)] switch_tx = switch_tx[switch_tx.cell.isin(adata.obs.index)] switch_tx['exon_start'] = switch_tx['exon_start'].str.split('_', expand = True)[0] ab_tx['exon_start'] = ab_tx['exon_start'].str.split('_', expand = True)[0]_____no_output_____IgH_genes = ['IGHM', 'IGHD', 'IGHG3', 'IGHG1', 'IGHA1', "IGHG2", 'IGHG4','IGHE', 'IGHA2'] _____no_output_____def clustermap(df): sum_df = df.groupby(['cell', 'exon_start']).sum() sum_df['uniquelog2'] = np.log10(sum_df['unique_mapping']) ret_df = sum_df.uniquelog2.unstack().fillna(np.log10(1)) ret_df = ret_df[IgH_genes] sns.clustermap(ret_df) return ret_df_____no_output_____cellbygeneAb = clustermap(ab_tx)_____no_output_____cellbygeneSwitch = clustermap(switch_tx)_____no_output_____def clean_df(dataframe): columns = dataframe.columns.to_list() rows = dataframe.index.to_list() return pd.DataFrame(data = dataframe.values, index=rows, columns=columns)_____no_output_____ab = clean_df(cellbygeneAb)_____no_output_____dfr = adata.obs[['Treatment', 'Division_Number', 'ISOTYPE']] dfr.columns = ['Treatment', 'Division Number', 'Isotype'] _dfr = clean_df(dfr) _dfr['Division Number'] = _dfr['Division Number'].replace('None', '0') _dfr = _dfr[_dfr.Isotype.isin(IgH_genes)] _df = df[df.index.isin(_dfr.index)]_____no_output_____cmaps={'Treatment':'Set1', 'PC1':'RdYlGn', 'gene cluster':'inferno', 'Division Number':'Reds', 
'Isotype':'Paired'}_____no_output_____g = nhm(data=_df[IgH_genes], dfr=_dfr,figsize=(15, 10), linewidths=0, showxticks=True, cmaps=cmaps, srot = 90) fig, plots = g.run()_____no_output_____g.hcluster() fig, plots = g.run()_____no_output_____save_figure(fig, 'nheatmap_IgH')_____no_output______df = cellbygeneAb _df = pd.merge(_dfr, _df, left_index=True, right_index=True) _dfr = _df.iloc[:,:3] data = _df.iloc[:,3:] g = nhm(data=data, dfr=_dfr,figsize=(15, 10), linewidths=0, showxticks=True, cmaps=cmaps, srot = 90, ) fig, plots = g.run()_____no_output_____g.hcluster() fig, plots = g.run()_____no_output_____save_figure(fig, "nheatmap_AbTx")_____no_output______df = cellbygeneSwitch _df = pd.merge(_dfr, _df, left_index=True, right_index=True) _dfr = _df.iloc[:,:3] data = _df.iloc[:,3:] g = nhm(data=data, dfr=_dfr,figsize=(15, 10), linewidths=0, showxticks=True, cmaps=cmaps, srot = 90, ) fig, plots = g.run()_____no_output_____g.hcluster() fig, plots = g.run()_____no_output_____save_figure(fig, "nheatmap_SwitchTX")_____no_output_____ </code>
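One note for reading the clustermaps above: `clustermap()` stores `np.log10` of the summed junction counts in a column named `uniquelog2` (the name says log2, the values are log10), and missing cell/gene combinations are filled with `np.log10(1) = 0` after unstacking. A zero in the heatmap therefore means either no reads or exactly one read; a toy illustration:_____no_output_____ <code> toy = pd.Series({"IGHM": 100.0, "IGHG1": 1.0, "IGHA1": np.nan})
np.log10(toy).fillna(np.log10(1))  # IGHM -> 2.0; IGHG1 and the absent IGHA1 both -> 0.0_____no_output_____ </code>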
{ "repository": "michael-swift/seqclone", "path": "notebooks/SwitchTX_Figure.ipynb", "matched_keywords": [ "Scanpy" ], "stars": null, "size": 996601, "hexsha": "4876d8ec05a50f6a58b75070e3934fb5babc05d7", "max_line_length": 165816, "avg_line_length": 2244.5968468468, "alphanum_fraction": 0.9624433449 }
# Notebook from ivirshup/scanpy-interactive Path: notebooks/gene_selection.ipynb # Gene selection widget prototype This implemets a searchable list of genes, of which multiple can me selected (Cmd-click). ## Possible extensions * Speed up updates to options in each selector. Takes a while when it's a long list. * Figure out a better api for formatting gene options_____no_output_____# Setup_____no_output_____ <code> import matplotlib as mpl mpl.use("Agg") # Only for output example import matplotlib.pyplot as plt_____no_output_____import pandas as pd import numpy as np import scanpy.api as sc import ipywidgets from functools import partial from itertools import repeat import io_____no_output_____from ipywidgets import Text, SelectMultiple, Button, Image, HBox, VBox, Output from ipywidgets import GridBox, Layout_____no_output_____ </code> Preprocess data_____no_output_____ <code> # An already processed anndata # adata = sc.read("../data/CellBench10X.h5ad") adata = sc.read("../data/CellBench10X_noraw.h5ad", backed="r") # also works with a backed anndata_____no_output_____# Something I'd like to handle better adata.var["search_field"] = adata.var["gene_symbol"].astype(str) + " (" + adata.var_names.values + ")"_____no_output_____ </code> # Basic example_____no_output_____**NOTE:** These require the mpl inline backend, but importing that makes the caching example not work._____no_output_____## Callbacks_____no_output_____ <code> def sorting_callback(search): options = pd.Series(selection.options) new_options = options.copy() is_match = options.str.contains(search.new, case=False) found = options[is_match].values found.sort() new_options.iloc[:len(found)] = found new_options.iloc[len(found):] = options[~is_match].values selection.options = new_options_____no_output_____def plot_selected(selected): selected_indices = adata.var_names[adata.var["search_field"].isin(selected.new)] out.clear_output() with out: sc.pl.umap(adata, color=selected_indices)_____no_output_____ </code> ## Plotting_____no_output_____ <code> # Widgets out = ipywidgets.Output() searchbar = ipywidgets.Text(value="search here", continuous_update=False) selection = ipywidgets.SelectMultiple(options=adata.var["search_field"]) # Callbacks searchbar.observe(sorting_callback, names=["value"]) selection.observe(plot_selected, names=["value"]) # Output ipywidgets.VBox([searchbar, selection, out])_____no_output_____def gene_selector(adata): left_search = Text("search here", continuous_update=False) right_search = Text("seach here", continuous_update=False) left_options = SelectMultiple(options=adata.var["search_field"]) right_options = SelectMultiple(options=[]) move_right = Button(description=">>") move_left = Button(description="<<") plots = Output() def sorting_callback(search, selection): options = pd.Series(selection.options) new_options = options.copy() is_match = options.str.contains(search.new, case=False) found = options[is_match].values found.sort() new_options.iloc[:len(found)] = found new_options.iloc[len(found):] = options[~is_match].values selection.options = new_options def move_selection(button, orig, dest): """ Args: button: Button which triggers callback orig: Selector options are moving from dest: Selector options are moving to """ dest_new_opts = list(orig.value) orig_new_opts = list() for option in orig.options: if option not in orig.value: orig_new_opts.append(option) dest_new_opts.extend(dest.options) dest.values = [] dest.options = dest_new_opts orig.options = orig_new_opts def plot_selected(selected, out): 
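        # Callback: map the selected display labels back to gene ids, clear the
        # Output widget, and redraw the UMAP panels for the current selection.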
selected_indices = adata.var_names[adata.var["search_field"].isin(selected.new)] out.clear_output() if len(selected_indices) > 0: with out: sc.pl.umap(adata, color=selected_indices) left_search.observe(partial(sorting_callback, selection=left_options), names=["value"]) right_search.observe(partial(sorting_callback, selection=right_options), names=["value"]) move_right.on_click(partial(move_selection, orig=left_options, dest=right_options)) move_left.on_click(partial(move_selection, orig=right_options, dest=left_options)) right_options.observe(partial(plot_selected, out=plots), names=["options"]) layout = VBox([ HBox([ VBox([left_search, left_options]), VBox([move_right, move_left]), VBox([right_search, right_options]) ]), plots ]) return layout_____no_output_____gene_selector(adata)_____no_output_____ </code> ## Experiment: filtering search instead of sorting (failed)_____no_output_____In this version, I'll only update with values that match the search, possibly speeding up the process. This adds a a lot of complication to the code. For example, if move some values over and there is a search applied, do I show them? Do I now have to retrigger the search? It might not be worth it. Moving the search to javascript might be the way to go._____no_output_____ <code> def gene_selector(adata): left_search = Text("search here", continuous_update=False) right_search = Text("seach here", continuous_update=False) left_options = adata.var["search_field"].values right_options = pd.Series() left_selector = SelectMultiple(options=left_options) right_selector = SelectMultiple(options=[]) move_right = Button(description=">>") move_left = Button(description="<<") plots = Output() def search_callback(search, options, selection): """ Update selection with fields from options which contain search. Args: search (str): Search term/ regex. 
options (Sequence[str]) selection (SelectMultiple) """ options = pd.Series(options) is_match = options.str.contains(search.new, case=False) new_options = options[is_match] new_options.sort_values() selection.options = new_options def move_selection(button, orig, dest, orig_options, dest_options): """ Args: button: Button which triggers callback orig: Selector options are moving from dest: Selector options are moving to orig_options: Options for selector orig dest_options: Options for selector dest """ dest_new_opts = dest_options orig_new_opts = orig_options for option in orig.options: if option not in orig.value: orig_new_opts.append(option) dest_new_opts.extend(dest.options) dest.values = [] dest.options = dest_new_opts orig.options = orig_new_opts def plot_selected(selected, out): selected_indices = adata.var_names[adata.var["search_field"].isin(selected.new)] out.clear_output() if len(selected_indices) > 0: with out: sc.pl.umap(adata, color=selected_indices) left_search.observe(partial(search_callback, options=left_options, selection=left_selector), names=["value"]) right_search.observe(partial(search_callback, options=right_options, selection=right_selector), names=["value"]) move_right.on_click(partial(move_selection, orig=left_selector, dest=right_selector, orig_options=left_options, dest_options=left_options)) move_left.on_click(partial(move_selection, orig=right_selector, dest=left_selector, orig_options=right_options, dest_options=left_options)) right_selector.observe(partial(plot_selected, out=plots), names=["options"]) layout = VBox([ HBox([ VBox([left_search, left_selector]), VBox([move_right, move_left]), VBox([right_search, right_selector]) ]), plots ]) return layout_____no_output_____gene_selector(adata)_____no_output_____ </code> ## Experiment: Caching plots_____no_output_____* I would like to cache plots between selections, this should make displaying them faster especially when I'm working with more cells. * This was reaaaaally slow once. Not sure what to make of that. Generally, this is much faster. * This one requires non-inline backend, or you'll get plots returned multiple times. Not sure what to do about that._____no_output_____ <code> # Define callbacks def sorting_callback(search, selection): """ Args: search (str): Search term/ regex. selection (SelectMultiple) """ options = pd.Series(selection.options) new_options = options.copy() is_match = options.str.contains(search.new, case=False) found = options[is_match].values found.sort() new_options.iloc[:len(found)] = found new_options.iloc[len(found):] = options[~is_match].values selection.options = new_options def move_selection(button, orig, dest): """ Args: button: Button which triggers callback orig: Selector options are moving from dest: Selector options are moving to """ dest_new_opts = list(orig.value) orig_new_opts = list() for option in orig.options: if option not in orig.value: orig_new_opts.append(option) dest_new_opts.extend(dest.options) dest.values = [] dest.options = dest_new_opts orig.options = orig_new_opts def plot_selected(adata, selected, plot_grid, plot_cache): """ Args: adata (anndata.AnnData): AnnData object to be plotting from. selected: Object from selection callback plot_grid (ipywidgets.GridBox): Grid box to put plots in. plot_cache (dict): Cache of previously rendered plots. 
""" selected_items = adata.var.loc[adata.var["search_field"].isin(selected.new), "search_field"] for index, option in selected_items.iteritems(): if option not in plot_cache: fig = sc.pl.umap(adata, color=index, show=False, title=option).figure with io.BytesIO() as byteio: fig.savefig(byteio, format="png") img = ipywidgets.Image(value=byteio.getvalue(), format="png") plt.close(fig) plot_cache[option] = img plotlist = [] if len(selected_items) > 0: for option in selected_items: plot = plot_cache[option] plotlist.append(plot) plot_grid.children = plotlist_____no_output_____def gene_selector(adata, ncols=3): # Define elements left_search = Text("search here", continuous_update=False) right_search = Text("seach here", continuous_update=False) left_selector = SelectMultiple(options=adata.var["search_field"]) right_selector = SelectMultiple(options=[]) move_right = Button(description=">>") move_left = Button(description="<<") plot_grid = GridBox(layout=Layout(grid_template_columns=" ".join(repeat("1fr", ncols)))) plot_cache = {} # Register callbacks left_search.observe(partial(sorting_callback, selection=left_selector), names=["value"]) right_search.observe(partial(sorting_callback, selection=right_selector), names=["value"]) move_right.on_click(partial(move_selection, orig=left_selector, dest=right_selector)) move_left.on_click(partial(move_selection, orig=right_selector, dest=left_selector)) right_selector.observe(partial(plot_selected, adata, plot_grid=plot_grid, plot_cache=plot_cache), names=["options"]) # Define layout layout = VBox([ HBox([ VBox([left_search, left_selector]), VBox([move_right, move_left]), VBox([right_search, right_selector]) ]), plot_grid ]) return layout_____no_output_____gene_selector(adata, ncols=2)_____no_output_____ </code>
{ "repository": "ivirshup/scanpy-interactive", "path": "notebooks/gene_selection.ipynb", "matched_keywords": [ "Scanpy" ], "stars": null, "size": 18339, "hexsha": "4877d7dad5f5b73cce712e4961485072ac5918d2", "max_line_length": 258, "avg_line_length": 32.8655913978, "alphanum_fraction": 0.5349255685 }
# Notebook from cuttlefishh/papers Path: palmyra-corals/notebooks/taxa_heatmaps.ipynb ## carter_taxa_heatmaps.ipynb_____no_output_____ <code> from qiime2 import Artifact from qiime2.plugins import feature_table import pandas as pd import numpy as np import re import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline_____no_output_____sns.set(style='whitegrid') plt.rcParams["font.family"] = "Times New Roman"_____no_output_____ </code> ### Prepare data_____no_output_____#### Paths_____no_output_____ <code> path_b_map = '/Users/luke.thompson/carter/metadata/14693_analysis_mapping_cleaned_bleaching.txt' path_c_map = '/Users/luke.thompson/carter/metadata/14693_analysis_mapping_cleaned_corallimorph.txt' path_b_biom = '/Users/luke.thompson/carter/post-tourmaline/bleaching_biom_dada2_pe_filtered.qza' path_c_biom = '/Users/luke.thompson/carter/post-tourmaline/corallimorph_biom_dada2_pe_filtered.qza' path_b_tax = '/Users/luke.thompson/carter/tourmaline-bleaching/03-repseqs/dada2-pe/taxonomy.qza' path_c_tax = '/Users/luke.thompson/carter/tourmaline-corallimorph/03-repseqs/dada2-pe/taxonomy.qza'_____no_output_____ </code> #### Import metadata_____no_output_____ <code> df_b_map = pd.read_csv(path_b_map, sep='\t', index_col=0) df_c_map = pd.read_csv(path_c_map, sep='\t', index_col=0)_____no_output_____ </code> #### Import taxonomy, convert to DataFrame_____no_output_____ <code> b_tax_artifact = Artifact.load(path_b_tax) b_tax = b_tax_artifact.view(view_type=pd.DataFrame)_____no_output_____c_tax_artifact = Artifact.load(path_c_tax) c_tax = c_tax_artifact.view(view_type=pd.DataFrame)_____no_output_____ </code> #### Import DADA2 filtered biom tables, convert to relative frequency, convert to DataFrame_____no_output_____ <code> b_table_filt = Artifact.load(path_b_biom) b_table_filt_rel_result = feature_table.methods.relative_frequency(table=b_table_filt) b_table_filt_rel = b_table_filt_rel_result.relative_frequency_table df_b5_filt_rel = b_table_filt_rel.view(pd.DataFrame)_____no_output_____c_table_filt = Artifact.load(path_c_biom) c_table_filt_rel_result = feature_table.methods.relative_frequency(table=c_table_filt) c_table_filt_rel = c_table_filt_rel_result.relative_frequency_table df_c5_filt_rel = c_table_filt_rel.view(pd.DataFrame)_____no_output_____ </code> #### Add metadata, average over species and treatment groups_____no_output_____ <code> df_b5_filt_rel = df_b5_filt_rel.join(df_b_map[['host_scientific_name', 'coral_health_plus_year']]) df_b5_filt_rel_group = df_b5_filt_rel.groupby(['host_scientific_name', 'coral_health_plus_year']).mean() _____no_output_____df_c5_filt_rel = df_c5_filt_rel.join(df_c_map[['host_scientific_name', 'zone_plus_year']]) df_c5_filt_rel_group = df_c5_filt_rel.groupby(['host_scientific_name', 'zone_plus_year']).mean() df_c5_filt_rel_group = df_c5_filt_rel_group[df_c5_filt_rel_group.sum().sort_values(ascending=False).index]_____no_output_____ </code> ### Heatmaps_____no_output_____#### Split by species, sort ASVs by total abundance, plot heatmap_____no_output_____#### Bleaching_____no_output_____ <code> ax.set_xticklabels?Object `ax.set_xticklabels` not found. 
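# note: the `ax.set_xticklabels?` lookup above reports "Object ... not found"
# simply because `ax` is not defined until the plotting loops below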
for species in df_b5_filt_rel_group.index.levels[0]: df_b5_filt_rel_group_sp = df_b5_filt_rel_group.loc[species] df_b5_filt_rel_group_sp = df_b5_filt_rel_group_sp[df_b5_filt_rel_group_sp.sum().sort_values(ascending=False).index] df_plot = df_b5_filt_rel_group_sp.iloc[:,:10] fig, ax = plt.subplots() sns.heatmap(df_plot.T, cmap='PuBuGn') labels_rename = {'bleached_2015': 'Bleached\n2015', 'bleached_2016': 'Recovered\n2016', 'healthy_2015': 'Healthy\n2015', 'healthy_2016': 'Healthy\n2016'} labels = [labels_rename[x] for x in df_plot.index] ax.set_xticks([0.7, 1.7, 2.7, 3.7]) ax.set_xticklabels(labels, size=12, rotation=90, horizontalalignment='right') ax.set_xlabel('') taxa = [b_tax.loc[x]['Taxon'].split(';')[-1] for x in df_plot.columns] ax.set_yticklabels(taxa, size=12) fig.savefig('../figures/heatmap_bleaching_%s.pdf' % re.sub(' ', '_', species), bbox_inches = 'tight')_____no_output_____ </code> #### Corallimorph_____no_output_____ <code> for species in df_c5_filt_rel_group.index.levels[0]: df_c5_filt_rel_group_sp = df_c5_filt_rel_group.loc[species] df_c5_filt_rel_group_sp = df_c5_filt_rel_group_sp[df_c5_filt_rel_group_sp.sum().sort_values(ascending=False).index] df_plot = df_c5_filt_rel_group_sp.iloc[:,:10] fig, ax = plt.subplots() sns.heatmap(df_plot.T, cmap='PuBuGn') labels_rename = {'AB_2015': 'Healthy\n2015', 'AB_2016': 'Healthy\n2016', 'CD_2015': 'Interaction Zone\n2015', 'CD_2016': 'Interaction Zone\n2016', 'EF_2015': 'Invaded\n2015', 'EF_2016': 'Invaded\n2016'} labels = [labels_rename[x] for x in df_plot.index] ax.set_xticks([0.7, 1.7, 2.7, 3.7, 4.7, 5.7]) ax.set_xticklabels(labels, size=12, rotation=90, horizontalalignment='right') ax.set_xlabel('') taxa = [c_tax.loc[x]['Taxon'].split(';')[-1] for x in df_plot.columns] ax.set_yticklabels(taxa, size=12) fig.savefig('../figures/heatmap_corallimorph_%s.pdf' % re.sub(' ', '_', species), bbox_inches = 'tight')_____no_output_____ </code>
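The bleaching and corallimorph cells above differ only in their label map, x-tick positions, taxonomy table, and output filename. A hedged refactoring sketch of the shared logic (the helper name and signature below are ours, not from the original notebook; it reuses the imports already loaded above):_____no_output_____ <code> def plot_taxa_heatmap(df_group, tax, labels_rename, xticks, prefix, n_taxa=10):
    """Per-species heatmap of the n_taxa most abundant ASVs in a grouped table."""
    for species in df_group.index.levels[0]:
        df_sp = df_group.loc[species]
        # Sort ASV columns by total relative abundance and keep the top n_taxa
        df_sp = df_sp[df_sp.sum().sort_values(ascending=False).index]
        df_plot = df_sp.iloc[:, :n_taxa]
        fig, ax = plt.subplots()
        sns.heatmap(df_plot.T, cmap='PuBuGn')
        labels = [labels_rename[x] for x in df_plot.index]
        ax.set_xticks(xticks)
        ax.set_xticklabels(labels, size=12, rotation=90, horizontalalignment='right')
        ax.set_xlabel('')
        taxa = [tax.loc[x]['Taxon'].split(';')[-1] for x in df_plot.columns]
        ax.set_yticklabels(taxa, size=12)
        fig.savefig('../figures/heatmap_%s_%s.pdf' % (prefix, re.sub(' ', '_', species)),
                    bbox_inches='tight')_____no_output_____# Equivalent to the bleaching cell above
plot_taxa_heatmap(df_b5_filt_rel_group, b_tax,
                  {'bleached_2015': 'Bleached\n2015', 'bleached_2016': 'Recovered\n2016',
                   'healthy_2015': 'Healthy\n2015', 'healthy_2016': 'Healthy\n2016'},
                  [0.7, 1.7, 2.7, 3.7], 'bleaching')_____no_output_____ </code>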
{ "repository": "cuttlefishh/papers", "path": "palmyra-corals/notebooks/taxa_heatmaps.ipynb", "matched_keywords": [ "QIIME2" ], "stars": 3, "size": 389738, "hexsha": "4878dc1bbc2f708189b734e8adaee787ec06583e", "max_line_length": 42788, "avg_line_length": 962.3160493827, "alphanum_fraction": 0.9549492223 }
# Notebook from janeite/course-content Path: tutorials/W2D4_DynamicNetworks/student/W2D4_Tutorial2.ipynb <a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W2D4_DynamicNetworks/student/W2D4_Tutorial2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>_____no_output_____# Tutorial 2: Wilson-Cowan Model **Week 2, Day 4: Dynamic Networks** **By Neuromatch Academy** __Content creators:__ Qinglong Gu, Songtin Li, Arvind Kumar, John Murray, Julijana Gjorgjieva __Content reviewers:__ Maryam Vaziri-Pashkam, Ella Batty, Lorenzo Fontolan, Richard Gao, Spiros Chavlis, Michael Waskom, Siddharth Suresh_____no_output_____**Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** <p align='center'><img src='https://github.com/NeuromatchAcademy/widgets/blob/master/sponsors.png?raw=True'/></p>_____no_output_____--- # Tutorial Objectives *Estimated timing of tutorial: 1 hour, 35 minutes* In the previous tutorial, you became familiar with a neuronal network consisting of only an excitatory population. Here, we extend the approach we used to include both excitatory and inhibitory neuronal populations in our network. A simple, yet powerful model to study the dynamics of two interacting populations of excitatory and inhibitory neurons, is the so-called **Wilson-Cowan** rate model, which will be the subject of this tutorial. The objectives of this tutorial are to: - Write the **Wilson-Cowan** equations for the firing rate dynamics of a 2D system composed of an excitatory (E) and an inhibitory (I) population of neurons - Simulate the dynamics of the system, i.e., Wilson-Cowan model. - Plot the frequency-current (F-I) curves for both populations (i.e., E and I). - Visualize and inspect the behavior of the system using **phase plane analysis**, **vector fields**, and **nullclines**. Bonus steps: - Find and plot the **fixed points** of the Wilson-Cowan model. - Investigate the stability of the Wilson-Cowan model by linearizing its dynamics and examining the **Jacobian matrix**. - Learn how the Wilson-Cowan model can reach an oscillatory state. Bonus steps (applications): - Visualize the behavior of an Inhibition-stabilized network. - Simulate working memory using the Wilson-Cowan model. \\ Reference paper: _[Wilson H and Cowan J (1972) Excitatory and inhibitory interactions in localized populations of model neurons. 
Biophysical Journal 12](https://doi.org/10.1016/S0006-3495(72)86068-5)______no_output_____ <code> # @title Tutorial slides # @markdown These are the slides for the videos in all tutorials today from IPython.display import IFrame IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/nvuty/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)_____no_output_____ </code> --- # Setup_____no_output_____ <code> # Imports import matplotlib.pyplot as plt import numpy as np import scipy.optimize as opt # root-finding algorithm_____no_output_____# @title Figure Settings import ipywidgets as widgets # interactive display %config InlineBackend.figure_format = 'retina' plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")_____no_output_____# @title Plotting Functions def plot_FI_inverse(x, a, theta): f, ax = plt.subplots() ax.plot(x, F_inv(x, a=a, theta=theta)) ax.set(xlabel="$x$", ylabel="$F^{-1}(x)$") def plot_FI_EI(x, FI_exc, FI_inh): plt.figure() plt.plot(x, FI_exc, 'b', label='E population') plt.plot(x, FI_inh, 'r', label='I population') plt.legend(loc='lower right') plt.xlabel('x (a.u.)') plt.ylabel('F(x)') plt.show() def my_test_plot(t, rE1, rI1, rE2, rI2): plt.figure() ax1 = plt.subplot(211) ax1.plot(pars['range_t'], rE1, 'b', label='E population') ax1.plot(pars['range_t'], rI1, 'r', label='I population') ax1.set_ylabel('Activity') ax1.legend(loc='best') ax2 = plt.subplot(212, sharex=ax1, sharey=ax1) ax2.plot(pars['range_t'], rE2, 'b', label='E population') ax2.plot(pars['range_t'], rI2, 'r', label='I population') ax2.set_xlabel('t (ms)') ax2.set_ylabel('Activity') ax2.legend(loc='best') plt.tight_layout() plt.show() def plot_nullclines(Exc_null_rE, Exc_null_rI, Inh_null_rE, Inh_null_rI): plt.figure() plt.plot(Exc_null_rE, Exc_null_rI, 'b', label='E nullcline') plt.plot(Inh_null_rE, Inh_null_rI, 'r', label='I nullcline') plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') plt.legend(loc='best') plt.show() def my_plot_nullcline(pars): Exc_null_rE = np.linspace(-0.01, 0.96, 100) Exc_null_rI = get_E_nullcline(Exc_null_rE, **pars) Inh_null_rI = np.linspace(-.01, 0.8, 100) Inh_null_rE = get_I_nullcline(Inh_null_rI, **pars) plt.plot(Exc_null_rE, Exc_null_rI, 'b', label='E nullcline') plt.plot(Inh_null_rE, Inh_null_rI, 'r', label='I nullcline') plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') plt.legend(loc='best') def my_plot_vector(pars, my_n_skip=2, myscale=5): EI_grid = np.linspace(0., 1., 20) rE, rI = np.meshgrid(EI_grid, EI_grid) drEdt, drIdt = EIderivs(rE, rI, **pars) n_skip = my_n_skip plt.quiver(rE[::n_skip, ::n_skip], rI[::n_skip, ::n_skip], drEdt[::n_skip, ::n_skip], drIdt[::n_skip, ::n_skip], angles='xy', scale_units='xy', scale=myscale, facecolor='c') plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') def my_plot_trajectory(pars, mycolor, x_init, mylabel): pars = pars.copy() pars['rE_init'], pars['rI_init'] = x_init[0], x_init[1] rE_tj, rI_tj = simulate_wc(**pars) plt.plot(rE_tj, rI_tj, color=mycolor, label=mylabel) plt.plot(x_init[0], x_init[1], 'o', color=mycolor, ms=8) plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') def my_plot_trajectories(pars, dx, n, mylabel): """ Solve for I along the E_grid from dE/dt = 0. 
Expects: pars : Parameter dictionary dx : increment of initial values n : n*n trjectories mylabel : label for legend Returns: figure of trajectory """ pars = pars.copy() for ie in range(n): for ii in range(n): pars['rE_init'], pars['rI_init'] = dx * ie, dx * ii rE_tj, rI_tj = simulate_wc(**pars) if (ie == n-1) & (ii == n-1): plt.plot(rE_tj, rI_tj, 'gray', alpha=0.8, label=mylabel) else: plt.plot(rE_tj, rI_tj, 'gray', alpha=0.8) plt.xlabel(r'$r_E$') plt.ylabel(r'$r_I$') def plot_complete_analysis(pars): plt.figure(figsize=(7.7, 6.)) # plot example trajectories my_plot_trajectories(pars, 0.2, 6, 'Sample trajectories \nfor different init. conditions') my_plot_trajectory(pars, 'orange', [0.6, 0.8], 'Sample trajectory for \nlow activity') my_plot_trajectory(pars, 'm', [0.6, 0.6], 'Sample trajectory for \nhigh activity') # plot nullclines my_plot_nullcline(pars) # plot vector field EI_grid = np.linspace(0., 1., 20) rE, rI = np.meshgrid(EI_grid, EI_grid) drEdt, drIdt = EIderivs(rE, rI, **pars) n_skip = 2 plt.quiver(rE[::n_skip, ::n_skip], rI[::n_skip, ::n_skip], drEdt[::n_skip, ::n_skip], drIdt[::n_skip, ::n_skip], angles='xy', scale_units='xy', scale=5., facecolor='c') plt.legend(loc=[1.02, 0.57], handlelength=1) plt.show() def plot_fp(x_fp, position=(0.02, 0.1), rotation=0): plt.plot(x_fp[0], x_fp[1], 'ko', ms=8) plt.text(x_fp[0] + position[0], x_fp[1] + position[1], f'Fixed Point1=\n({x_fp[0]:.3f}, {x_fp[1]:.3f})', horizontalalignment='center', verticalalignment='bottom', rotation=rotation)_____no_output_____# @title Helper Functions def default_pars(**kwargs): pars = {} # Excitatory parameters pars['tau_E'] = 1. # Timescale of the E population [ms] pars['a_E'] = 1.2 # Gain of the E population pars['theta_E'] = 2.8 # Threshold of the E population # Inhibitory parameters pars['tau_I'] = 2.0 # Timescale of the I population [ms] pars['a_I'] = 1.0 # Gain of the I population pars['theta_I'] = 4.0 # Threshold of the I population # Connection strength pars['wEE'] = 9. # E to E pars['wEI'] = 4. # I to E pars['wIE'] = 13. # E to I pars['wII'] = 11. # I to I # External input pars['I_ext_E'] = 0. pars['I_ext_I'] = 0. # simulation parameters pars['T'] = 50. # Total duration of simulation [ms] pars['dt'] = .1 # Simulation time step [ms] pars['rE_init'] = 0.2 # Initial value of E pars['rI_init'] = 0.2 # Initial value of I # External parameters if any for k in kwargs: pars[k] = kwargs[k] # Vector of discretized time points [ms] pars['range_t'] = np.arange(0, pars['T'], pars['dt']) return pars def F(x, a, theta): """ Population activation function, F-I curve Args: x : the population input a : the gain of the function theta : the threshold of the function Returns: f : the population activation response f(x) for input x """ # add the expression of f = F(x) f = (1 + np.exp(-a * (x - theta)))**-1 - (1 + np.exp(a * theta))**-1 return f def dF(x, a, theta): """ Derivative of the population activation function. Args: x : the population input a : the gain of the function theta : the threshold of the function Returns: dFdx : Derivative of the population activation function. """ dFdx = a * np.exp(-a * (x - theta)) * (1 + np.exp(-a * (x - theta)))**-2 return dFdx_____no_output_____ </code> The helper functions included: - Parameter dictionary: `default_pars(**kwargs)`. You can use: - `pars = default_pars()` to get all the parameters, and then you can execute `print(pars)` to check these parameters. 
- `pars = default_pars(T=T_sim, dt=time_step)` to set a different simulation time and time step - After `pars = default_pars()`, use `par['New_para'] = value` to add a new parameter with its value - Pass to functions that accept individual parameters with `func(**pars)` - F-I curve: `F(x, a, theta)` - Derivative of the F-I curve: `dF(x, a, theta)`_____no_output_____--- # Section 1: Wilson-Cowan model of excitatory and inhibitory populations _____no_output_____ <code> # @title Video 1: Phase analysis of the Wilson-Cowan E-I model from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="BV1CD4y1m7dK", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="GCpQmh45crM", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out)_____no_output_____ </code> This video explains how to model a network with interacting populations of excitatory and inhibitory neurons (the Wilson-Cowan model). It shows how to solve the network activity vs. time and introduces the phase plane in two dimensions._____no_output_____## Section 1.1: Mathematical description of the WC model *Estimated timing to here from start of tutorial: 12 min* <details> <summary> <font color='blue'>Click here for text recap of relevant part of video </font></summary> Many of the rich dynamics recorded in the brain are generated by the interaction of excitatory and inhibitory subtype neurons. Here, similar to what we did in the previous tutorial, we will model two coupled populations of E and I neurons (**Wilson-Cowan** model). We can write two coupled differential equations, each representing the dynamics of the excitatory or inhibitory population: \begin{align} \tau_E \frac{dr_E}{dt} &= -r_E + F_E(w_{EE}r_E -w_{EI}r_I + I^{\text{ext}}_E;a_E,\theta_E)\\ \tau_I \frac{dr_I}{dt} &= -r_I + F_I(w_{IE}r_E -w_{II}r_I + I^{\text{ext}}_I;a_I,\theta_I) \qquad (1) \end{align} $r_E(t)$ represents the average activation (or firing rate) of the excitatory population at time $t$, and $r_I(t)$ the activation (or firing rate) of the inhibitory population. The parameters $\tau_E$ and $\tau_I$ control the timescales of the dynamics of each population. Connection strengths are given by: $w_{EE}$ (E $\rightarrow$ E), $w_{EI}$ (I $\rightarrow$ E), $w_{IE}$ (E $\rightarrow$ I), and $w_{II}$ (I $\rightarrow$ I). The terms $w_{EI}$ and $w_{IE}$ represent connections from inhibitory to excitatory population and vice versa, respectively. The transfer functions (or F-I curves) $F_E(x;a_E,\theta_E)$ and $F_I(x;a_I,\theta_I)$ can be different for the excitatory and the inhibitory populations. 
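One detail of the helper `F` defined above is worth calling out: the second term, $(1+\text{e}^{a\theta})^{-1}$, shifts the sigmoid so that zero input gives exactly zero output, and the response saturates at $1-(1+\text{e}^{a\theta})^{-1}$ rather than at 1. A quick sanity-check cell (our addition, not part of the official tutorial):_____no_output_____ <code> pars = default_pars()

# With the offset term, a population receiving no input stays silent: F(0) = 0
print(F(0, pars['a_E'], pars['theta_E']))
print(F(0, pars['a_I'], pars['theta_I']))

# For large input the response saturates just below 1
print(F(100, pars['a_E'], pars['theta_E']))_____no_output_____ </code>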
_____no_output_____### Coding Exercise 1.1: Plot out the F-I curves for the E and I populations Let's first plot out the F-I curves for the E and I populations using the helper function `F` with default parameter values._____no_output_____ <code> help(F)_____no_output_____pars = default_pars() x = np.arange(0, 10, .1) print(pars['a_E'], pars['theta_E']) print(pars['a_I'], pars['theta_I']) ################################################################### # TODO for students: compute and plot the F-I curve here # # Note: aE, thetaE, aI and theta_I are in the dictionray 'pars' # raise NotImplementedError('student exercise: compute F-I curves of excitatory and inhibitory populations') ################################################################### # Compute the F-I curve of the excitatory population FI_exc = ... # Compute the F-I curve of the inhibitory population FI_inh = ... # Visualize plot_FI_EI(x, FI_exc, FI_inh)_____no_output_____ </code> [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_043dd600.py) *Example output:* <img alt='Solution hint' align='left' width=1120.0 height=832.0 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W2D4_DynamicNetworks/static/W2D4_Tutorial2_Solution_043dd600_1.png> _____no_output_____## Section 1.2: Simulation scheme for the Wilson-Cowan model *Estimated timing to here from start of tutorial: 20 min* Once again, we can integrate our equations numerically. Using the Euler method, the dynamics of E and I populations can be simulated on a time-grid of stepsize $\Delta t$. The updates for the activity of the excitatory and the inhibitory populations can be written as: \begin{align} r_E[k+1] &= r_E[k] + \Delta r_E[k]\\ r_I[k+1] &= r_I[k] + \Delta r_I[k] \end{align} with the increments \begin{align} \Delta r_E[k] &= \frac{\Delta t}{\tau_E}[-r_E[k] + F_E(w_{EE}r_E[k] -w_{EI}r_I[k] + I^{\text{ext}}_E[k];a_E,\theta_E)]\\ \Delta r_I[k] &= \frac{\Delta t}{\tau_I}[-r_I[k] + F_I(w_{IE}r_E[k] -w_{II}r_I[k] + I^{\text{ext}}_I[k];a_I,\theta_I)] \end{align}_____no_output_____### Coding Exercise 1.2: Numerically integrate the Wilson-Cowan equations We will implemenent this numerical simulation of our equations and visualize two simulations with similar initial points._____no_output_____ <code> def simulate_wc(tau_E, a_E, theta_E, tau_I, a_I, theta_I, wEE, wEI, wIE, wII, I_ext_E, I_ext_I, rE_init, rI_init, dt, range_t, **other_pars): """ Simulate the Wilson-Cowan equations Args: Parameters of the Wilson-Cowan model Returns: rE, rI (arrays) : Activity of excitatory and inhibitory populations """ # Initialize activity arrays Lt = range_t.size rE = np.append(rE_init, np.zeros(Lt - 1)) rI = np.append(rI_init, np.zeros(Lt - 1)) I_ext_E = I_ext_E * np.ones(Lt) I_ext_I = I_ext_I * np.ones(Lt) # Simulate the Wilson-Cowan equations for k in range(Lt - 1): ######################################################################## # TODO for students: compute drE and drI and remove the error raise NotImplementedError("Student exercise: compute the change in E/I") ######################################################################## # Calculate the derivative of the E population drE = ... # Calculate the derivative of the I population drI = ... 
# Update using Euler's method rE[k + 1] = rE[k] + drE rI[k + 1] = rI[k] + drI return rE, rI pars = default_pars() # Simulate first trajectory rE1, rI1 = simulate_wc(**default_pars(rE_init=.32, rI_init=.15)) # Simulate second trajectory rE2, rI2 = simulate_wc(**default_pars(rE_init=.33, rI_init=.15)) # Visualize my_test_plot(pars['range_t'], rE1, rI1, rE2, rI2)_____no_output_____ </code> [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_15eff812.py) *Example output:* <img alt='Solution hint' align='left' width=1120.0 height=832.0 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W2D4_DynamicNetworks/static/W2D4_Tutorial2_Solution_15eff812_0.png> _____no_output_____The two plots above show the temporal evolution of excitatory ($r_E$, blue) and inhibitory ($r_I$, red) activity for two different sets of initial conditions._____no_output_____### Interactive Demo 1.2: population trajectories with different initial values In this interactive demo, we will simulate the Wilson-Cowan model and plot the trajectories of each population. We change the initial activity of the excitatory population. What happens to the E and I population trajectories with different initial conditions? _____no_output_____ <code> # @title # @markdown Make sure you execute this cell to enable the widget! def plot_EI_diffinitial(rE_init=0.0): pars = default_pars(rE_init=rE_init, rI_init=.15) rE, rI = simulate_wc(**pars) plt.figure() plt.plot(pars['range_t'], rE, 'b', label='E population') plt.plot(pars['range_t'], rI, 'r', label='I population') plt.xlabel('t (ms)') plt.ylabel('Activity') plt.legend(loc='best') plt.show() _ = widgets.interact(plot_EI_diffinitial, rE_init=(0.30, 0.35, .01))_____no_output_____ </code> [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_50331264.py) _____no_output_____### Think! 1.2 It is evident that the steady states of the neuronal response can be different when different initial states are chosen. Why is that? We will discuss this in the next section but try to think about it first._____no_output_____--- # Section 2: Phase plane analysis *Estimated timing to here from start of tutorial: 45 min* Just like we used a graphical method to study the dynamics of a 1-D system in the previous tutorial, here we will learn a graphical approach called **phase plane analysis** to study the dynamics of a 2-D system like the Wilson-Cowan model. You have seen this before in the [pre-reqs calculus day](https://compneuro.neuromatch.io/tutorials/W0D4_Calculus/student/W0D4_Tutorial3.html#section-3-2-phase-plane-plot-and-nullcline) and on the [Linear Systems day](https://compneuro.neuromatch.io/tutorials/W2D2_LinearSystems/student/W2D2_Tutorial1.html#section-4-stream-plots) So far, we have plotted the activities of the two populations as a function of time, i.e., in the `Activity-t` plane, either the $(t, r_E(t))$ plane or the $(t, r_I(t))$ one. Instead, we can plot the two activities $r_E(t)$ and $r_I(t)$ against each other at any time point $t$. This characterization in the `rI-rE` plane $(r_I(t), r_E(t))$ is called the **phase plane**. 
Each line in the phase plane indicates how both $r_E$ and $r_I$ evolve with time._____no_output_____ <code> # @title Video 2: Nullclines and Vector Fields from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="BV15k4y1m7Kt", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="V2SBAK2Xf8Y", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out)_____no_output_____ </code> ## Interactive Demo 2: From the Activity - time plane to the **$r_I$ - $r_E$** phase plane In this demo, we will visualize the system dynamics using both the `Activity-time` and the `(rE, rI)` phase plane. The circles indicate the activities at a given time $t$, while the lines represent the evolution of the system for the entire duration of the simulation. Move the time slider to better understand how the top plot relates to the bottom plot. Does the bottom plot have explicit information about time? What information does it give us?_____no_output_____ <code> # @title # @markdown Make sure you execute this cell to enable the widget! pars = default_pars(T=10, rE_init=0.6, rI_init=0.8) rE, rI = simulate_wc(**pars) def plot_activity_phase(n_t): plt.figure(figsize=(8, 5.5)) plt.subplot(211) plt.plot(pars['range_t'], rE, 'b', label=r'$r_E$') plt.plot(pars['range_t'], rI, 'r', label=r'$r_I$') plt.plot(pars['range_t'][n_t], rE[n_t], 'bo') plt.plot(pars['range_t'][n_t], rI[n_t], 'ro') plt.axvline(pars['range_t'][n_t], 0, 1, color='k', ls='--') plt.xlabel('t (ms)', fontsize=14) plt.ylabel('Activity', fontsize=14) plt.legend(loc='best', fontsize=14) plt.subplot(212) plt.plot(rE, rI, 'k') plt.plot(rE[n_t], rI[n_t], 'ko') plt.xlabel(r'$r_E$', fontsize=18, color='b') plt.ylabel(r'$r_I$', fontsize=18, color='r') plt.tight_layout() plt.show() _ = widgets.interact(plot_activity_phase, n_t=(0, len(pars['range_t']) - 1, 1))_____no_output_____ </code> [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_5d1fcb72.py) _____no_output_____## Section 2.1: Nullclines of the Wilson-Cowan Equations *Estimated timing to here from start of tutorial: 1 hour, 3 min* An important concept in the phase plane analysis is the "nullcline" which is defined as the set of points in the phase plane where the activity of one population (but not necessarily the other) does not change. In other words, the $E$ and $I$ nullclines of Equation $(1)$ are defined as the points where $\displaystyle{\frac{dr_E}{dt}}=0$, for the excitatory nullcline, or $\displaystyle\frac{dr_I}{dt}=0$ for the inhibitory nullcline. 
That is: \begin{align} -r_E + F_E(w_{EE}r_E -w_{EI}r_I + I^{\text{ext}}_E;a_E,\theta_E) &= 0 \qquad (2)\\[1mm] -r_I + F_I(w_{IE}r_E -w_{II}r_I + I^{\text{ext}}_I;a_I,\theta_I) &= 0 \qquad (3) \end{align}_____no_output_____### Coding Exercise 2.1: Compute the nullclines of the Wilson-Cowan model In the next exercise, we will compute and plot the nullclines of the E and I population. _____no_output_____Along the nullcline of excitatory population Equation $2$, you can calculate the inhibitory activity by rewriting Equation $2$ into \begin{align} r_I = \frac{1}{w_{EI}}\big{[}w_{EE}r_E - F_E^{-1}(r_E; a_E,\theta_E) + I^{\text{ext}}_E \big{]}. \qquad(4) \end{align} where $F_E^{-1}(r_E; a_E,\theta_E)$ is the inverse of the excitatory transfer function (defined below). Equation $4$ defines the $r_E$ nullcline._____no_output_____Along the nullcline of inhibitory population Equation $3$, you can calculate the excitatory activity by rewriting Equation $3$ into \begin{align} r_E = \frac{1}{w_{IE}} \big{[} w_{II}r_I + F_I^{-1}(r_I;a_I,\theta_I) - I^{\text{ext}}_I \big{]}. \qquad (5) \end{align} shere $F_I^{-1}(r_I; a_I,\theta_I)$ is the inverse of the inhibitory transfer function (defined below). Equation $5$ defines the $I$ nullcline._____no_output_____Note that, when computing the nullclines with Equations 4-5, we also need to calculate the inverse of the transfer functions. \\ The inverse of the sigmoid shaped **f-I** function that we have been using is: $$F^{-1}(x; a, \theta) = -\frac{1}{a} \ln \left[ \frac{1}{x + \displaystyle \frac{1}{1+\text{e}^{a\theta}}} - 1 \right] + \theta \qquad (6)$$ The first step is to implement the inverse transfer function:_____no_output_____ <code> def F_inv(x, a, theta): """ Args: x : the population input a : the gain of the function theta : the threshold of the function Returns: F_inverse : value of the inverse function """ ######################################################################### # TODO for students: compute F_inverse raise NotImplementedError("Student exercise: compute the inverse of F(x)") ######################################################################### # Calculate Finverse (ln(x) can be calculated as np.log(x)) F_inverse = ... return F_inverse # Set parameters pars = default_pars() x = np.linspace(1e-6, 1, 100) # Get inverse and visualize plot_FI_inverse(x, a=1, theta=3)_____no_output_____ </code> [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_f3500f59.py) *Example output:* <img alt='Solution hint' align='left' width=1116.0 height=828.0 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W2D4_DynamicNetworks/static/W2D4_Tutorial2_Solution_f3500f59_1.png> _____no_output_____Now you can compute the nullclines, using Equations 4-5 (repeated here for ease of access): \begin{align} r_I = \frac{1}{w_{EI}}\big{[}w_{EE}r_E - F_E^{-1}(r_E; a_E,\theta_E) + I^{\text{ext}}_E \big{]}. \qquad(4) \end{align} \begin{align} r_E = \frac{1}{w_{IE}} \big{[} w_{II}r_I + F_I^{-1}(r_I;a_I,\theta_I) - I^{\text{ext}}_I \big{]}. \qquad (5) \end{align}_____no_output_____ <code> def get_E_nullcline(rE, a_E, theta_E, wEE, wEI, I_ext_E, **other_pars): """ Solve for rI along the rE from drE/dt = 0. 
Args: rE : response of excitatory population a_E, theta_E, wEE, wEI, I_ext_E : Wilson-Cowan excitatory parameters Other parameters are ignored Returns: rI : values of inhibitory population along the nullcline on the rE """ ######################################################################### # TODO for students: compute rI for rE nullcline and disable the error raise NotImplementedError("Student exercise: compute the E nullcline") ######################################################################### # calculate rI for E nullclines on rI rI = ... return rI def get_I_nullcline(rI, a_I, theta_I, wIE, wII, I_ext_I, **other_pars): """ Solve for E along the rI from dI/dt = 0. Args: rI : response of inhibitory population a_I, theta_I, wIE, wII, I_ext_I : Wilson-Cowan inhibitory parameters Other parameters are ignored Returns: rE : values of the excitatory population along the nullcline on the rI """ ######################################################################### # TODO for students: compute rI for rE nullcline and disable the error raise NotImplementedError("Student exercise: compute the I nullcline") ######################################################################### # calculate rE for I nullclines on rI rE = ... return rE # Set parameters pars = default_pars() Exc_null_rE = np.linspace(-0.01, 0.96, 100) Inh_null_rI = np.linspace(-.01, 0.8, 100) # Compute nullclines Exc_null_rI = get_E_nullcline(Exc_null_rE, **pars) Inh_null_rE = get_I_nullcline(Inh_null_rI, **pars) # Visualize plot_nullclines(Exc_null_rE, Exc_null_rI, Inh_null_rE, Inh_null_rI)_____no_output_____ </code> [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_db10856b.py) *Example output:* <img alt='Solution hint' align='left' width=1120.0 height=832.0 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W2D4_DynamicNetworks/static/W2D4_Tutorial2_Solution_db10856b_0.png> _____no_output_____Note that by definition along the blue line in the phase-plane spanned by $r_E, r_I$, $\displaystyle{\frac{dr_E(t)}{dt}} = 0$, therefore, it is called a nullcline. That is, the blue nullcline divides the phase-plane spanned by $r_E, r_I$ into two regions: on one side of the nullcline $\displaystyle{\frac{dr_E(t)}{dt}} > 0$ and on the other side $\displaystyle{\frac{dr_E(t)}{dt}} < 0$. The same is true for the red line along which $\displaystyle{\frac{dr_I(t)}{dt}} = 0$. That is, the red nullcline divides the phase-plane spanned by $r_E, r_I$ into two regions: on one side of the nullcline $\displaystyle{\frac{dr_I(t)}{dt}} > 0$ and on the other side $\displaystyle{\frac{dr_I(t)}{dt}} < 0$. _____no_output_____## Section 2.2: Vector field *Estimated timing to here from start of tutorial: 1 hour, 20 min* How can the phase plane and the nullcline curves help us understand the behavior of the Wilson-Cowan model? <details> <summary> <font color='blue'>Click here for text recap of relevant part of video </font></summary> The activities of the $E$ and $I$ populations $r_E(t)$ and $r_I(t)$ at each time point $t$ correspond to a single point in the phase plane, with coordinates $(r_E(t),r_I(t))$. 
Therefore, the time-dependent trajectory of the system can be described as a continuous curve in the phase plane, and the tangent vector to the trajectory, which is defined as the vector $\bigg{(}\displaystyle{\frac{dr_E(t)}{dt},\frac{dr_I(t)}{dt}}\bigg{)}$, indicates the direction towards which the activity is evolving and how fast is the activity changing along each axis. In fact, for each point $(E,I)$ in the phase plane, we can compute the tangent vector $\bigg{(}\displaystyle{\frac{dr_E}{dt},\frac{dr_I}{dt}}\bigg{)}$, which indicates the behavior of the system when it traverses that point. The map of tangent vectors in the phase plane is called **vector field**. The behavior of any trajectory in the phase plane is determined by i) the initial conditions $(r_E(0),r_I(0))$, and ii) the vector field $\bigg{(}\displaystyle{\frac{dr_E(t)}{dt},\frac{dr_I(t)}{dt}}\bigg{)}$. In general, the value of the vector field at a particular point in the phase plane is represented by an arrow. The orientation and the size of the arrow reflect the direction and the norm of the vector, respectively._____no_output_____### Coding Exercise 2.2: Compute and plot the vector field $\displaystyle{\Big{(}\frac{dr_E}{dt}, \frac{dr_I}{dt} \Big{)}}$ Note that \begin{align} \frac{dr_E}{dt} &= [-r_E + F_E(w_{EE}r_E -w_{EI}r_I + I^{\text{ext}}_E;a_E,\theta_E)]\frac{1}{\tau_E}\\ \frac{dr_I}{dt} &= [-r_I + F_I(w_{IE}r_E -w_{II}r_I + I^{\text{ext}}_I;a_I,\theta_I)]\frac{1}{\tau_I} \end{align}_____no_output_____ <code> def EIderivs(rE, rI, tau_E, a_E, theta_E, wEE, wEI, I_ext_E, tau_I, a_I, theta_I, wIE, wII, I_ext_I, **other_pars): """Time derivatives for E/I variables (dE/dt, dI/dt).""" ###################################################################### # TODO for students: compute drEdt and drIdt and disable the error raise NotImplementedError("Student exercise: compute the vector field") ###################################################################### # Compute the derivative of rE drEdt = ... # Compute the derivative of rI drIdt = ... return drEdt, drIdt # Create vector field using EIderivs plot_complete_analysis(default_pars())_____no_output_____ </code> [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_92ba9d03.py) *Example output:* <img alt='Solution hint' align='left' width=1071.0 height=832.0 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W2D4_DynamicNetworks/static/W2D4_Tutorial2_Solution_92ba9d03_0.png> _____no_output_____ The last phase plane plot shows us that: - Trajectories seem to follow the direction of the vector field - Different trajectories eventually always reach one of two points depending on the initial conditions. - The two points where the trajectories converge are the intersection of the two nullcline curves. _____no_output_____## Think! 2.2: Analyzing the vector field There are, in total, three intersection points, meaning that the system has three fixed points. 1. One of the fixed points (the one in the middle) is never the final state of a trajectory. Why is that? 2. Why the arrows tend to get smaller as they approach the fixed points?_____no_output_____[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial2_Solution_a5cd9b5e.py) _____no_output_____--- # Summary *Estimated timing of tutorial: 1 hour, 35 minutes* Congratulations! 
You have finished the fourth day of the second week of the Neuromatch Academy! Here, you learned how to simulate a rate-based model consisting of excitatory and inhibitory populations of neurons. In the last tutorial on dynamical neuronal networks you learned to: - Implement and simulate a 2D system composed of an E and an I population of neurons using the **Wilson-Cowan** model - Plot the frequency-current (F-I) curves for both populations - Examine the behavior of the system using **phase plane analysis**, **vector fields**, and **nullclines**. Do you have more time? Have you finished early? We have more fun material for you! In the bonus tutorial, there are some more advanced concepts on dynamical systems: - You will learn how to find the fixed points of such a system, and to investigate its stability by linearizing its dynamics and examining the **Jacobian matrix**. - You will identify conditions under which the Wilson-Cowan model can exhibit oscillations. If you need even more, there are two applications of the Wilson-Cowan model: - Visualization of an Inhibition-stabilized network - Simulation of working memory_____no_output_____
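For reference, here is a compact, self-contained sketch of the simulation described in this tutorial: our own summary implementation of Equation (1) with the Euler updates from Section 1.2 and the default parameter values from `default_pars` (it is not an official part of the tutorial and omits the plotting helpers):_____no_output_____ <code> import numpy as np

def F(x, a, theta):
    """Sigmoid F-I curve, shifted so that F(0) = 0."""
    return 1 / (1 + np.exp(-a * (x - theta))) - 1 / (1 + np.exp(a * theta))

def simulate_wilson_cowan(T=50., dt=0.1, rE_init=0.2, rI_init=0.2,
                          tau_E=1., a_E=1.2, theta_E=2.8,
                          tau_I=2., a_I=1.0, theta_I=4.0,
                          wEE=9., wEI=4., wIE=13., wII=11.,
                          I_ext_E=0., I_ext_I=0.):
    """Euler integration of the Wilson-Cowan equations (Equation 1)."""
    t = np.arange(0, T, dt)
    rE = np.zeros_like(t)
    rI = np.zeros_like(t)
    rE[0], rI[0] = rE_init, rI_init
    for k in range(len(t) - 1):
        drE = dt / tau_E * (-rE[k] + F(wEE * rE[k] - wEI * rI[k] + I_ext_E, a_E, theta_E))
        drI = dt / tau_I * (-rI[k] + F(wIE * rE[k] - wII * rI[k] + I_ext_I, a_I, theta_I))
        rE[k + 1] = rE[k] + drE
        rI[k + 1] = rI[k] + drI
    return t, rE, rI

t, rE, rI = simulate_wilson_cowan()
print(rE[-1], rI[-1])  # approximate steady-state activities for this initial condition_____no_output_____ </code>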
{ "repository": "janeite/course-content", "path": "tutorials/W2D4_DynamicNetworks/student/W2D4_Tutorial2.ipynb", "matched_keywords": [ "evolution" ], "stars": 2294, "size": 47770, "hexsha": "48790316b55789717495eb518d7c68efafde7ad3", "max_line_length": 798, "avg_line_length": 37.1173271173, "alphanum_fraction": 0.566882981 }
# Notebook from chandrabsingh/learnings Path: cs221_ai/lec06-Search2-Astar.ipynb >>> Work in Progress (Following are the lecture notes of Prof Percy Liang/Prof Dorsa Sadigh - CS221 - Stanford. This is my interpretation of their excellent teaching and I take full responsibility for any misinterpretation/misinformation provided herein.)_____no_output_____## Lecture 6: Search 2 - A* | Stanford CS221 _____no_output_____### Uniform Cost Search - Last Lecture - util function - PriorityQueue <img src="images/06_uniformCostAlgoUtil.png" width=400 height=400> $\tiny{\text{YouTube-Stanford-CS221-Dorsa Sadigh}}$ -----_____no_output_____- algorithm implementation <img src="images/06_uniformCostAlgo.png" width=400 height=400> $\tiny{\text{YouTube-Stanford-CS221-Dorsa Sadigh}}$ -----_____no_output_____- what is the runtime of UCS - $O(n\log n)$ - the log n factor comes from the book-keeping of the PriorityQueue - n is the number of states we have explored_____no_output_____### Why UCS returns the best minimum cost path ### DP vs UCS_____no_output_____### Roadmap - Learning costs - 3rd Paradigm - A* search - ways of making search faster - Relaxation - type of strategy_____no_output_____### Learning - Transportation Search - Transportation example - Start state: 1 - Walk action: from s to s+1 - Tram action: from s to 2s - End state: n - Solution - we learned earlier how to find the best path from state x to state y, given the costs - but here we don't know the costs themselves - we need to learn the costs of walking and of taking the tram - we know the path/trajectory based on data - but we don't know the cost function that was being optimized to produce that path - this is learning - this cost function can then be applied, say, to a robot _____no_output_____### Learning as an inverse problem - Search is a forward problem - given a cost(s,a), we find the sequence of actions - Learning is an inverse problem - given the sequence of actions, find the cost(s,a) - input x: search problem w/o costs - output y: solution path - w will be the weights - w[a1] = w[walk] - w[a2] = w[tram] - the w's are the costs of taking each action - say the current estimates are a walking cost of 3 and a tram cost of 2 - we update these values until the minimum-cost path under w matches the observed path - w[a1] = w[walk] = 3 - w[a2] = w[tram] = 2 - y (observed optimal solution) - walk, walk, walk - the cost is 3+3+3 = 9 - y' (predicted solution) - walk, tram - the cost is 3+2 = 5 - so the search would pick the predicted path instead of the observed one - first go over the actions on the observed path y and lower w[walk] -> 3 -> 2 -> 1 -> 0 - then go over the actions on the predicted path y' and increase those values, so w[walk] -> 0 -> 1 (and w[tram] -> 2 -> 3) - Repeat this and see if it converges <img src="images/06_learningProblem.png" width=400 height=400> $\tiny{\text{YouTube-Stanford-CS221-Dorsa Sadigh}}$ -----_____no_output_____<img src="images/06_learningProblemOnBoard.png" width=400 height=400> $\tiny{\text{YouTube-Stanford-CS221-Dorsa Sadigh}}$ -----_____no_output_____### How to compute the cost?
- the cost of a path is the sum of the costs w[a] of the actions along it - the learning procedure above is called the __Structured Perceptron__ <img src="images/06_learningProblemOnBoard2.png" width=400 height=400> $\tiny{\text{YouTube-Stanford-CS221-Dorsa Sadigh}}$ ----- _____no_output_____### Structured Perceptron - Algo in class <img src="images/06_structuredPerceptronAlgo.png" width=400 height=400> $\tiny{\text{YouTube-Stanford-CS221-Dorsa Sadigh}}$ ----- _____no_output_____- Update w by subtracting the feature counts over the true path and adding the feature counts over the predicted path (see the short code sketch at the end of these notes) - This is Collins' algorithm - he used it in NLP - to assign part-of-speech tags to a sentence, moving the scores up or down - Fruit flies like a banana -> Noun Noun Verb Det Noun - The same idea can be used in machine translation - Beam search - up-weight or down-weight based on the training data - la maison bleue -> the blue house <img src="images/06_structuredPerceptronModified.png" width=400 height=400> $\tiny{\text{YouTube-Stanford-CS221-Dorsa Sadigh}}$ ----- _____no_output_____### A* search - making things faster - the goal is similar to UCS, but be smarter and move in the direction of the goal state - we don't have access to FutureCost(s) - but we do have access to a _heuristic h(s)_, which is an estimate of _FutureCost(s)_ - this heuristic lets the search be more directed <img src="images/06_heuristicFun.png" width=400 height=400> $\tiny{\text{YouTube-Stanford-CS221-Dorsa Sadigh}}$ ----- _____no_output_____- if I am at state s, take an action a to a successor Succ(s,a), and am trying to reach $s_{end}$ - the modified edge cost adds h(Succ(s,a)), the estimated future cost from the successor to $s_{end}$, and subtracts h(s), the estimate from s itself - so the modified cost penalizes actions that move away from the end state - how much this helps depends on how well the h function is designed <img src="images/06_heuristicFun2.png" width=400 height=400> $\tiny{\text{YouTube-Stanford-CS221-Dorsa Sadigh}}$ ----- _____no_output_____### Consistent heuristics - must satisfy the triangle inequality > - Cost'(s,a) = Cost(s,a) + h(Succ(s,a)) - h(s) >= 0 > - $h(s_{end}) = 0$ <img src="images/06_heuConsistent.png" width=400 height=400> $\tiny{\text{YouTube-Stanford-CS221-Dorsa Sadigh}}$ - along any path, the sum of the new costs equals the sum of the old costs minus a constant, namely the heuristic value at the start state $s_{0}$ - so A* is just uniform cost search run on the modified costs - $A^{*}$ is correct only if h is consistent ----- _____no_output_____### Efficiency of A* - A* is more efficient because it does not explore everything but explores in a directed manner - UCS explores all states s which satisfy > PastCost(s) <= PastCost($s_{end}$) - A* explores fewer states, because it does a directed search rather than exploring all states; it only explores states s which satisfy > PastCost(s) <= PastCost($s_{end}$) - h(s) - the larger h(s) is, the fewer states are explored_____no_output_____- A few engineering tweaks for constructing h by creating a relaxed problem - Relaxation - Easier search - Reversed relaxed problem - Independent subproblems_____no_output_____
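A small code sketch of the structured perceptron update on the walk/tram example from these notes (the function and variable names are ours, written to match the notes, not code shown in the lecture):_____no_output_____ <code> def structured_perceptron_step(w, true_path, predicted_path):
    """One update: lower the cost of each action on the true path,
    raise the cost of each action on the predicted (search) path."""
    for a in true_path:
        w[a] -= 1
    for a in predicted_path:
        w[a] += 1
    return w

# Observed path from the data: walk, walk, walk.
# With the current costs the search predicts walk, tram (cost 5 < 9) instead.
w = {'walk': 3, 'tram': 2}
w = structured_perceptron_step(w, ['walk', 'walk', 'walk'], ['walk', 'tram'])
print(w)  # {'walk': 1, 'tram': 3}_____no_output_____ </code>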
{ "repository": "chandrabsingh/learnings", "path": "cs221_ai/lec06-Search2-Astar.ipynb", "matched_keywords": [ "STAR" ], "stars": null, "size": 10958, "hexsha": "487d262e3c6bf33e3885e801089e3b3919467500", "max_line_length": 258, "avg_line_length": 27.3266832918, "alphanum_fraction": 0.5485490053 }
# Notebook from jradavenport/EBHRD Path: notebooks/metric_v2vis.ipynb # Metric v2 vis Based on Metric v1, but now exploring QuadTree binning but now make the viz more "normal", aim for paper/proposals_____no_output_____ <code> %matplotlib inline import numpy as np import pandas as pd import matplotlib.pyplot as plt import matplotlib.cm as cm import matplotlib as mpl from matplotlib.colors import LogNorm import sys sys.path.append('/Users/james/python/qthist2d/') # from QThist import QThist, QTcount from qthist2d import qthist, qtcount_____no_output_____cpunk = False import matplotlib matplotlib.rcParams.update({'font.size':18}) if cpunk: import mplcyberpunk plt.style.use("cyberpunk") else: matplotlib.rcParams.update({'font.family':'serif'})_____no_output_____# smooth both EBs and Stars w/ a 2D Gaussian KDE from sklearn.neighbors import KernelDensity def kde2D(x, y, bandwidth, xmin=-1, xmax = 5.5, ymin= -6, ymax=16, xbins=100j, ybins=100j, **kwargs): """ Build a 2D kernel density estimate (KDE) https://stackoverflow.com/a/41639690 """ # create grid of sample locations (default: 100x100) xx, yy = np.mgrid[xmin:xmax:xbins, ymin:ymax:ybins] xy_sample = np.vstack([yy.ravel(), xx.ravel()]).T xy_train = np.vstack([y, x]).T kde_skl = KernelDensity(bandwidth=bandwidth, **kwargs) kde_skl.fit(xy_train) # score_samples() returns the log-likelihood of the samples z = np.exp(kde_skl.score_samples(xy_sample)) return xx, yy, np.reshape(z, xx.shape)_____no_output_____denominator = pd.read_csv('gaia_tess2min.csv') Dok = ((denominator['parallax'] > 0) & np.isfinite(denominator['bp_rp']) & np.isfinite(denominator['phot_g_mean_mag'])) # do the 2d KDE smoothing thing (slow) xx2, yy2, zz2 = kde2D(denominator['bp_rp'][Dok], denominator['phot_g_mean_mag'][Dok] - 5. * np.log10(1000./denominator['parallax'][Dok]) + 5, 0.1)_____no_output_____EHow = pd.read_csv('Erin_and_Known_EBs.csv') Eok = ((EHow['parallax'] > 0) & np.isfinite(EHow['bp_rp']) & np.isfinite(EHow['phot_g_mean_mag']))_____no_output_____x = EHow['bp_rp'][Eok] y = EHow['phot_g_mean_mag'][Eok] - 5. * np.log10(1000./EHow['parallax'][Eok]) + 5 num, xmin, xmax, ymin, ymax = qthist(x,y, N=7, thresh=3, density=False, rng=[[-1.1,4.1],[-6,15]])_____no_output_____fig = plt.figure(figsize=(7,8)) ax = fig.add_subplot(111) plt.scatter(x,y, s=50, alpha=0.5, rasterized=True) for k in range(len(num)): ax.add_patch(plt.Rectangle((xmin[k], ymin[k]), xmax[k]-xmin[k], ymax[k]-ymin[k], fc ='none', lw=1, alpha=0.5, color='k')) plt.gca().invert_yaxis() plt.ylim(15.5,-6.5) plt.xlabel('$G_{BP} - G_{RP}$ (mag)') plt.ylabel('$M_G$ (mag)') plt.savefig('QT_bins_v2.2.pdf', dpi=300, bbox_inches='tight', pad_inches=0.25)_____no_output_____x2 = denominator['bp_rp'][Dok] y2 = denominator['phot_g_mean_mag'][Dok] - 5. 
* np.log10(1000./denominator['parallax'][Dok]) + 5 # using Quad Tree bins defined from EB sample, count the background sample num2 = qtcount(x2, y2, xmin, xmax, ymin, ymax, density=False) _____no_output_____# do a normal 2d histogram (fast), if not the 2D KDE above # zz2, xx2, yy2, img = plt.hist2d(x2,y2, bins=75) # print(zz2.shape, xx2.shape, yy2.shape)_____no_output_____ </code> # The Score The importance score for EBs within the $i$'th QuadTree bin of the CMD is defined as: $$score_i = 1 - \frac{d_{EB,i} + 1}{ d_{bkgd,i} + 1}$$ where the densities for the EB's are defined using the number of EB's in the bin ($N_{EB,i}$), the area of the bin ($a_i$), and the total number of EBs across the CMD ($\sum N_{EB}$): $$ d_{EB,i} = N_{EB,i}\, / a_i\, / \sum N_{EB} $$ The background star density is defined similarly: $$ d_{bkgd,i} = N_{bkgd,i}\, / a_i\, / \sum N_{bkgd} $$_____no_output_____ <code> def EBscore(num, num2, xmin, xmax, ymin, ymax): ''' compute the Eclipsing Binary rarity score assumes the number counts in each bin are NOT densities ''' areas = np.abs(ymax - ymin) * np.abs(xmax - xmin) d_EB = num / areas / num.sum() + 1 d_bk = num2 / areas / num2.sum() + 1 SCORE = 1 - (d_EB/d_bk) return SCORE*100_____no_output_____# Now the new QT Score approach: # areas =(ymax - ymin) * (xmax - xmin) # SCORE = (1 - (num / areas / num.sum() + 1) / (num2 / areas / num2.sum()+1)) * 100 SCORE = EBscore(num, num2, xmin,xmax,ymin,ymax) _ = plt.hist(SCORE,bins=50)_____no_output_____# clip the distribution, center about SCORE=0... if -np.min(SCORE) > np.max(SCORE): SCORE[-SCORE > np.max(SCORE)] = -np.max(SCORE) if np.max(SCORE) > -np.min(SCORE): SCORE[np.max(SCORE) > -np.min(SCORE)] = -np.min(SCORE) # SCORE[SCORE > 25] = 25 # SCORE[SCORE < -25] = -25 fig = plt.figure(figsize=(7,8)) ax = fig.add_subplot(111) CMAP = plt.cm.PRGn # if doing the 2d Histogram for background (fast, but ratty) # plt.contour((xx2[1:]+xx2[:-1])/2., (yy2[1:]+yy2[:-1])/2., # zz2.T / np.sum(zz2)*np.float(len(Dok)), colors='C0', alpha=0.5, # levels=(3,10,100,500)) # if doing the 2d KDE (slow, but smoother) for background illustration plt.contour(xx2, yy2, zz2/np.sum(zz2)*np.float(len(Dok)), colors='C0', levels=(1,3,10,30,70,100,200), alpha=0.75, linewidths=0.8) # scale SCORE to [0,1] for color codes clr = (SCORE - np.nanmin(SCORE)) / (np.nanmax(SCORE) - np.nanmin(SCORE)) for k in range(len(SCORE)): ax.add_patch(plt.Rectangle((xmin[k], ymin[k]), xmax[k]-xmin[k], ymax[k]-ymin[k], color=CMAP(clr[k]), ec='grey', lw=0.2, )) # create a fake image to show, just to invoke colormap img = plt.imshow(np.array([[clr.min(), clr.max()]]), cmap=CMAP, aspect='auto', vmin=SCORE.min(),vmax=SCORE.max()) # scale to the "SCORE" img.set_visible(False) # throw this away cb = plt.colorbar() cb.set_label('EB Score') plt.gca().invert_yaxis() plt.xlabel('$G_{BP} - G_{RP}$ (mag)') plt.ylabel('$M_G$ (mag)') # plt.grid(True) # plt.xlim(min(x), max(x)) # plt.ylim(max(y), min(y)) plt.ylim(15.5,-6.5) plt.xlim(-1,4) plt.title('TESS EBs (2min Prelim Sample)') plt.savefig('score_v2.2.pdf', dpi=300, bbox_inches='tight', pad_inches=0.25)_____no_output_____# save the arrays to make the above figure, for QThist comparison elsewhere # print(np.shape(num, num2, xmin,xmax,ymin,ymax)) df_out = pd.DataFrame(data={'num_bin':num, 'num_bkg':num2, 'xmin':xmin,'xmax':xmax,'ymin':ymin,'ymax':ymax}) df_out.to_csv('TESS_EB_qthist_v2.2.csv', index=False)_____no_output_____# lets try some simple tests: what if our EBs were drawn # TRULY randomly from the background distribution? 
kr = np.random.randint(0, len(denominator['bp_rp'][Dok]), 2000) xr = x2.values[kr] yr = y2.values[kr] num, xmin, xmax, ymin, ymax = qthist(xr,yr, N=7, thresh=3, density=False, rng=[[-1.1,4.1],[-6,15]]) num2 = qtcount(x2, y2, xmin, xmax, ymin, ymax, density=False) SCORE = EBscore(num, num2, xmin,xmax,ymin,ymax) # clip the distribution, center about SCORE=0... if -np.min(SCORE) > np.max(SCORE): SCORE[-SCORE > np.max(SCORE)] = -np.max(SCORE) if np.max(SCORE) > -np.min(SCORE): SCORE[np.max(SCORE) > -np.min(SCORE)] = -np.min(SCORE) fig = plt.figure(figsize=(7,8)) ax = fig.add_subplot(111) CMAP = plt.cm.PRGn plt.contour(xx2, yy2, zz2/np.sum(zz2)*np.float(len(Dok)), colors='C0', levels=(1,3,10,30,70,100,200), alpha=0.75, linewidths=0.8) # scale SCORE to [0,1] for color codes clr = (SCORE - np.nanmin(SCORE)) / (np.nanmax(SCORE) - np.nanmin(SCORE)) for k in range(len(SCORE)): ax.add_patch(plt.Rectangle((xmin[k], ymin[k]), xmax[k]-xmin[k], ymax[k]-ymin[k], color=CMAP(clr[k]), ec='grey', lw=0.2, )) # create a fake image to show, just to invoke colormap img = plt.imshow(np.array([[clr.min(), clr.max()]]), cmap=CMAP, aspect='auto', vmin=SCORE.min(),vmax=SCORE.max()) # scale to the "SCORE" img.set_visible(False) # throw this away cb = plt.colorbar() cb.set_label('EB Score') plt.gca().invert_yaxis() plt.xlabel('$G_{BP} - G_{RP}$ (mag)') plt.ylabel('$M_G$ (mag)') plt.ylim(15.5,-6.5) plt.xlim(-1,4) plt.title('2K RANDOM STARS') # plt.savefig('score_v2.2.pdf', dpi=300, bbox_inches='tight', pad_inches=0.25) # not a solution - the EBs don't follow the same CMD distribution_____no_output_____# what if 2k stars are randomly drawn, but are systematically brighter # e.g. mostly high mass ratio systems kr = np.random.randint(0, len(denominator['bp_rp'][Dok]), 2000) xr = x2.values[kr] yr = y2.values[kr] - 0.6 num, xmin, xmax, ymin, ymax = qthist(xr,yr, N=7, thresh=3, density=False, rng=[[-1.1,4.1],[-6,15]]) num2 = qtcount(x2, y2, xmin, xmax, ymin, ymax, density=False) SCORE = EBscore(num, num2, xmin,xmax,ymin,ymax) # clip the distribution, center about SCORE=0... if -np.min(SCORE) > np.max(SCORE): SCORE[-SCORE > np.max(SCORE)] = -np.max(SCORE) if np.max(SCORE) > -np.min(SCORE): SCORE[np.max(SCORE) > -np.min(SCORE)] = -np.min(SCORE) fig = plt.figure(figsize=(7,8)) ax = fig.add_subplot(111) CMAP = plt.cm.PRGn plt.contour(xx2, yy2, zz2/np.sum(zz2)*np.float(len(Dok)), colors='C0', levels=(1,3,10,30,70,100,200), alpha=0.75, linewidths=0.8) # scale SCORE to [0,1] for color codes clr = (SCORE - np.nanmin(SCORE)) / (np.nanmax(SCORE) - np.nanmin(SCORE)) for k in range(len(SCORE)): ax.add_patch(plt.Rectangle((xmin[k], ymin[k]), xmax[k]-xmin[k], ymax[k]-ymin[k], color=CMAP(clr[k]), ec='grey', lw=0.2, )) # create a fake image to show, just to invoke colormap img = plt.imshow(np.array([[clr.min(), clr.max()]]), cmap=CMAP, aspect='auto', vmin=SCORE.min(),vmax=SCORE.max()) # scale to the "SCORE" img.set_visible(False) # throw this away cb = plt.colorbar() cb.set_label('EB Score') plt.gca().invert_yaxis() plt.xlabel('$G_{BP} - G_{RP}$ (mag)') plt.ylabel('$M_G$ (mag)') plt.ylim(15.5,-6.5) plt.xlim(-1,4) plt.title('2K RANDOM + OFFSET') # plt.savefig('score_v2.2.pdf', dpi=300, bbox_inches='tight', pad_inches=0.25) # this definitely accounts for SOME of what we see in the actual data, as predicted_____no_output_____SCORE, xmin, xmax, ymin, ymax = qthist(x,y, N=7, thresh=3, density=True, rng=[[-1.1,4.1],[-6,15]]) _ = plt.hist(SCORE,bins=100)_____no_output_____# another easy metric is: where are there simply NO EBs? 
# this is just a histogram, of course, using our fun Quad Tree fig = plt.figure(figsize=(7,8)) ax = fig.add_subplot(111) CMAP = plt.cm.Greens_r # if doing the 2d KDE (slow, but smoother) for background illustration plt.contour(xx2, yy2, zz2/np.sum(zz2)*np.float(len(Dok)), colors='C0', levels=(1,3,10,30,70,100,200), alpha=0.75, linewidths=0.8) SCORE, xmin, xmax, ymin, ymax = qthist(x,y, N=7, thresh=3, density=False, rng=[[-1.1,4.1],[-6,15]]) SCORE[SCORE > 3] = 3 # scale SCORE to [0,1] for color codes clr = (SCORE - np.nanmin(SCORE)) / (np.nanmax(SCORE) - np.nanmin(SCORE)) for k in range(len(SCORE)): ax.add_patch(plt.Rectangle((xmin[k], ymin[k]), xmax[k]-xmin[k], ymax[k]-ymin[k], color=CMAP(clr[k]), ec='grey', lw=0.2, )) img = plt.scatter(xmin, ymin, c=SCORE, cmap=CMAP) img.set_visible(False) # throw this away cb = plt.colorbar(boundaries=np.arange(-.5, 4, 1), ticks = [0,1,2,3]) cb.set_label('EB Count') plt.gca().invert_yaxis() plt.xlabel('$G_{BP} - G_{RP}$ (mag)') plt.ylabel('$M_G$ (mag)') plt.ylim(15.5,-6.5) plt.xlim(-1,4) plt.title('TESS EBs (2min Prelim Sample)') plt.savefig('EBcount.pdf', dpi=300, bbox_inches='tight', pad_inches=0.25)_____no_output_____ </code>
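As a quick round-trip check of the bin file saved above (a hedged usage sketch: the filename and column names come from the `to_csv` call earlier in this notebook, the rest is ours):_____no_output_____ <code> # Reload the saved QuadTree bins and reproduce the EB score without rerunning qthist
df_bins = pd.read_csv('TESS_EB_qthist_v2.2.csv')

SCORE_check = EBscore(df_bins['num_bin'].values, df_bins['num_bkg'].values,
                      df_bins['xmin'].values, df_bins['xmax'].values,
                      df_bins['ymin'].values, df_bins['ymax'].values)

_ = plt.hist(SCORE_check, bins=50)_____no_output_____ </code>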
{ "repository": "jradavenport/EBHRD", "path": "notebooks/metric_v2vis.ipynb", "matched_keywords": [ "STAR" ], "stars": 3, "size": 704620, "hexsha": "487e25898ed7a184c48a76189abb554eab76e260", "max_line_length": 157436, "avg_line_length": 1149.4616639478, "alphanum_fraction": 0.9564048707 }
# Notebook from jorgemarpa/lightkurve Path: docs/source/tutorials/2-creating-light-curves/2-1-combining-multiple-quarters.ipynb # Combining multiple quarters of *Kepler* data_____no_output_____## Learning Goals By the end of this tutorial, you will: - Understand a *Kepler* Quarter. - Understand how to download multiple quarters of data at once. - Learn how to normalize *Kepler* data. - Understand how to combine multiple quarters of data. _____no_output_____## Introduction_____no_output_____The [*Kepler*](https://archive.stsci.edu/kepler), [*K2*](https://archive.stsci.edu/k2), and [*TESS*](https://archive.stsci.edu/tess) telescopes observe stars for long periods of time. These long, time series observations are broken up into separate chunks, called quarters for the *Kepler* mission, campaigns for *K2*, and sectors for *TESS*. Building light curves with as much data as is available is useful when searching for small signals, such as planetary transits or stellar pulsations. In this tutorial, we will learn how to use Lightkurve's tools to download and stitch together multiple quarters of *Kepler* observations. It is recommended to first read the tutorial discussing how to use *Kepler* light curve products with Lightkurve. That tutorial will introduce you to some specifics of how *Kepler*, *K2*, and *TESS* make observations, and how these are displayed as light curves. It also introduces some important terms and concepts that are referred to in this tutorial. This tutorial demonstrates how to access and combine multiple quarters of data from the *Kepler* space telescope, using the Lightkurve package. When accessing *Kepler* data through MAST, it will be stored in three-month chunks, corresponding to a quarter of observations. By combining and normalizing these separate observations, you can form a single light curve that spans all observed quarters. Utilizing all of the data available is especially important when looking at repeating signals, such as planet transits and stellar oscillations. We will use the *Kepler* mission as an example, but these tools are extensible to *TESS* and *K2* as well._____no_output_____## Imports This tutorial requires the [**Lightkurve**](http://docs.lightkurve.org/) package, which in turn uses `matplotlib` for plotting._____no_output_____ <code> import lightkurve as lk %matplotlib inline_____no_output_____ </code> ## 1. What is a *Kepler* Quarter?_____no_output_____In order to search for planets around other stars, the *Kepler* space telescope performed near-continuous monitoring of a single field of view, from an Earth-trailing orbit. However, this posed a challenge. If the space telescope is trailing Earth and maintaining steady pointing, its solar panels would slowly receive less and less sunlight. In order to make sure the solar panels remained oriented towards the Sun, *Kepler* performed quarterly rolls, one every 93 days. The infographic below helps visualize this, and shows the points in the orbit where the rolls took place. After each roll, *Kepler* retained its fine-pointing at the same field of view. Because the camera rotated by 90 degrees, all of the target stars fell on different parts of the charge-coupled device (CCD) camera. This had an effect on the amount of flux recorded for the same star, because different CCD pixels have different sensitivities. The way in which the flux from the same stars was distributed on the CCD (called the point spread function or PSF) also changed after each roll, due to focus changes and other instrumental effects. 
As a result, the aperture mask set for a star had to be recomputed after each roll, and may capture slightly different amounts of flux. The data obtained between rolls is referred to as a quarter. While there are changes to the flux *systematics*, not much else changes quarter to quarter, and the majority of the target list remains identical. This means that, after removing systematic trends (such as was done for the presearch data conditioning simple aperture photometry (PDCSAP) flux), multiple quarters together can form one continuous observation. <!-- ![](https://keplergo.arc.nasa.gov/images/program/Orbit_Mar5_09L.gif) --> <img src="https://upload.wikimedia.org/wikipedia/commons/thumb/8/84/Kepler_space_telescope_orbit.png/800px-Kepler_space_telescope_orbit.png" width="800"> *Figure*: Infographic showcasing the necessity of *Kepler*'s quarterly rolls and its Earth-trailing orbit. Source: [Kepler Science Center](https://keplergo.arc.nasa.gov/ExtendedMissionOverview.shtml)._____no_output_____**Note**: Observations by *K2* and *TESS* are also broken down into chunks of a month or more, called campaigns (for *K2*) and sectors (for *TESS*). While not discussed in this tutorial, the tools below work for these data products as well._____no_output_____## 2. Downloading Multiple `KeplerLightCurve` Objects at Once_____no_output_____To start, we can use Lightkurve's [search_lightcurve()](https://docs.lightkurve.org/reference/api/lightkurve.search_lightcurve.html?highlight=search_lightcurve) function to see what data are available for our target star on the [Mikulski Archive for Space Telescopes](https://archive.stsci.edu/kepler/) (MAST) archive. We will use the star [Kepler-8](http://www.openexoplanetcatalogue.com/planet/Kepler-8%20b/), a star somewhat larger than the Sun, and the host of a [hot Jupiter planet](https://en.wikipedia.org/wiki/Hot_Jupiter). _____no_output_____ <code> search_result = lk.search_lightcurve("Kepler-8", author="Kepler", cadence="long") search_result_____no_output_____ </code> In this list, each row represents a different observing quarter, for a total of 18 quarters across four years. The **observation** column lists the *Kepler* Quarter. The **target_name** represents the *Kepler* Input Catalogue (KIC) ID of the target, and the **productFilename** column is the name of the FITS files downloaded from MAST. The **distance** column shows the separation on the sky between the searched coordinates and the downloaded objects — this is only relevant when searching for specific coordinates in the sky, and not when looking for individual objects. Instead of downloading a single quarter using the [download()](https://docs.lightkurve.org/reference/api/lightkurve.SearchResult.download.html?highlight=download#lightkurve.SearchResult.download) function, we can use the [download_all()](https://docs.lightkurve.org/reference/api/lightkurve.SearchResult.download_all.html?highlight=download_all) function to access all 18 quarters at once (this might take a while)._____no_output_____ <code> lc_collection = search_result.download_all() lc_collection_____no_output_____ </code> All of the downloaded data are stored in a `LightCurveCollection`. This object acts as a wrapper for 18 separate `KeplerLightCurve` objects, listed above. 
We can access the `KeplerLightCurve` objects and interact with them as usual through the `LightCurveCollection`._____no_output_____ <code> lc_Q4 = lc_collection[4] lc_Q4_____no_output_____lc_Q4.plot();_____no_output_____ </code> #### Note: The example given above also works for downloading target pixel files (TPFs). This will produce a `TargetPixelFileCollection` object instead._____no_output_____## 3. Investigating the Data_____no_output_____Let's first have a look at how these observations differ from one another. We can plot the simple aperture photometry (SAP) flux of all of the observations in the [`LightCurveCollection`](https://docs.lightkurve.org/api/lightkurve.collections.LightCurveCollection.html#lightkurve.collections.LightCurveCollection) to see how they compare._____no_output_____ <code> ax = lc_collection[0].plot(column='sap_flux', label=None) for lc in lc_collection[1:]: lc.plot(ax=ax, column='sap_flux', label=None)_____no_output_____ </code> In the figure above, each quarter of data looks strikingly different, with global patterns repeating every four quarters as *Kepler* has made a full rotation. The change in flux within each quarter is in part driven by changes in the telescope focus, which are caused by changes in the temperature of *Kepler*'s components as the spacecraft orbits the Sun. The changes are also caused by an effect called *differential velocity aberration* (DVA), which causes stars to drift over the course of a quarter, depending on their distance from the center of *Kepler*'s field of view. While the figure above looks messy, all the systematic effects mentioned above are well understood, and have been detrended in the PDCSAP flux. For a more detailed overview, see the [*Kepler* Data Characteristics Handbook](https://archive.stsci.edu/files/live/sites/mast/files/home/missions-and-data/kepler/_documents/Data_Characteristics.pdf), specifically: *Section 5. Ongoing Phenomena*._____no_output_____## 4. Normalizing a Light Curve_____no_output_____If we want to see the actual variation of the targeted object over the course of these observations, the plot above isn't very useful to us. It is also not useful to have flux expressed in physical units, because it is affected by the observing conditions such as telescope focus and pointing (see above). Instead, it is a common practice to normalize light curves by dividing by their median value. This means that the median of the newly normalized light curve will be equal to 1, and that the relative size of signals in the observation (such as transits) will be maintained. A normalization can be performed using the [normalize()](https://docs.lightkurve.org/reference/api/lightkurve.LightCurve.normalize.html?highlight=normalize#lightkurve.LightCurve.normalize) method of a `KeplerLightCurve`, for example:_____no_output_____ <code> lc_collection[4].normalize().plot();_____no_output_____ </code> In the figure above, we have plotted the normalized PDCSAP flux for Quarter 4. The median normalized flux is at 1, and the transit depths lie around 0.991, indicating a 0.9% dip in brightness due to the planet transiting the star._____no_output_____The `LightCurveCollection` also has a [plot()](https://docs.lightkurve.org/reference/api/lightkurve.FoldedLightCurve.plot.html?highlight=plot#lightkurve.FoldedLightCurve.plot) method. We can use it to plot the PDCSAP flux. 
The method automatically normalizes the flux in same way we did for a single quarter above._____no_output_____ <code> lc_collection.plot();_____no_output_____ </code> As you can see above, because we have normalized the data, all of the observations form a single consistent light curve._____no_output_____## 5. Combining Multiple Observations into a Single Light Curve_____no_output_____Finally, we can combine these different light curves into a single `KeplerLightCurve` object. This is done using the `stitch()` method. This method concatenates all quarters in our `LightCurveCollection` together, and normalizes them at the same time, in the manner we saw above._____no_output_____ <code> lc_stitched = lc_collection.stitch() lc_stitched_____no_output_____ </code> This returns a single `KeplerLightCurve`! It is in all ways identical to `KeplerLightCurve` of a single quarter, just longer. We can plot it the usual way._____no_output_____ <code> lc_stitched.plot();_____no_output_____ </code> In this final normalized light curve, the interesting observational features of the star are more clear. Specifically: repeating transits that can be used to [characterize planets](https://docs.lightkurve.org/tutorials/02-recover-a-planet.html) and a noisy stellar flux that can be used to study brightness variability through [asteroseismology](http://docs.lightkurve.org/tutorials/02-asteroseismology.html)._____no_output_____Normalizing individual *Kepler* Quarters before combining them to form a single light curve isn't the only way to make sure different quarters are consistent with one another. For a breakdown of other available methods and their benefits, see *Section 6. Stitching Kepler Quarters Together* in [Kinemuchi et al. 2012](https://arxiv.org/pdf/1207.3093.pdf)._____no_output_____## About this Notebook_____no_output_____**Authors:** Oliver Hall ([email protected]), Geert Barentsen **Updated On**: 2020-09-15_____no_output_____## Citing Lightkurve and Astropy If you use `lightkurve` or `astropy` for published research, please cite the authors. Click the buttons below to copy BibTeX entries to your clipboard._____no_output_____ <code> lk.show_citation_instructions()_____no_output_____ </code> <img style="float: right;" src="https://raw.githubusercontent.com/spacetelescope/notebooks/master/assets/stsci_pri_combo_mark_horizonal_white_bkgd.png" alt="Space Telescope Logo" width="200px"/> _____no_output_____
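As an optional, illustrative extension of Section 5, the stitched light curve can be folded on the planet's orbital period so that the repeated transits line up in phase. The period used below (about 3.52 days for Kepler-8 b) is an assumed, approximate value and is not derived in this tutorial._____no_output_____
 <code>
# Assumed, approximate orbital period of Kepler-8 b in days (illustrative only)
period_days = 3.52

# Fold the stitched light curve on the assumed period; the repeated transits
# then overlap in phase, which makes the transit shape easy to inspect
folded = lc_stitched.remove_outliers(sigma=5).fold(period=period_days)
folded.scatter();_____no_output_____
</code>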
{ "repository": "jorgemarpa/lightkurve", "path": "docs/source/tutorials/2-creating-light-curves/2-1-combining-multiple-quarters.ipynb", "matched_keywords": [ "STAR" ], "stars": null, "size": 23643, "hexsha": "487f3237653005709e20679298c565c1e00f1983", "max_line_length": 684, "avg_line_length": 34.8716814159, "alphanum_fraction": 0.6423042761 }
# Notebook from marcusvlc/PySyft Path: examples/tutorials/advanced/websockets-example-MNIST-parallel/Asynchronous-federated-learning-on-MNIST.ipynb # Tutorial: Asynchronous federated learning on MNIST This notebook will go through the steps to run a federated learning via websocket workers in an asynchronous way using [TrainConfig](https://github.com/OpenMined/PySyft/blob/dev/examples/tutorials/advanced/Federated%20Learning%20with%20TrainConfig/Introduction%20to%20TrainConfig.ipynb). We will use federated averaging to join the remotely trained models. Authors: - Silvia - GitHub [@midokura-silvia](https://github.com/midokura-silvia)_____no_output_____ <code> %load_ext autoreload %autoreload 2 import inspect_____no_output_____ </code> ## Federated Learning setup For a Federated Learning setup with TrainConfig we need different participants: * _Workers_: own datasets. * _Coordinator_: an entity that knows the workers and the dataset name that lives in each worker. * _Evaluator_: holds the testing data and tracks model performance Each worker is represented by two parts, a proxy local to the scheduler (websocket client worker) and the remote instance that holds the data and performs the computations. The remote part is called a websocket server worker._____no_output_____## Preparation: Start the websocket workers So first, we need to create the remote workers. For this, you need to run in a terminal (not possible from the notebook): ```bash python start_websocket_servers.py ``` #### What's going on? The script will instantiate three workers, Alice, Bob and Charlie and prepare their local data. Each worker is set up to have a subset of the MNIST training dataset. Alice holds all images corresponding to the digits 0-3, Bob holds all images corresponding to the digits 4-6 and Charlie holds all images corresponding to the digits 7-9. | Worker | Digits in local dataset | Number of samples | | ----------- | ----------------------- | ----------------- | | Alice | 0-3 | 24754 | | Bob | 4-6 | 17181 | | Charlie | 7-9 | 18065 | The evaluator will be called Testing and holds the entire MNIST testing dataset. | Evaluator | Digits in local dataset | Number of samples | | ----------- | ----------------------- | ----------------- | | Testing | 0-9 | 10000 | _____no_output_____ <code> # uncomment the following to see the code of the function that starts a worker # import run_websocket_server # print(inspect.getsource(run_websocket_server.start_websocket_server_worker))_____no_output_____ </code> Before continuing let's first need to import dependencies, setup needed arguments and configure logging._____no_output_____ <code> # Dependencies import sys import asyncio import syft as sy from syft.workers.websocket_client import WebsocketClientWorker from syft.frameworks.torch.fl import utils import torch from torchvision import datasets, transforms import numpy as np import run_websocket_client as rwc Falling back to insecure randomness since the required custom op could not be found for the installed version of TensorFlow. Fix this by compiling custom ops. 
Missing file was '/home/george/.conda/envs/pysyft-contrib/lib/python3.7/site-packages/tf_encrypted/operations/secure_random/secure_random_module_tf_1.15.0.so' # Hook torch hook = sy.TorchHook(torch)_____no_output_____# Arguments args = rwc.define_and_get_arguments(args=[]) use_cuda = args.cuda and torch.cuda.is_available() torch.manual_seed(args.seed) device = torch.device("cuda" if use_cuda else "cpu") print(args)Namespace(batch_size=32, cuda=False, federate_after_n_batches=10, lr=0.1, save_model=False, seed=1, test_batch_size=128, training_rounds=40, verbose=False) # Configure logging import logging logger = logging.getLogger("run_websocket_client") if not len(logger.handlers): FORMAT = "%(asctime)s - %(message)s" DATE_FMT = "%H:%M:%S" formatter = logging.Formatter(FORMAT, DATE_FMT) handler = logging.StreamHandler() handler.setFormatter(formatter) logger.addHandler(handler) logger.propagate = False LOG_LEVEL = logging.DEBUG logger.setLevel(LOG_LEVEL)_____no_output_____ </code> Now let's instantiate the websocket client workers, our local proxies to the remote workers. Note that **this step will fail, if the websocket server workers are not running**. The workers Alice, Bob and Charlie will perform the training, wheras the testing worker hosts the test data and performs the evaluation._____no_output_____ <code> kwargs_websocket = {"host": "0.0.0.0", "hook": hook, "verbose": args.verbose} alice = WebsocketClientWorker(id="alice", port=8777, **kwargs_websocket) bob = WebsocketClientWorker(id="bob", port=8778, **kwargs_websocket) charlie = WebsocketClientWorker(id="charlie", port=8779, **kwargs_websocket) testing = WebsocketClientWorker(id="testing", port=8780, **kwargs_websocket) worker_instances = [alice, bob, charlie]_____no_output_____ </code> ## Setting up the training_____no_output_____### Model Let's instantiate the machine learning model. It is a small neural network with 2 convolutional and two fully connected layers. It uses ReLU activations and max pooling._____no_output_____ <code> print(inspect.getsource(rwc.Net))class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(1, 20, 5, 1) self.conv2 = nn.Conv2d(20, 50, 5, 1) self.fc1 = nn.Linear(4 * 4 * 50, 500) self.fc2 = nn.Linear(500, 10) def forward(self, x): x = F.relu(self.conv1(x)) x = F.max_pool2d(x, 2, 2) x = F.relu(self.conv2(x)) x = F.max_pool2d(x, 2, 2) x = x.view(-1, 4 * 4 * 50) x = F.relu(self.fc1(x)) x = self.fc2(x) return F.log_softmax(x, dim=1) model = rwc.Net().to(device) print(model)Net( (conv1): Conv2d(1, 20, kernel_size=(5, 5), stride=(1, 1)) (conv2): Conv2d(20, 50, kernel_size=(5, 5), stride=(1, 1)) (fc1): Linear(in_features=800, out_features=500, bias=True) (fc2): Linear(in_features=500, out_features=10, bias=True) ) </code> #### Making the model serializable In order to send the model to the workers we need the model to be serializable, for this we use [`jit`](https://pytorch.org/docs/stable/jit.html)._____no_output_____ <code> traced_model = torch.jit.trace(model, torch.zeros([1, 1, 28, 28], dtype=torch.float))_____no_output_____ </code> ### Let's start the training Now we are ready to start the federated training. We will perform training over a given number of batches separately on each worker and then calculate the federated average of the resulting model. Every 10th training round we will evaluate the performance of the models returned by the workers and of the model obtained by federated averaging. 
The performance will be given both as the accuracy (ratio of correct predictions) and as the histograms of predicted digits. This is of interest, as each worker only owns a subset of the digits. Therefore, in the beginning each worker will only predict their numbers and only know about the other numbers via the federated averaging process. The training is done in an asynchronous manner. This means that the scheduler just tell the workers to train and does not block to wait for the result of the training before talking to the next worker._____no_output_____The parameters of the training are given in the arguments. Each worker will train on a given number of batches, given by the value of federate_after_n_batches. The training batch size and learning rate are also configured. _____no_output_____ <code> print("Federate_after_n_batches: " + str(args.federate_after_n_batches)) print("Batch size: " + str(args.batch_size)) print("Initial learning rate: " + str(args.lr))Federate_after_n_batches: 10 Batch size: 32 Initial learning rate: 0.1 learning_rate = args.lr traced_model = torch.jit.trace(model, torch.zeros([1, 1, 28, 28], dtype=torch.float)) for curr_round in range(1, args.training_rounds + 1): logger.info("Training round %s/%s", curr_round, args.training_rounds) results = await asyncio.gather( *[ rwc.fit_model_on_worker( worker=worker, traced_model=traced_model, batch_size=args.batch_size, curr_round=curr_round, max_nr_batches=args.federate_after_n_batches, lr=learning_rate, ) for worker in worker_instances ] ) models = {} loss_values = {} test_models = curr_round % 10 == 1 or curr_round == args.training_rounds if test_models: logger.info("Evaluating models") np.set_printoptions(formatter={"float": "{: .0f}".format}) for worker_id, worker_model, _ in results: rwc.evaluate_model_on_worker( model_identifier="Model update " + worker_id, worker=testing, dataset_key="mnist_testing", model=worker_model, nr_bins=10, batch_size=128, print_target_hist=False, ) # Federate models (note that this will also change the model in models[0] for worker_id, worker_model, worker_loss in results: if worker_model is not None: models[worker_id] = worker_model loss_values[worker_id] = worker_loss traced_model = utils.federated_avg(models) if test_models: rwc.evaluate_model_on_worker( model_identifier="Federated model", worker=testing, dataset_key="mnist_testing", model=traced_model, nr_bins=10, batch_size=128, print_target_hist=False, ) # decay learning rate learning_rate = max(0.98 * learning_rate, args.lr * 0.01) if args.save_model: torch.save(model.state_dict(), "mnist_cnn.pt")20:59:41 - Training round 1/40 20:59:47 - Evaluating models 20:59:48 - Model update alice: Average loss: 0.0291, Accuracy: 2657/10000 (26.57%) 20:59:50 - Model update bob: Average loss: 0.0316, Accuracy: 959/10000 (9.59%) 20:59:52 - Model update charlie: Average loss: 0.0399, Accuracy: 974/10000 (9.74%) 20:59:54 - Federated model: Average loss: 0.0174, Accuracy: 1517/10000 (15.17%) 20:59:54 - Training round 2/40 20:59:59 - Training round 3/40 21:00:05 - Training round 4/40 21:00:10 - Training round 5/40 21:00:16 - Training round 6/40 21:00:21 - Training round 7/40 21:00:26 - Training round 8/40 21:00:31 - Training round 9/40 21:00:36 - Training round 10/40 21:00:43 - Training round 11/40 21:00:50 - Evaluating models 21:00:52 - Model update alice: Average loss: 0.0083, Accuracy: 6361/10000 (63.61%) 21:00:54 - Model update bob: Average loss: 0.0146, Accuracy: 5044/10000 (50.44%) 21:00:56 - Model update charlie: Average loss: 0.0134, 
Accuracy: 4240/10000 (42.40%) 21:00:58 - Federated model: Average loss: 0.0032, Accuracy: 8809/10000 (88.09%) 21:00:58 - Training round 12/40 21:01:05 - Training round 13/40 21:01:11 - Training round 14/40 21:01:18 - Training round 15/40 21:01:23 - Training round 16/40 21:01:29 - Training round 17/40 21:01:36 - Training round 18/40 21:01:41 - Training round 19/40 21:01:47 - Training round 20/40 21:01:52 - Training round 21/40 21:01:58 - Evaluating models 21:02:00 - Model update alice: Average loss: 0.0060, Accuracy: 7458/10000 (74.58%) 21:02:01 - Model update bob: Average loss: 0.0082, Accuracy: 6867/10000 (68.67%) 21:02:04 - Model update charlie: Average loss: 0.0103, Accuracy: 6153/10000 (61.53%) 21:02:06 - Federated model: Average loss: 0.0016, Accuracy: 9395/10000 (93.95%) 21:02:06 - Training round 22/40 21:02:14 - Training round 23/40 21:02:22 - Training round 24/40 21:02:29 - Training round 25/40 21:02:35 - Training round 26/40 21:02:40 - Training round 27/40 21:02:47 - Training round 28/40 21:02:52 - Training round 29/40 21:02:58 - Training round 30/40 21:03:04 - Training round 31/40 21:03:09 - Evaluating models 21:03:11 - Model update alice: Average loss: 0.0073, Accuracy: 7188/10000 (71.88%) 21:03:13 - Model update bob: Average loss: 0.0064, Accuracy: 7283/10000 (72.83%) 21:03:14 - Model update charlie: Average loss: 0.0060, Accuracy: 7112/10000 (71.12%) 21:03:16 - Federated model: Average loss: 0.0012, Accuracy: 9573/10000 (95.73%) 21:03:16 - Training round 32/40 21:03:21 - Training round 33/40 21:03:26 - Training round 34/40 21:03:32 - Training round 35/40 21:03:37 - Training round 36/40 21:03:42 - Training round 37/40 21:03:48 - Training round 38/40 21:03:53 - Training round 39/40 21:03:58 - Training round 40/40 21:04:05 - Evaluating models 21:04:07 - Model update alice: Average loss: 0.0027, Accuracy: 8856/10000 (88.56%) 21:04:08 - Model update bob: Average loss: 0.0041, Accuracy: 7967/10000 (79.67%) 21:04:10 - Model update charlie: Average loss: 0.0073, Accuracy: 6698/10000 (66.98%) 21:04:12 - Federated model: Average loss: 0.0010, Accuracy: 9596/10000 (95.96%) </code> After 40 rounds of training we achieve an accuracy larger than 95% on the entire testing dataset. This is impressing, given that no worker has access to more than 4 digits!!_____no_output_____# Congratulations!!! - Time to Join the Community! Congratulations on completing this notebook tutorial! If you enjoyed this and would like to join the movement toward privacy preserving, decentralized ownership of AI and the AI supply chain (data), you can do so in the following ways! ### Star PySyft on GitHub The easiest way to help our community is just by starring the GitHub repos! This helps raise awareness of the cool tools we're building. - [Star PySyft](https://github.com/OpenMined/PySyft) ### Join our Slack! The best way to keep up to date on the latest advancements is to join our community! You can do so by filling out the form at [http://slack.openmined.org](http://slack.openmined.org) ### Join a Code Project! The best way to contribute to our community is to become a code contributor! At any time you can go to PySyft GitHub Issues page and filter for "Projects". This will show you all the top level Tickets giving an overview of what projects you can join! If you don't want to join a project, but you would like to do a bit of coding, you can also look for more "one off" mini-projects by searching for GitHub issues marked "good first issue". 
- [PySyft Projects](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3AProject) - [Good First Issue Tickets](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) ### Donate If you don't have time to contribute to our codebase, but would still like to lend support, you can also become a Backer on our Open Collective. All donations go toward our web hosting and other community expenses such as hackathons and meetups! [OpenMined's Open Collective Page](https://opencollective.com/openmined)_____no_output_____
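The federated averaging used in the training loop above is provided by `utils.federated_avg`. As a minimal, plain-PyTorch sketch of the underlying idea (this is not the PySyft implementation), the parameters of several identically shaped models can be averaged as follows; the `federated_average` function and the toy `Linear` models are illustrative only._____no_output_____
 <code>
import copy
import torch

def federated_average(models):
    # Average the parameters of several models that share the same architecture
    averaged = copy.deepcopy(models[0])
    avg_state = averaged.state_dict()
    for key in avg_state:
        # Stack the corresponding tensors from every worker model and take the mean
        avg_state[key] = torch.stack(
            [m.state_dict()[key].float() for m in models]
        ).mean(dim=0)
    averaged.load_state_dict(avg_state)
    return averaged

# Toy usage: three identical models standing in for the worker updates
worker_models = [torch.nn.Linear(4, 2) for _ in range(3)]
global_model = federated_average(worker_models)_____no_output_____
</code>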
{ "repository": "marcusvlc/PySyft", "path": "examples/tutorials/advanced/websockets-example-MNIST-parallel/Asynchronous-federated-learning-on-MNIST.ipynb", "matched_keywords": [ "STAR" ], "stars": 2, "size": 21121, "hexsha": "487fc03becf572fa2040ac12e6e6eff4d0b728e2", "max_line_length": 453, "avg_line_length": 37.6488413547, "alphanum_fraction": 0.5806543251 }
# Notebook from clij/clijpy Path: python/benchmark_clijx_pull.ipynb <code> # init pyimage to get access to jar files import imagej ij = imagej.init('C:/programs/fiji-win64/Fiji.app/') _____no_output_____# load some image data from skimage import io # sk_img = io.imread('https://samples.fiji.sc/blobs.png') sk_img = io.imread('https://bds.mpi-cbg.de/CLIJ_benchmarking_data/000301.raw.tif') # init clijpy to get access to the GPU from jnius import autoclass CLIJx = autoclass('net.haesleinhuepf.clijx.CLIJx') clijx = CLIJx.getInstance();_____no_output_____def showImages(img1, img2): vals = np.linspace(0,1,256) np.random.shuffle(vals) # show the input and the result image from matplotlib import pyplot as plt cmap = plt.cm.colors.ListedColormap(plt.cm.jet(vals)) plt.subplot(121) plt.imshow(img1) plt.subplot(122) plt.imshow(img2, cmap=cmap, vmin=0, vmax=65) plt.show()_____no_output_____# convert an array to an ImageJ2 img: import numpy as np np_arr = np.array(sk_img) ij_img = ij.py.to_java(np_arr) # push the image to the GPU input8 = clijx.push(ij_img) # convert it to Float input = clijx.create(input8.getDimensions()) # generates a Float image # actual conversion clijx.copy(input8, input)_____no_output_____# reserve memory for output, same size and type as input blurred = clijx.create(input); thresholded = clijx.create(input); labelled = clijx.create(input); labelled_without_edges = clijx.create(input);_____no_output_____# blur, threshold and label the image clijx.blur(input, blurred, 5, 5, 0); clijx.automaticThreshold(blurred, thresholded, "Otsu"); clijx.connectedComponentsLabeling(thresholded, labelled); clijx.excludeLabelsOnEdges(labelled, labelled_without_edges);_____no_output_____# pull image back from GPU RandomAccessibleInterval = autoclass('net.imglib2.RandomAccessibleInterval'); ij_img_result = clijx.convert(labelled_without_edges, RandomAccessibleInterval); imageplus_result = clijx.pull(labelled_without_edges); # # convert to numpy/python import time def getTime(): return int(round(time.time() * 1000)) def clijx_pull(buffer): import numpy numpy_image = numpy.zeros([buffer.getWidth(), buffer.getHeight(), buffer.getDepth()]) wrapped = ij.py.to_java(numpy_image); clijx.pullToRAI(buffer, wrapped); # see https://github.com/clij/clij-advanced-filters/blob/master/src/main/java/net/haesleinhuepf/clijx/CLIJx.java#L87 return numpy_image def my_rai_to_numpy(rai): result = ij.py.new_numpy_image(rai) CopyRAI = autoclass('net.imagej.ops.copy.CopyRAI'); ij.py._ij.op().run(CopyRAI, ij.py.to_java(result), rai) return result num_iterations = 10; way1timings = np.zeros(num_iterations); way2timings = np.zeros(num_iterations); way3timings = np.zeros(num_iterations); for it in range(0, num_iterations): ######################################################### # Way 1 millis = getTime(); np_arr_result = clijx_pull(labelled_without_edges); way1timings[it] = (getTime() - millis) ######################################################### # Way 2 ij_img_result = clijx.convert(labelled_without_edges, RandomAccessibleInterval); # we exclude clij routines for benchmarking conversion if possible millis = getTime(); np_arr_result = ij.py.rai_to_numpy(ij_img_result); way2timings[it] = (getTime() - millis) ######################################################### # Way 3 ij_img_result = clijx.convert(labelled_without_edges, RandomAccessibleInterval); # we exclude clij routines for benchmarking conversion if possible millis = getTime(); np_arr_result = my_rai_to_numpy(ij_img_result); way3timings[it] = (getTime() - millis) 
######################################################### print("Way 1: " + str(np.mean(way1timings)) + " +- " + str(np.std(way1timings))); print("Way 2: " + str(np.mean(way2timings)) + " +- " + str(np.std(way2timings))); print("Way 3: " + str(np.mean(way3timings)) + " +- " + str(np.std(way3timings))); # show the input and the result image showImages(np_arr, np_arr_result);Way 1: 610.1 +- 20.290145391297717 Way 2: 218.5 +- 118.87998149394204 Way 3: 211.4 +- 90.92106466600576 # clean up input.close(); blurred.close(); thresholded.close(); labelled.close(); labelled_without_edges.close();_____no_output_____ </code>
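As a side note on the benchmarking above, the manual `getTime()` bookkeeping could be replaced by a small context-manager timer. The sketch below is purely illustrative and uses only the Python standard library; the commented usage would need to run before the clean-up cell, while the GPU buffers still exist._____no_output_____
 <code>
import time
from contextlib import contextmanager

@contextmanager
def timer(label, results):
    # Collect elapsed wall-clock time in milliseconds under the given label
    start = time.perf_counter()
    yield
    results.setdefault(label, []).append((time.perf_counter() - start) * 1000.0)

# Illustrative usage with one of the pull variants benchmarked above
# (left commented because it needs the GPU buffers created earlier):
# timings = {}
# for _ in range(10):
#     with timer("clijx_pull", timings):
#         clijx_pull(labelled_without_edges)
# print({k: (np.mean(v), np.std(v)) for k, v in timings.items()})_____no_output_____
</code>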
{ "repository": "clij/clijpy", "path": "python/benchmark_clijx_pull.ipynb", "matched_keywords": [ "ImageJ" ], "stars": 12, "size": 25370, "hexsha": "48805680e2ecedb07e838ee1c973d400740215d2", "max_line_length": 3452, "avg_line_length": 82.3701298701, "alphanum_fraction": 0.6886874261 }
# Notebook from bioexcel/biobb_wf_md_setup_api Path: biobb_wf_md_setup_api/notebooks/biobb_MDsetupAPI_tutorial.ipynb # Protein MD Setup tutorial using BioExcel Building Blocks (biobb) through REST API **Based on the official GROMACS tutorial:** [http://www.mdtutorials.com/gmx/lysozyme/index.html](http://www.mdtutorials.com/gmx/lysozyme/index.html) *** This tutorial aims to illustrate the process of **setting up a simulation system** containing a **protein**, step by step, using the **BioExcel Building Blocks (biobb) [REST API](https://mmb.irbbarcelona.org/biobb-api)**. The particular example used is the **Lysozyme** protein (PDB code 1AKI). *** ## Settings ### Auxiliar libraries used - [requests](https://pypi.org/project/requests/): Requests allows you to send *organic, grass-fed* HTTP/1.1 requests, without the need for manual labor. - [nb_conda_kernels](https://github.com/Anaconda-Platform/nb_conda_kernels): Enables a Jupyter Notebook or JupyterLab application in one conda environment to access kernels for Python, R, and other languages found in other environments. - [nglview](http://nglviewer.org/#nglview): Jupyter/IPython widget to interactively view molecular structures and trajectories in notebooks. - [ipywidgets](https://github.com/jupyter-widgets/ipywidgets): Interactive HTML widgets for Jupyter notebooks and the IPython kernel. - [plotly](https://plot.ly/python/offline/): Python interactive graphing library integrated in Jupyter notebooks. - [simpletraj](https://github.com/arose/simpletraj): Lightweight coordinate-only trajectory reader based on code from GROMACS, MDAnalysis and VMD. ### Conda Installation and Launch ```console git clone https://github.com/bioexcel/biobb_wf_md_setup_api.git cd biobb_wf_md_setup_api conda env create -f conda_env/environment.yml conda activate biobb_MDsetupAPI_tutorial jupyter-nbextension enable --py --user widgetsnbextension jupyter-nbextension enable --py --user nglview jupyter-notebook biobb_wf_md_setup_api/notebooks/biobb_MDsetupAPI_tutorial.ipynb ``` *** ## Pipeline steps 1. [Input Parameters](#input) 2. [Fetching PDB Structure](#fetch) 3. [Fix Protein Structure](#fix) 4. [Create Protein System Topology](#top) 5. [Create Solvent Box](#box) 6. [Fill the Box with Water Molecules](#water) 7. [Adding Ions](#ions) 8. [Energetically Minimize the System](#min) 9. [Equilibrate the System (NVT)](#nvt) 10. [Equilibrate the System (NPT)](#npt) 11. [Free Molecular Dynamics Simulation](#free) 12. [Post-processing and Visualizing Resulting 3D Trajectory](#post) 13. [Output Files](#output) 14. [Questions & Comments](#questions) *** <img src="https://bioexcel.eu/wp-content/uploads/2019/04/Bioexcell_logo_1080px_transp.png" alt="Bioexcel2 logo" title="Bioexcel2 logo" width="400" /> *** _____no_output_____<a id="input"></a> ## Input parameters **Input parameters** needed: - **pdbCode**: PDB code of the protein structure (e.g. 1AKI) - **apiURL**: Base URL for the Biobb REST API ([https://mmb.irbbarcelona.org/biobb-api/rest/v1/](https://mmb.irbbarcelona.org/biobb-api/rest/v1/)) Additionally, the **utils** library is loaded. This library contains global functions that are used for sending and retrieving data to / from the REST API. 
[Click here](https://mmb.irbbarcelona.org/biobb-api/tutorial) for more information about how the BioBB REST API works and which is the purpose for each of these functions._____no_output_____ <code> import nglview import ipywidgets from utils import * pdbCode = "1AKI" apiURL = "https://mmb.irbbarcelona.org/biobb-api/rest/v1/" _____no_output_____ </code> <a id="fetch"></a> *** ## Fetching PDB structure Downloading **PDB structure** with the **protein molecule** from the RCSB PDB database.<br> Alternatively, a **PDB file** can be used as starting structure. <br> *** **BioBB REST API** end points used: - [PDB](https://mmb.irbbarcelona.org/biobb-api/rest/v1/launch/biobb_io/pdb) from **biobb_io.api.pdb** ***_____no_output_____ <code> # Downloading desired PDB file # Create properties dict and inputs/outputs downloaded_pdb = pdbCode + '.pdb' prop = { 'pdb_code': pdbCode } # Launch bb on REST API token = launch_job(url = apiURL + 'launch/biobb_io/pdb', config = prop, output_pdb_path = downloaded_pdb)_____no_output_____# Check job status out_files = check_job(token, apiURL)_____no_output_____# Save generated file to disk retrieve_data(out_files, apiURL)_____no_output_____ </code> <a id="vis3D"></a> ### Visualizing 3D structure Visualizing the downloaded/given **PDB structure** using **NGL**: _____no_output_____ <code> # Show protein view = nglview.show_structure_file(downloaded_pdb) view.add_representation(repr_type='ball+stick', selection='all') view._remote_call('setSize', target='Widget', args=['','600px']) view_____no_output_____ </code> <a id="fix"></a> *** ## Fix protein structure **Checking** and **fixing** (if needed) the protein structure:<br> - **Modeling** **missing side-chain atoms**, modifying incorrect **amide assignments**, choosing **alternative locations**.<br> - **Checking** for missing **backbone atoms**, **heteroatoms**, **modified residues** and possible **atomic clashes**. *** **BioBB REST API** end points used: - [FixSideChain](https://mmb.irbbarcelona.org/biobb-api/rest/v1/launch/biobb_model/fix_side_chain) from **biobb_model.model.fix_side_chain** ***_____no_output_____ <code> # Check & Fix PDB # Create inputs/outputs fixed_pdb = pdbCode + '_fixed.pdb' # Launch bb on REST API token = launch_job(url = apiURL + 'launch/biobb_model/fix_side_chain', input_pdb_path = downloaded_pdb, output_pdb_path = fixed_pdb)_____no_output_____# Check job status out_files = check_job(token, apiURL)_____no_output_____# Save generated file to disk retrieve_data(out_files, apiURL)_____no_output_____ </code> ### Visualizing 3D structure Visualizing the fixed **PDB structure** using **NGL**. In this particular example, the checking step didn't find any issue to be solved, so there is no difference between the original structure and the fixed one. _____no_output_____ <code> # Show protein view = nglview.show_structure_file(fixed_pdb) view.add_representation(repr_type='ball+stick', selection='all') view._remote_call('setSize', target='Widget', args=['','600px']) view.camera='orthographic' view_____no_output_____ </code> <a id="top"></a> *** ## Create protein system topology **Building GROMACS topology** corresponding to the protein structure.<br> Force field used in this tutorial is [**amber99sb-ildn**](https://dx.doi.org/10.1002%2Fprot.22711): AMBER **parm99** force field with **corrections on backbone** (sb) and **side-chain torsion potentials** (ildn). 
Water molecules type used in this tutorial is [**spc/e**](https://pubs.acs.org/doi/abs/10.1021/j100308a038).<br> Adding **hydrogen atoms** if missing. Automatically identifying **disulfide bridges**. <br> Generating two output files: - **GROMACS structure** (gro file) - **GROMACS topology** ZIP compressed file containing: - *GROMACS topology top file* (top file) - *GROMACS position restraint file/s* (itp file/s) *** **BioBB REST API** end points used: - [Pdb2gmx](https://mmb.irbbarcelona.org/biobb-api/rest/v1/launch/biobb_md/pdb2gmx) from **biobb_md.gromacs.pdb2gmx** ***_____no_output_____ <code> # Create system topology # Create inputs/outputs output_pdb2gmx_gro = pdbCode + '_pdb2gmx.gro' output_pdb2gmx_top_zip = pdbCode + '_pdb2gmx_top.zip' # Launch bb on REST API token = launch_job(url = apiURL + 'launch/biobb_md/pdb2gmx', input_pdb_path = fixed_pdb, output_gro_path = output_pdb2gmx_gro, output_top_zip_path = output_pdb2gmx_top_zip)_____no_output_____# Check job status out_files = check_job(token, apiURL)_____no_output_____# Save generated file to disk retrieve_data(out_files, apiURL)_____no_output_____ </code> ### Visualizing 3D structure Visualizing the generated **GRO structure** using **NGL**. Note that **hydrogen atoms** were added to the structure by the **pdb2gmx GROMACS tool** when generating the **topology**. _____no_output_____ <code> # Show protein view = nglview.show_structure_file(output_pdb2gmx_gro) view.add_representation(repr_type='ball+stick', selection='all') view._remote_call('setSize', target='Widget', args=['','600px']) view.camera='orthographic' view_____no_output_____ </code> <a id="box"></a> *** ## Create solvent box Define the unit cell for the **protein structure MD system** to fill it with water molecules.<br> A **cubic box** is used to define the unit cell, with a **distance from the protein to the box edge of 1.0 nm**. The protein is **centered in the box**. *** **BioBB REST API** end points used: - [Editconf](https://mmb.irbbarcelona.org/biobb-api/rest/v1/launch/biobb_md/editconf) from **biobb_md.gromacs.editconf** ***_____no_output_____ <code> # Editconf: Create solvent box # Create properties dict and inputs/outputs output_editconf_gro = pdbCode + '_editconf.gro' prop = { 'box_type': 'cubic', 'distance_to_molecule': 1.0 } # Launch bb on REST API token = launch_job(url = apiURL + 'launch/biobb_md/editconf', config = prop, input_gro_path = output_pdb2gmx_gro, output_gro_path = output_editconf_gro)_____no_output_____# Check job status out_files = check_job(token, apiURL)_____no_output_____# Save generated file to disk retrieve_data(out_files, apiURL)_____no_output_____ </code> <a id="water"></a> *** ## Fill the box with water molecules Fill the unit cell for the **protein structure system** with water molecules.<br> The solvent type used is the default **Simple Point Charge water (SPC)**, a generic equilibrated 3-point solvent model. 
*** **BioBB REST API** end points used: - [Solvate](https://mmb.irbbarcelona.org/biobb-api/rest/v1/launch/biobb_md/solvate) from **biobb_md.gromacs.solvate** ***_____no_output_____ <code> # Solvate: Fill the box with water molecules # Create inputs/outputs output_solvate_gro = pdbCode + '_solvate.gro' output_solvate_top_zip = pdbCode + '_solvate_top.zip' # Launch bb on REST API token = launch_job(url = apiURL + 'launch/biobb_md/solvate', input_solute_gro_path = output_editconf_gro, input_top_zip_path = output_pdb2gmx_top_zip, output_gro_path = output_solvate_gro, output_top_zip_path = output_solvate_top_zip)_____no_output_____# Check job status out_files = check_job(token, apiURL)_____no_output_____# Save generated file to disk retrieve_data(out_files, apiURL)_____no_output_____ </code> ### Visualizing 3D structure Visualizing the **protein system** with the newly added **solvent box** using **NGL**.<br> Note the **cubic box** filled with **water molecules** surrounding the **protein structure**, which is **centered** right in the middle of the cube._____no_output_____ <code> # Show protein view = nglview.show_structure_file(output_solvate_gro) view.clear_representations() view.add_representation(repr_type='cartoon', selection='solute', color='green') view.add_representation(repr_type='ball+stick', selection='SOL') view._remote_call('setSize', target='Widget', args=['','600px']) view.camera='orthographic' view_____no_output_____ </code> <a id="ions"></a> *** ## Adding ions Add ions to neutralize the **protein structure** charge - [Step 1](#ionsStep1): Creating portable binary run file for ion generation - [Step 2](#ionsStep2): Adding ions to **neutralize** the system *** **BioBB REST API** end points used: - [Grompp](https://mmb.irbbarcelona.org/biobb-api/rest/v1/launch/biobb_md/grompp) from **biobb_md.gromacs.grompp** - [Genion](https://mmb.irbbarcelona.org/biobb-api/rest/v1/launch/biobb_md/genion) from **biobb_md.gromacs.genion** ***_____no_output_____<a id="ionsStep1"></a> ### Step 1: Creating portable binary run file for ion generation A simple **energy minimization** molecular dynamics parameters (mdp) properties will be used to generate the portable binary run file for **ion generation**, although **any legitimate combination of parameters** could be used in this step._____no_output_____ <code> # Grompp: Creating portable binary run file for ion generation # Create prop dict and inputs/outputs output_gppion_tpr = pdbCode + '_gppion.tpr' prop = { 'simulation_type':'minimization' } # Launch bb on REST API token = launch_job(url = apiURL + 'launch/biobb_md/grompp', config = prop, input_gro_path = output_solvate_gro, input_top_zip_path = output_solvate_top_zip, output_tpr_path = output_gppion_tpr)_____no_output_____# Check job status out_files = check_job(token, apiURL)_____no_output_____# Save generated file to disk retrieve_data(out_files, apiURL)_____no_output_____ </code> <a id="ionsStep2"></a> ### Step 2: Adding ions to neutralize the system Replace **solvent molecules** with **ions** to **neutralize** the system._____no_output_____ <code> # Genion: Adding ions to neutralize the system # Create prop dict and inputs/outputs output_genion_gro = pdbCode + '_genion.gro' output_genion_top_zip = pdbCode + '_genion_top.zip' prop={ 'neutral':True } # Launch bb on REST API token = launch_job(url = apiURL + 'launch/biobb_md/genion', config = prop, input_tpr_path = output_gppion_tpr, input_top_zip_path = output_solvate_top_zip, output_gro_path = output_genion_gro, output_top_zip_path = 
output_genion_top_zip)_____no_output_____# Check job status out_files = check_job(token, apiURL)_____no_output_____# Save generated file to disk retrieve_data(out_files, apiURL)_____no_output_____ </code> ### Visualizing 3D structure Visualizing the **neutralized protein system** with the newly added **ions** using **NGL**_____no_output_____ <code> # Show protein view = nglview.show_structure_file(output_genion_gro) view.clear_representations() view.add_representation(repr_type='cartoon', selection='solute', color='sstruc') view.add_representation(repr_type='ball+stick', selection='NA') view.add_representation(repr_type='ball+stick', selection='CL') view._remote_call('setSize', target='Widget', args=['','600px']) view.camera='orthographic' view_____no_output_____ </code> <a id="min"></a> *** ## Energetically minimize the system Energetically minimize the **protein system** till reaching a desired potential energy. - [Step 1](#emStep1): Creating portable binary run file for energy minimization - [Step 2](#emStep2): Energetically minimize the **system** till reaching a force of 500 kJ mol-1 nm-1. - [Step 3](#emStep3): Checking **energy minimization** results. Plotting energy by time during the **minimization** process. *** **BioBB REST API** end points used: - [Grompp](https://mmb.irbbarcelona.org/biobb-api/rest/v1/launch/biobb_md/grompp) from **biobb_md.gromacs.grompp** - [Mdrun](https://mmb.irbbarcelona.org/biobb-api/rest/v1/launch/biobb_md/mdrun) from **biobb_md.gromacs.mdrun** - [GMXEnergy](https://mmb.irbbarcelona.org/biobb-api/rest/v1/launch/biobb_analysis/gmx_energy) from **biobb_analysis.gromacs.gmx_energy** ***_____no_output_____<a id="emStep1"></a> ### Step 1: Creating portable binary run file for energy minimization The **minimization** type of the **molecular dynamics parameters (mdp) property** contains the main default parameters to run an **energy minimization**: - integrator = steep ; Algorithm (steep = steepest descent minimization) - emtol = 1000.0 ; Stop minimization when the maximum force < 1000.0 kJ/mol/nm - emstep = 0.01 ; Minimization step size (nm) - nsteps = 50000 ; Maximum number of (minimization) steps to perform In this particular example, the method used to run the **energy minimization** is the default **steepest descent**, but the **maximum force** is placed at **500 KJ/mol\*nm^2**, and the **maximum number of steps** to perform (if the maximum force is not reached) to **5,000 steps**. _____no_output_____ <code> # Grompp: Creating portable binary run file for mdrun # Create prop dict and inputs/outputs output_gppmin_tpr = pdbCode + '_gppmin.tpr' prop = { 'mdp':{ 'emtol':'500', 'nsteps':'5000' }, 'simulation_type':'minimization' } # Launch bb on REST API token = launch_job(url = apiURL + 'launch/biobb_md/grompp', config = prop, input_gro_path = output_genion_gro, input_top_zip_path = output_genion_top_zip, output_tpr_path = output_gppmin_tpr)_____no_output_____# Check job status out_files = check_job(token, apiURL)_____no_output_____# Save generated file to disk retrieve_data(out_files, apiURL)_____no_output_____ </code> <a id="emStep2"></a> ### Step 2: Running Energy Minimization Running **energy minimization** using the **tpr file** generated in the previous step. 
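As with every building block in this workflow, the step is executed through the same three REST API helpers from the local **utils** module: `launch_job`, `check_job` and `retrieve_data`. Purely as an illustration, that recurring pattern could be collected into a single hypothetical wrapper such as `run_block` below (it is not part of the utils module):

```python
def run_block(endpoint, config=None, **io_files):
    # Hypothetical wrapper around the launch / check / retrieve pattern used in
    # this notebook; launch_job, check_job and retrieve_data are the helpers
    # imported from utils at the top of the tutorial.
    kwargs = dict(io_files)
    if config is not None:
        kwargs['config'] = config
    token = launch_job(url=apiURL + endpoint, **kwargs)
    out_files = check_job(token, apiURL)   # assumed to poll until the job finishes
    retrieve_data(out_files, apiURL)       # saves the generated files to disk
    return out_files

# Hypothetical usage, equivalent to the three cells that follow:
# run_block('launch/biobb_md/mdrun',
#           input_tpr_path=output_gppmin_tpr,
#           output_trr_path=output_min_trr,
#           output_gro_path=output_min_gro,
#           output_edr_path=output_min_edr,
#           output_log_path=output_min_log)
```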
_____no_output_____ <code> # Mdrun: Running minimization # Create inputs/outputs output_min_trr = pdbCode + '_min.trr' output_min_gro = pdbCode + '_min.gro' output_min_edr = pdbCode + '_min.edr' output_min_log = pdbCode + '_min.log' # Launch bb on REST API token = launch_job(url = apiURL + 'launch/biobb_md/mdrun', input_tpr_path = output_gppmin_tpr, output_trr_path = output_min_trr, output_gro_path = output_min_gro, output_edr_path = output_min_edr, output_log_path = output_min_log)_____no_output_____# Check job status out_files = check_job(token, apiURL)_____no_output_____# Save generated file to disk retrieve_data(out_files, apiURL)_____no_output_____ </code> <a id="emStep3"></a> ### Step 3: Checking Energy Minimization results Checking **energy minimization** results. Plotting **potential energy** by time during the minimization process. _____no_output_____ <code> # GMXEnergy: Getting system energy by time # Create prop dict and inputs/outputs output_min_ene_xvg = pdbCode + '_min_ene.xvg' prop = { 'terms': ["Potential"] } # Launch bb on REST API token = launch_job(url = apiURL + 'launch/biobb_analysis/gmx_energy', config = prop, input_energy_path = output_min_edr, output_xvg_path = output_min_ene_xvg)_____no_output_____# Check job status out_files = check_job(token, apiURL)_____no_output_____# Save generated file to disk retrieve_data(out_files, apiURL)_____no_output_____import plotly import plotly.graph_objs as go #Read data from file and filter energy values higher than 1000 Kj/mol^-1 with open(output_min_ene_xvg,'r') as energy_file: x,y = map( list, zip(*[ (float(line.split()[0]),float(line.split()[1])) for line in energy_file if not line.startswith(("#","@")) if float(line.split()[1]) < 1000 ]) ) plotly.offline.init_notebook_mode(connected=True) fig = { "data": [go.Scatter(x=x, y=y)], "layout": go.Layout(title="Energy Minimization", xaxis=dict(title = "Energy Minimization Step"), yaxis=dict(title = "Potential Energy KJ/mol-1") ) } plotly.offline.iplot(fig)_____no_output_____ </code> <a id="nvt"></a> *** ## Equilibrate the system (NVT) Equilibrate the **protein system** in **NVT ensemble** (constant Number of particles, Volume and Temperature). Protein **heavy atoms** will be restrained using position restraining forces: movement is permitted, but only after overcoming a substantial energy penalty. The utility of position restraints is that they allow us to equilibrate our solvent around our protein, without the added variable of structural changes in the protein. - [Step 1](#eqNVTStep1): Creating portable binary run file for system equilibration - [Step 2](#eqNVTStep2): Equilibrate the **protein system** with **NVT** ensemble. - [Step 3](#eqNVTStep3): Checking **NVT Equilibration** results. Plotting **system temperature** by time during the **NVT equilibration** process. 
*** **BioBB REST API** end points used: - [Grompp](https://mmb.irbbarcelona.org/biobb-api/rest/v1/launch/biobb_md/grompp) from **biobb_md.gromacs.grompp** - [Mdrun](https://mmb.irbbarcelona.org/biobb-api/rest/v1/launch/biobb_md/mdrun) from **biobb_md.gromacs.mdrun** - [GMXEnergy](https://mmb.irbbarcelona.org/biobb-api/rest/v1/launch/biobb_analysis/gmx_energy) from **biobb_analysis.gromacs.gmx_energy** ***_____no_output_____<a id="eqNVTStep1"></a> ### Step 1: Creating portable binary run file for system equilibration (NVT) The **nvt** type of the **molecular dynamics parameters (mdp) property** contains the main default parameters to run an **NVT equilibration** with **protein restraints** (see [GROMACS mdp options](http://manual.gromacs.org/documentation/2018/user-guide/mdp-options.html)): - Define = -DPOSRES - integrator = md - dt = 0.002 - nsteps = 5000 - pcoupl = no - gen_vel = yes - gen_temp = 300 - gen_seed = -1 In this particular example, the default parameters will be used: **md** integrator algorithm, a **step size** of **2fs**, **5,000 equilibration steps** with the protein **heavy atoms restrained**, and a temperature of **300K**. *Please note that for the sake of time this tutorial is only running 10ps of NVT equilibration, whereas in the [original example](http://www.mdtutorials.com/gmx/lysozyme/06_equil.html) the simulated time was 100ps.*_____no_output_____ <code> # Grompp: Creating portable binary run file for NVT Equilibration # Create prop dict and inputs/outputs output_gppnvt_tpr = pdbCode + '_gppnvt.tpr' prop = { 'mdp':{ 'nsteps': 5000, 'dt': 0.002, 'Define': '-DPOSRES', #'tc_grps': "DNA Water_and_ions" # NOTE: uncomment this line if working with DNA }, 'simulation_type':'nvt' } # Launch bb on REST API token = launch_job(url = apiURL + 'launch/biobb_md/grompp', config = prop, input_gro_path = output_min_gro, input_top_zip_path = output_genion_top_zip, output_tpr_path = output_gppnvt_tpr)_____no_output_____# Check job status out_files = check_job(token, apiURL)_____no_output_____# Save generated file to disk retrieve_data(out_files, apiURL)_____no_output_____ </code> <a id="eqNVTStep2"></a> ### Step 2: Running NVT equilibration Running **energy minimization** using the **tpr file** generated in the previous step._____no_output_____ <code> # Mdrun: Running Equilibration NVT # Create inputs/outputs output_nvt_trr = pdbCode + '_nvt.trr' output_nvt_gro = pdbCode + '_nvt.gro' output_nvt_edr = pdbCode + '_nvt.edr' output_nvt_log = pdbCode + '_nvt.log' output_nvt_cpt = pdbCode + '_nvt.cpt' # Launch bb on REST API token = launch_job(url = apiURL + 'launch/biobb_md/mdrun', input_tpr_path = output_gppnvt_tpr, output_trr_path = output_nvt_trr, output_gro_path = output_nvt_gro, output_edr_path = output_nvt_edr, output_log_path = output_nvt_log, output_cpt_path = output_nvt_cpt)_____no_output_____# Check job status out_files = check_job(token, apiURL)_____no_output_____# Save generated file to disk retrieve_data(out_files, apiURL)_____no_output_____ </code> <a id="eqNVTStep3"></a> ### Step 3: Checking NVT Equilibration results Checking **NVT Equilibration** results. Plotting **system temperature** by time during the NVT equilibration process. 
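The plotting cells in this and the later analysis steps parse the GROMACS **.xvg** output inline. As an aside, that parsing could be factored into a small helper such as the hypothetical `read_xvg` sketch below (not part of the tutorial code):

```python
def read_xvg(path):
    # Hypothetical helper: read the numeric columns of a GROMACS .xvg file,
    # skipping the '#' and '@' header lines written by gmx energy.
    rows = []
    with open(path) as handle:
        for line in handle:
            if line.startswith(('#', '@')):
                continue
            rows.append([float(value) for value in line.split()])
    # Transpose rows into per-column lists: (time, term_1, term_2, ...)
    return [list(column) for column in zip(*rows)]

# Hypothetical usage for the temperature plot below:
# time_ps, temperature_k = read_xvg(output_nvt_temp_xvg)
```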
_____no_output_____ <code> # GMXEnergy: Getting system temperature by time during NVT Equilibration # Create prop dict and inputs/outputs output_nvt_temp_xvg = pdbCode + '_nvt_temp.xvg' prop = { 'terms': ["Temperature"] } # Launch bb on REST API token = launch_job(url = apiURL + 'launch/biobb_analysis/gmx_energy', config = prop, input_energy_path = output_nvt_edr, output_xvg_path = output_nvt_temp_xvg)_____no_output_____# Check job status out_files = check_job(token, apiURL)_____no_output_____# Save generated file to disk retrieve_data(out_files, apiURL)_____no_output_____import plotly import plotly.graph_objs as go # Read temperature data from file with open(output_nvt_temp_xvg,'r') as temperature_file: x,y = map( list, zip(*[ (float(line.split()[0]),float(line.split()[1])) for line in temperature_file if not line.startswith(("#","@")) ]) ) plotly.offline.init_notebook_mode(connected=True) fig = { "data": [go.Scatter(x=x, y=y)], "layout": go.Layout(title="Temperature during NVT Equilibration", xaxis=dict(title = "Time (ps)"), yaxis=dict(title = "Temperature (K)") ) } plotly.offline.iplot(fig)_____no_output_____ </code> <a id="npt"></a> *** ## Equilibrate the system (NPT) Equilibrate the **protein system** in **NPT** ensemble (constant Number of particles, Pressure and Temperature). - [Step 1](#eqNPTStep1): Creating portable binary run file for system equilibration - [Step 2](#eqNPTStep2): Equilibrate the **protein system** with **NPT** ensemble. - [Step 3](#eqNPTStep3): Checking **NPT Equilibration** results. Plotting **system pressure and density** by time during the **NPT equilibration** process. *** **BioBB REST API** end points used: - [Grompp](https://mmb.irbbarcelona.org/biobb-api/rest/v1/launch/biobb_md/grompp) from **biobb_md.gromacs.grompp** - [Mdrun](https://mmb.irbbarcelona.org/biobb-api/rest/v1/launch/biobb_md/mdrun) from **biobb_md.gromacs.mdrun** - [GMXEnergy](https://mmb.irbbarcelona.org/biobb-api/rest/v1/launch/biobb_analysis/gmx_energy) from **biobb_analysis.gromacs.gmx_energy** ***_____no_output_____<a id="eqNPTStep1"></a> ### Step 1: Creating portable binary run file for system equilibration (NPT) The **npt** type of the **molecular dynamics parameters (mdp) property** contains the main default parameters to run an **NPT equilibration** with **protein restraints** (see [GROMACS mdp options](http://manual.gromacs.org/documentation/2018/user-guide/mdp-options.html)): - Define = -DPOSRES - integrator = md - dt = 0.002 - nsteps = 5000 - pcoupl = Parrinello-Rahman - pcoupltype = isotropic - tau_p = 1.0 - ref_p = 1.0 - compressibility = 4.5e-5 - refcoord_scaling = com - gen_vel = no In this particular example, the default parameters will be used: **md** integrator algorithm, a **time step** of **2fs**, **5,000 equilibration steps** with the protein **heavy atoms restrained**, and a Parrinello-Rahman **pressure coupling** algorithm. 
*Please note that for the sake of time this tutorial is only running 10ps of NPT equilibration, whereas in the [original example](http://www.mdtutorials.com/gmx/lysozyme/07_equil2.html) the simulated time was 100ps.*_____no_output_____ <code> # Grompp: Creating portable binary run file for NPT System Equilibration # Create prop dict and inputs/outputs output_gppnpt_tpr = pdbCode + '_gppnpt.tpr' prop = { 'mdp':{ 'nsteps':'5000', #'tc_grps': "DNA Water_and_ions" # NOTE: uncomment this line if working with DNA }, 'simulation_type':'npt' } # Launch bb on REST API token = launch_job(url = apiURL + 'launch/biobb_md/grompp', config = prop, input_gro_path = output_nvt_gro, input_top_zip_path = output_genion_top_zip, output_tpr_path = output_gppnpt_tpr)_____no_output_____# Check job status out_files = check_job(token, apiURL)_____no_output_____# Save generated file to disk retrieve_data(out_files, apiURL)_____no_output_____ </code> <a id="eqNPTStep2"></a> ### Step 2: Running NPT equilibration_____no_output_____ <code> # Mdrun: Running NPT System Equilibration # Create inputs/outputs output_npt_trr = pdbCode + '_npt.trr' output_npt_gro = pdbCode + '_npt.gro' output_npt_edr = pdbCode + '_npt.edr' output_npt_log = pdbCode + '_npt.log' output_npt_cpt = pdbCode + '_npt.cpt' # Launch bb on REST API token = launch_job(url = apiURL + 'launch/biobb_md/mdrun', input_tpr_path = output_gppnpt_tpr, output_trr_path = output_npt_trr, output_gro_path = output_npt_gro, output_edr_path = output_npt_edr, output_log_path = output_npt_log, output_cpt_path = output_npt_cpt)_____no_output_____# Check job status out_files = check_job(token, apiURL)_____no_output_____# Save generated file to disk retrieve_data(out_files, apiURL)_____no_output_____ </code> <a id="eqNPTStep3"></a> ### Step 3: Checking NPT Equilibration results Checking **NPT Equilibration** results. Plotting **system pressure and density** by time during the **NPT equilibration** process. 
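Besides the plots, a quick numerical convergence check is to average the pressure and density over the second half of the run. The snippet below is only a sketch: it assumes the GMXEnergy cell further down has already written `output_npt_pd_xvg`, and it uses the same column order (time, pressure, density) as the plotting cell:

```python
import numpy as np

# Rough convergence check over the second half of the NPT run (sketch only;
# run it after the gmx_energy cell below has produced output_npt_pd_xvg)
data = np.array([[float(v) for v in line.split()]
                 for line in open(output_npt_pd_xvg)
                 if not line.startswith(('#', '@'))])
half = data[len(data) // 2:]
print('Pressure (bar): %.1f +/- %.1f' % (half[:, 1].mean(), half[:, 1].std()))
print('Density (kg/m^3): %.1f +/- %.1f' % (half[:, 2].mean(), half[:, 2].std()))
```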
_____no_output_____ <code> # GMXEnergy: Getting system pressure and density by time during NPT Equilibration # Create prop dict and inputs/outputs output_npt_pd_xvg = pdbCode + '_npt_PD.xvg' prop = { 'terms': ["Pressure","Density"] } # Launch bb on REST API token = launch_job(url = apiURL + 'launch/biobb_analysis/gmx_energy', config = prop, input_energy_path = output_npt_edr, output_xvg_path = output_npt_pd_xvg)_____no_output_____# Check job status out_files = check_job(token, apiURL)_____no_output_____# Save generated file to disk retrieve_data(out_files, apiURL)_____no_output_____import plotly from plotly import subplots import plotly.graph_objs as go # Read pressure and density data from file with open(output_npt_pd_xvg,'r') as pd_file: x,y,z = map( list, zip(*[ (float(line.split()[0]),float(line.split()[1]),float(line.split()[2])) for line in pd_file if not line.startswith(("#","@")) ]) ) plotly.offline.init_notebook_mode(connected=True) trace1 = go.Scatter( x=x,y=y ) trace2 = go.Scatter( x=x,y=z ) fig = subplots.make_subplots(rows=1, cols=2, print_grid=False) fig.append_trace(trace1, 1, 1) fig.append_trace(trace2, 1, 2) fig['layout']['xaxis1'].update(title='Time (ps)') fig['layout']['xaxis2'].update(title='Time (ps)') fig['layout']['yaxis1'].update(title='Pressure (bar)') fig['layout']['yaxis2'].update(title='Density (Kg*m^-3)') fig['layout'].update(title='Pressure and Density during NPT Equilibration') fig['layout'].update(showlegend=False) plotly.offline.iplot(fig)_____no_output_____ </code> <a id="free"></a> *** ## Free Molecular Dynamics Simulation Upon completion of the **two equilibration phases (NVT and NPT)**, the system is now well-equilibrated at the desired temperature and pressure. The **position restraints** can now be released. The last step of the **protein** MD setup is a short, **free MD simulation**, to ensure the robustness of the system. - [Step 1](#mdStep1): Creating portable binary run file to run a **free MD simulation**. - [Step 2](#mdStep2): Run short MD simulation of the **protein system**. - [Step 3](#mdStep3): Checking results for the final step of the setup process, the **free MD run**. Plotting **Root Mean Square deviation (RMSd)** and **Radius of Gyration (Rgyr)** by time during the **free MD run** step. *** **BioBB REST API** end points used: - [Grompp](https://mmb.irbbarcelona.org/biobb-api/rest/v1/launch/biobb_md/grompp) from **biobb_md.gromacs.grompp** - [Mdrun](https://mmb.irbbarcelona.org/biobb-api/rest/v1/launch/biobb_md/mdrun) from **biobb_md.gromacs.mdrun** - [GMXRms](https://mmb.irbbarcelona.org/biobb-api/rest/v1/launch/biobb_analysis/gmx_rms) from **biobb_analysis.gromacs.gmx_rms** - [GMXRgyr](https://mmb.irbbarcelona.org/biobb-api/rest/v1/launch/biobb_analysis/gmx_rgyr) from **biobb_analysis.gromacs.gmx_rgyr** ***_____no_output_____<a id="mdStep1"></a> ### Step 1: Creating portable binary run file to run a free MD simulation The **free** type of the **molecular dynamics parameters (mdp) property** contains the main default parameters to run an **free MD simulation** (see [GROMACS mdp options](http://manual.gromacs.org/documentation/2018/user-guide/mdp-options.html)): - integrator = md - dt = 0.002 (ps) - nsteps = 50000 In this particular example, the default parameters will be used: **md** integrator algorithm, a **time step** of **2fs**, and a total of **50,000 md steps** (100ps). 
*Please note that for the sake of time this tutorial is only running 100ps of free MD, whereas in the [original example](http://www.mdtutorials.com/gmx/lysozyme/08_MD.html) the simulated time was 1ns (1000ps).*_____no_output_____ <code> # Grompp: Creating portable binary run file for mdrun # Create prop dict and inputs/outputs output_gppmd_tpr = pdbCode + '_gppmd.tpr' prop = { 'mdp':{ 'nsteps':'50000', #'tc_grps': "DNA Water_and_ions" # NOTE: uncomment this line if working with DNA }, 'simulation_type':'free' } # Launch bb on REST API token = launch_job(url = apiURL + 'launch/biobb_md/grompp', config = prop, input_gro_path = output_npt_gro, input_top_zip_path = output_genion_top_zip, output_tpr_path = output_gppmd_tpr)_____no_output_____# Check job status out_files = check_job(token, apiURL)_____no_output_____# Save generated file to disk retrieve_data(out_files, apiURL)_____no_output_____ </code> <a id="mdStep2"></a> ### Step 2: Running short free MD simulation_____no_output_____ <code> # Mdrun: Running free dynamics # Create inputs/outputs output_md_trr = pdbCode + '_md.trr' output_md_gro = pdbCode + '_md.gro' output_md_edr = pdbCode + '_md.edr' output_md_log = pdbCode + '_md.log' output_md_cpt = pdbCode + '_md.cpt' # Launch bb on REST API token = launch_job(url = apiURL + 'launch/biobb_md/mdrun', input_tpr_path = output_gppmd_tpr, output_trr_path = output_md_trr, output_gro_path = output_md_gro, output_edr_path = output_md_edr, output_log_path = output_md_log, output_cpt_path = output_md_cpt)_____no_output_____# Check job status out_files = check_job(token, apiURL)_____no_output_____# Save generated file to disk retrieve_data(out_files, apiURL)_____no_output_____ </code> <a id="mdStep3"></a> ### Step 3: Checking free MD simulation results Checking results for the final step of the setup process, the **free MD run**. Plotting **Root Mean Square deviation (RMSd)** and **Radius of Gyration (Rgyr)** by time during the **free MD run** step. 
**RMSd** against the **experimental structure** (input structure of the pipeline) and against the **minimized and equilibrated structure** (output structure of the NPT equilibration step)._____no_output_____ <code> # GMXRms: Computing Root Mean Square deviation to analyse structural stability # RMSd against minimized and equilibrated snapshot (backbone atoms) # Create prop dict and inputs/outputs output_rms_first = pdbCode + '_rms_first.xvg' prop = { 'selection': 'Backbone', #'selection': 'non-Water' } # Launch bb on REST API token = launch_job(url = apiURL + 'launch/biobb_analysis/gmx_rms', config = prop, input_structure_path = output_gppmd_tpr, input_traj_path = output_md_trr, output_xvg_path = output_rms_first)_____no_output_____# Check job status out_files = check_job(token, apiURL)_____no_output_____# Save generated file to disk retrieve_data(out_files, apiURL)_____no_output_____# GMXRms: Computing Root Mean Square deviation to analyse structural stability # RMSd against experimental structure (backbone atoms) # Create prop dict and inputs/outputs output_rms_exp = pdbCode + '_rms_exp.xvg' prop = { 'selection': 'Backbone', #'selection': 'non-Water' } # Launch bb on REST API token = launch_job(url = apiURL + 'launch/biobb_analysis/gmx_rms', config = prop, input_structure_path = output_gppmin_tpr, input_traj_path = output_md_trr, output_xvg_path = output_rms_exp)_____no_output_____# Check job status out_files = check_job(token, apiURL)_____no_output_____# Save generated file to disk retrieve_data(out_files, apiURL)_____no_output_____import plotly import plotly.graph_objs as go # Read RMS vs first snapshot data from file with open(output_rms_first,'r') as rms_first_file: x,y = map( list, zip(*[ (float(line.split()[0]),float(line.split()[1])) for line in rms_first_file if not line.startswith(("#","@")) ]) ) # Read RMS vs experimental structure data from file with open(output_rms_exp,'r') as rms_exp_file: x2,y2 = map( list, zip(*[ (float(line.split()[0]),float(line.split()[1])) for line in rms_exp_file if not line.startswith(("#","@")) ]) ) trace1 = go.Scatter( x = x, y = y, name = 'RMSd vs first' ) trace2 = go.Scatter( x = x, y = y2, name = 'RMSd vs exp' ) data = [trace1, trace2] plotly.offline.init_notebook_mode(connected=True) fig = { "data": data, "layout": go.Layout(title="RMSd during free MD Simulation", xaxis=dict(title = "Time (ps)"), yaxis=dict(title = "RMSd (nm)") ) } plotly.offline.iplot(fig) _____no_output_____# GMXRgyr: Computing Radius of Gyration to measure the protein compactness during the free MD simulation # Create prop dict and inputs/outputs output_rgyr = pdbCode + '_rgyr.xvg' prop = { 'selection': 'Backbone' } # Launch bb on REST API token = launch_job(url = apiURL + 'launch/biobb_analysis/gmx_rgyr', config = prop, input_structure_path = output_gppmin_tpr, input_traj_path = output_md_trr, output_xvg_path = output_rgyr)_____no_output_____# Check job status out_files = check_job(token, apiURL)_____no_output_____# Save generated file to disk retrieve_data(out_files, apiURL)_____no_output_____import plotly import plotly.graph_objs as go # Read Rgyr data from file with open(output_rgyr,'r') as rgyr_file: x,y = map( list, zip(*[ (float(line.split()[0]),float(line.split()[1])) for line in rgyr_file if not line.startswith(("#","@")) ]) ) plotly.offline.init_notebook_mode(connected=True) fig = { "data": [go.Scatter(x=x, y=y)], "layout": go.Layout(title="Radius of Gyration", xaxis=dict(title = "Time (ps)"), yaxis=dict(title = "Rgyr (nm)") ) } 
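# (Interpretation note: a radius of gyration that stays roughly constant over the
#  trajectory indicates the protein remains compactly folded during the free MD run;
#  a steady upward drift would instead suggest unfolding or an unstable setup.)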
plotly.offline.iplot(fig)_____no_output_____ </code> <a id="post"></a> *** ## Post-processing and Visualizing resulting 3D trajectory Post-processing and Visualizing the **protein system** MD setup **resulting trajectory** using **NGL** - [Step 1](#ppStep1): *Imaging* the resulting trajectory, **stripping out water molecules and ions** and **correcting periodicity issues**. - [Step 2](#ppStep2): Generating a *dry* structure, **removing water molecules and ions** from the final snapshot of the MD setup pipeline. - [Step 3](#ppStep3): Visualizing the *imaged* trajectory using the *dry* structure as a **topology**. *** **BioBB REST API** end points used: - [GMXImage](https://mmb.irbbarcelona.org/biobb-api/rest/v1/launch/biobb_analysis/gmx_image) from **biobb_analysis.gromacs.gmx_image** - [GMXTrjConvStr](https://mmb.irbbarcelona.org/biobb-api/rest/v1/launch/biobb_analysis/gmx_trjconv_str) from **biobb_analysis.gromacs.gmx_trjconv_str** ***_____no_output_____<a id="ppStep1"></a> ### Step 1: *Imaging* the resulting trajectory. Stripping out **water molecules and ions** and **correcting periodicity issues** _____no_output_____ <code> # GMXImage: "Imaging" the resulting trajectory # Removing water molecules and ions from the resulting structure # Create prop dict and inputs/outputs output_imaged_traj = pdbCode + '_imaged_traj.trr' prop = { 'center_selection': 'Protein', 'output_selection': 'Protein', 'pbc' : 'mol', 'center' : True } # Launch bb on REST API token = launch_job(url = apiURL + 'launch/biobb_analysis/gmx_image', config = prop, input_traj_path = output_md_trr, input_top_path = output_gppmd_tpr, output_traj_path = output_imaged_traj)_____no_output_____# Check job status out_files = check_job(token, apiURL)_____no_output_____# Save generated file to disk retrieve_data(out_files, apiURL)_____no_output_____ </code> <a id="ppStep2"></a> ### Step 2: Generating the output *dry* structure. **Removing water molecules and ions** from the resulting structure_____no_output_____ <code> # GMXTrjConvStr: Converting and/or manipulating a structure # Removing water molecules and ions from the resulting structure # The "dry" structure will be used as a topology to visualize # the "imaged dry" trajectory generated in the previous step. # Create prop dict and inputs/outputs output_dry_gro = pdbCode + '_md_dry.gro' prop = { 'selection': 'Protein' } # Launch bb on REST API token = launch_job(url = apiURL + 'launch/biobb_analysis/gmx_trjconv_str', config = prop, input_structure_path = output_md_gro, input_top_path = output_gppmd_tpr, output_str_path = output_dry_gro)_____no_output_____# Check job status out_files = check_job(token, apiURL)_____no_output_____# Save generated file to disk retrieve_data(out_files, apiURL)_____no_output_____ </code> <a id="ppStep3"></a> ### Step 3: Visualizing the generated dehydrated trajectory. Using the **imaged trajectory** (output of the [Post-processing step 1](#ppStep1)) with the **dry structure** (output of the [Post-processing step 2](#ppStep2)) as a topology._____no_output_____ <code> # Show trajectory view = nglview.show_simpletraj(nglview.SimpletrajTrajectory(output_imaged_traj, output_dry_gro), gui=True) view_____no_output_____ </code> <a id="output"></a> ## Output files Important **Output files** generated: - {{output_md_gro}}: **Final structure** (snapshot) of the MD setup protocol. - {{output_md_trr}}: **Final trajectory** of the MD setup protocol. - {{output_md_cpt}}: **Final checkpoint file**, with information about the state of the simulation. 
It can be used to **restart** or **continue** an MD simulation.
 - {{output_gppmd_tpr}}: **Final tpr file**, GROMACS portable binary run input file. This file contains the starting structure of the **MD setup free MD simulation step**, together with the molecular topology and all the simulation parameters. It can be used to **extend** the simulation.
 - {{output_genion_top_zip}}: **Final topology** of the MD system. It is a compressed zip file including a **topology file** (.top) and a set of auxiliary **include topology** files (.itp).

**Analysis** (MD setup check) output files generated:

 - {{output_rms_first}}: **Root Mean Square deviation (RMSd)** against the **minimized and equilibrated structure** of the final **free MD run step**.
 - {{output_rms_exp}}: **Root Mean Square deviation (RMSd)** against the **experimental structure** of the final **free MD run step**.
 - {{output_rgyr}}: **Radius of Gyration** of the final **free MD run step** of the **setup pipeline**.
_____no_output_____***
<a id="questions"></a>
## Questions & Comments

Questions, issues, suggestions and comments are really welcome!

* GitHub issues:
    * [https://github.com/bioexcel/biobb](https://github.com/bioexcel/biobb)

* BioExcel forum:
    * [https://ask.bioexcel.eu/c/BioExcel-Building-Blocks-library](https://ask.bioexcel.eu/c/BioExcel-Building-Blocks-library)
_____no_output_____
{ "repository": "bioexcel/biobb_wf_md_setup_api", "path": "biobb_wf_md_setup_api/notebooks/biobb_MDsetupAPI_tutorial.ipynb", "matched_keywords": [ "molecular dynamics" ], "stars": null, "size": 64162, "hexsha": "48847355ca9b67305e25573ac5bad22857da14c7", "max_line_length": 445, "avg_line_length": 33.7517096265, "alphanum_fraction": 0.5618590443 }
# Notebook from morales-gregorio/elephant Path: doc/tutorials/unitary_event_analysis.ipynb # The Unitary Events Analysis_____no_output_____The executed version of this tutorial is at https://elephant.readthedocs.io/en/latest/tutorials/unitary_event_analysis.html The Unitary Events (UE) analysis \[1\] tool allows us to reliably detect correlated spiking activity that is not explained by the firing rates of the neurons alone. It was designed to detect coordinated spiking activity that occurs significantly more often than predicted by the firing rates of the neurons. The method allows one to analyze correlations not only between pairs of neurons but also between multiple neurons, by considering the various spike patterns across the neurons. In addition, the method allows one to extract the dynamics of correlation between the neurons by perform-ing the analysis in a time-resolved manner. This enables us to relate the occurrence of spike synchrony to behavior. The algorithm: 1. Align trials, decide on width of analysis window. 2. Decide on allowed coincidence width. 3. Perform a sliding window analysis. In each window: 1. Detect and count coincidences. 2. Calculate expected number of coincidences. 3. Evaluate significance of detected coincidences. 4. If significant, the window contains Unitary Events. 4. Explore behavioral relevance of UE epochs. References: 1. Grün, S., Diesmann, M., Grammont, F., Riehle, A., & Aertsen, A. (1999). Detecting unitary events without discretization of time. Journal of neuroscience methods, 94(1), 67-79._____no_output_____ <code> import random import string import numpy as np import matplotlib.pyplot as plt import quantities as pq import neo import elephant.unitary_event_analysis as ue # Fix random seed to guarantee fixed output random.seed(1224)_____no_output_____ </code> Next, we download a data file containing spike train data from multiple trials of two neurons._____no_output_____ <code> # Download data !curl https://web.gin.g-node.org/INM-6/elephant-data/raw/master/dataset-1/dataset-1.h5 --output dataset-1.h5 --location_____no_output_____ </code> # Write a plotting function_____no_output_____ <code> # borrowed from Viziphant plot_params_default = { # epochs to be marked on the time axis 'events': {}, # figure size 'figsize': (10, 12), # right margin 'right': 0.9, # top margin 'top': 0.9, # bottom margin 'bottom': 0.1, # left margin 'left': 0.1, # horizontal white space between subplots 'hspace': 0.5, # width white space between subplots 'wspace': 0.5, # font size 'fsize': 12, # the actual unit ids from the experimental recording 'unit_real_ids': None, # line width 'lw': 2, # marker size for the UEs and coincidences 'ms': 5, # figure title 'suptitle': None, } def plot_ue(spiketrains, Js_dict, significance_level=0.05, **plot_params): """ Plots the results of pairwise unitary event analysis as a column of six subplots, comprised of raster plot, peri-stimulus time histogram, coincident event plot, coincidence rate plot, significance plot and unitary event plot, respectively. Parameters ---------- spiketrains : list of list of neo.SpikeTrain A nested list of trials, neurons and their neo.SpikeTrain objects, respectively. This should be identical to the one used to generate Js_dict. Js_dict : dict The output of :func:`elephant.unitary_event_analysis.jointJ_window_analysis` function. The values of each key has the shape of: * different window --> 0-axis. 
* different pattern hash --> 1-axis; Dictionary keys: 'Js': list of float JointSurprise of different given patterns within each window. 'indices': list of list of int A list of indices of pattern within each window. 'n_emp': list of int The empirical number of each observed pattern. 'n_exp': list of float The expected number of each pattern. 'rate_avg': list of float The average firing rate of each neuron. significance_level : float The significance threshold used to determine which coincident events are classified as unitary events within a window. **plot_params User-defined plotting parameters used to update the default plotting parameter values. The valid keys: 'events' : list Epochs to be marked on the time axis. 'figsize' : tuple of int The dimensions for the figure size. 'right' : float The size of the right margin. 'top' : float The size of the top margin. 'bottom' : float The size of the bottom margin. 'left' : float The size of the left margin. 'hspace' : flaot The size of the horizontal white space between subplots. 'wspace' : float The width of the white space between subplots. 'fsize' : int The size of the font. 'unit_real_ids' : list of int The unit ids from the experimental recording. 'lw' : int The default line width. 'ms' : int The marker size for the unitary events and coincidences. Returns ------- result : FigureUE The container for Axes objects generated by the function. Individual axes can be accessed using the following identifiers: * axes_spike_events : matplotlib.axes.Axes Contains the elements of the spike events subplot. * axes_spike_rates : matplotlib.axes.Axes Contains the elements of the spike rates subplot. * axes_coincident_events : matplotlib.axes.Axes Contains the elements of the coincident events subplot. * axes_coincidence_rates : matplotlib.axes.Axes Contains the elements of the coincidence rates subplot. * axes_significance : matplotlib.axes.Axes Contains the elements of the statistical significance subplot. * axes_unitary_events : matplotlib.axes.Axes Contains the elements of the unitary events subplot. Examples -------- Unitary Events of homogenous Poisson random processes. Since we don't expect to find significant correlations in random processes, we show non-significant events (``significance_level=0.34``). Typically, in your analyses, the significant level threshold is ~0.05. .. plot:: :include-source: import matplotlib.pyplot as plt import numpy as np import quantities as pq import viziphant from elephant.spike_train_generation import homogeneous_poisson_process from elephant.unitary_event_analysis import jointJ_window_analysis np.random.seed(10) spiketrains1 = [homogeneous_poisson_process(rate=20 * pq.Hz, t_stop=2 * pq.s) for _ in range(5)] spiketrains2 = [homogeneous_poisson_process(rate=50 * pq.Hz, t_stop=2 * pq.s) for _ in range(5)] spiketrains = np.stack((spiketrains1, spiketrains2), axis=1) ue_dict = jointJ_window_analysis(spiketrains, bin_size=5 * pq.ms, win_size=100 * pq.ms, win_step=10 * pq.ms) viziphant.unitary_event_analysis.plot_ue(spiketrains, Js_dict=ue_dict, significance_level=0.34, unit_real_ids=['1', '2']) plt.show() Refer to `UEA Tutorial <https://elephant.readthedocs.io/en/latest/ tutorials/unitary_event_analysis.html>`_ for real-case scenario. 
""" n_trials = len(spiketrains) n_neurons = len(spiketrains[0]) input_parameters = Js_dict['input_parameters'] t_start = input_parameters['t_start'] t_stop = input_parameters['t_stop'] bin_size = input_parameters['bin_size'] win_size = input_parameters['win_size'] win_step = input_parameters['win_step'] pattern_hash = input_parameters['pattern_hash'] if len(pattern_hash) > 1: raise ValueError(f"To not clutter the plots, only one pattern hash is " f"required; got {pattern_hash}. You can call this " f"function multiple times for each hash at a time.") for key in ['Js', 'n_emp', 'n_exp', 'rate_avg']: Js_dict[key] = Js_dict[key].squeeze() neurons_participated = ue.inverse_hash_from_pattern(pattern_hash, N=n_neurons).squeeze() t_winpos = ue._winpos(t_start=t_start, t_stop=t_stop, win_size=win_size, win_step=win_step) Js_sig = ue.jointJ(significance_level) # figure format plot_params_user = plot_params plot_params = plot_params_default.copy() plot_params.update(plot_params_user) if plot_params['unit_real_ids'] is None: plot_params['unit_real_ids'] = ['not specified'] * n_neurons if len(plot_params['unit_real_ids']) != n_neurons: raise ValueError('length of unit_ids should be' + 'equal to number of neurons!') plt.rcParams.update({'font.size': plot_params['fsize']}) ls = '-' alpha = 0.5 fig, axes = plt.subplots(nrows=6, sharex=True, figsize=plot_params['figsize']) axes[5].get_shared_y_axes().join(axes[0], axes[2], axes[5]) for ax in (axes[0], axes[2], axes[5]): for n in range(n_neurons): for tr, data_tr in enumerate(spiketrains): ax.plot(data_tr[n].rescale('ms').magnitude, np.full_like(data_tr[n].magnitude, fill_value=n * n_trials + tr), '.', markersize=0.5, color='k') for n in range(1, n_neurons): # subtract 0.5 to separate the raster plots; # otherwise, the line crosses the raster spikes ax.axhline(n * n_trials - 0.5, lw=0.5, color='k') ymax = max(ax.get_ylim()[1], 2 * n_trials - 0.5) ax.set_ylim([-0.5, ymax]) ax.set_yticks([n_trials - 0.5, 2 * n_trials - 0.5]) ax.set_yticklabels([1, n_trials], fontsize=plot_params['fsize']) ax.set_ylabel('Trial', fontsize=plot_params['fsize']) for i, ax in enumerate(axes): ax.set_xlim([t_winpos[0], t_winpos[-1] + win_size]) ax.text(-0.05, 1.1, string.ascii_uppercase[i], transform=ax.transAxes, size=plot_params['fsize'] + 5, weight='bold') for key in plot_params['events'].keys(): for event_time in plot_params['events'][key]: ax.axvline(event_time, ls=ls, color='r', lw=plot_params['lw'], alpha=alpha) axes[0].set_title('Spike Events') axes[0].text(1.0, 1.0, f"Unit {plot_params['unit_real_ids'][-1]}", fontsize=plot_params['fsize'] // 2, horizontalalignment='right', verticalalignment='bottom', transform=axes[0].transAxes) axes[0].text(1.0, 0, f"Unit {plot_params['unit_real_ids'][0]}", fontsize=plot_params['fsize'] // 2, horizontalalignment='right', verticalalignment='top', transform=axes[0].transAxes) axes[1].set_title('Spike Rates') for n in range(n_neurons): axes[1].plot(t_winpos + win_size / 2., Js_dict['rate_avg'][:, n].rescale('Hz'), label=f"Unit {plot_params['unit_real_ids'][n]}", lw=plot_params['lw']) axes[1].set_ylabel('Hz', fontsize=plot_params['fsize']) axes[1].legend(fontsize=plot_params['fsize'] // 2, loc='upper right') axes[1].locator_params(axis='y', tight=True, nbins=3) axes[2].set_title('Coincident Events') for n in range(n_neurons): if not neurons_participated[n]: continue for tr, data_tr in enumerate(spiketrains): indices = np.unique(Js_dict['indices'][f'trial{tr}']) axes[2].plot(indices * bin_size, np.full_like(indices, fill_value=n * n_trials + 
tr), ls='', ms=plot_params['ms'], marker='s', markerfacecolor='none', markeredgecolor='c') axes[2].set_ylabel('Trial', fontsize=plot_params['fsize']) axes[3].set_title('Coincidence Rates') axes[3].plot(t_winpos + win_size / 2., Js_dict['n_emp'] / ( win_size.rescale('s').magnitude * n_trials), label='Empirical', lw=plot_params['lw'], color='c') axes[3].plot(t_winpos + win_size / 2., Js_dict['n_exp'] / ( win_size.rescale('s').magnitude * n_trials), label='Expected', lw=plot_params['lw'], color='m') axes[3].set_ylabel('Hz', fontsize=plot_params['fsize']) axes[3].legend(fontsize=plot_params['fsize'] // 2, loc='upper right') axes[3].locator_params(axis='y', tight=True, nbins=3) axes[4].set_title('Statistical Significance') axes[4].plot(t_winpos + win_size / 2., Js_dict['Js'], lw=plot_params['lw'], color='k') axes[4].axhline(Js_sig, ls='-', color='r') axes[4].axhline(-Js_sig, ls='-', color='g') xlim_ax4 = axes[4].get_xlim()[1] alpha_pos_text = axes[4].text(xlim_ax4, Js_sig, r'$\alpha +$', color='r', horizontalalignment='right', verticalalignment='bottom') alpha_neg_text = axes[4].text(xlim_ax4, -Js_sig, r'$\alpha -$', color='g', horizontalalignment='right', verticalalignment='top') axes[4].set_yticks([ue.jointJ(1 - significance_level), ue.jointJ(0.5), ue.jointJ(significance_level)]) # Try '1 - 0.34' to see the floating point errors axes[4].set_yticklabels(np.round([1 - significance_level, 0.5, significance_level], decimals=6)) # autoscale fix to mind the text positions. # See https://stackoverflow.com/questions/11545062/ # matplotlib-autoscale-axes-to-include-annotations plt.get_current_fig_manager().canvas.draw() for text_handle in (alpha_pos_text, alpha_neg_text): bbox = text_handle.get_window_extent() bbox_data = bbox.transformed(axes[4].transData.inverted()) axes[4].update_datalim(bbox_data.corners(), updatex=False) axes[4].autoscale_view() mask_nonnan = ~np.isnan(Js_dict['Js']) significant_win_idx = np.nonzero(Js_dict['Js'][mask_nonnan] >= Js_sig)[0] t_winpos_significant = t_winpos[mask_nonnan][significant_win_idx] axes[5].set_title('Unitary Events') if len(t_winpos_significant) > 0: for n in range(n_neurons): if not neurons_participated[n]: continue for tr, data_tr in enumerate(spiketrains): indices = np.unique(Js_dict['indices'][f'trial{tr}']) indices_significant = [] for t_sig in t_winpos_significant: mask = (indices * bin_size >= t_sig ) & (indices * bin_size < t_sig + win_size) indices_significant.append(indices[mask]) indices_significant = np.hstack(indices_significant) indices_significant = np.unique(indices_significant) # does nothing if indices_significant is empty axes[5].plot(indices_significant * bin_size, np.full_like(indices_significant, fill_value=n * n_trials + tr), ms=plot_params['ms'], marker='s', ls='', mfc='none', mec='r') axes[5].set_xlabel(f'Time ({t_winpos.dimensionality})', fontsize=plot_params['fsize']) for key in plot_params['events'].keys(): for event_time in plot_params['events'][key]: axes[5].text(event_time - 10 * pq.ms, axes[5].get_ylim()[0] - 35, key, fontsize=plot_params['fsize'], color='r') plt.suptitle(plot_params['suptitle'], fontsize=20) plt.subplots_adjust(top=plot_params['top'], right=plot_params['right'], left=plot_params['left'], bottom=plot_params['bottom'], hspace=plot_params['hspace'], wspace=plot_params['wspace']) return axes_____no_output_____ </code> # Load data and extract spiketrains_____no_output_____ <code> block = neo.io.NeoHdf5IO("./dataset-1.h5") sts1 = block.read_block().segments[0].spiketrains sts2 = 
block.read_block().segments[1].spiketrains
spiketrains = np.vstack((sts1,sts2)).T_____no_output_____ </code> # Calculate Unitary Events_____no_output_____ <code> UE = ue.jointJ_window_analysis(
    spiketrains, bin_size=5*pq.ms, win_size=100*pq.ms,
    win_step=10*pq.ms, pattern_hash=[3])

plot_ue(spiketrains, UE, significance_level=0.05)
plt.show()_____no_output_____ </code>
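To make step 3 of the algorithm above ("calculate expected number of coincidences" and "evaluate significance") more concrete: with two units, `pattern_hash=[3]` corresponds to the binary pattern `11`, i.e. coincident spiking of both units. The sketch below is a simplified, back-of-the-envelope version of these statistics under an independence assumption; it is not part of the original tutorial, the rates, trial count, and observed count are made up, and it is not the exact estimator elephant implements.

```
# Illustrative numbers only (not taken from dataset-1.h5).
import numpy as np
from scipy.stats import poisson

rate1, rate2 = 20.0, 25.0      # assumed firing rates of the two units (Hz)
bin_size = 0.005               # 5 ms bins, as in the analysis above (s)
win_size = 0.100               # 100 ms sliding window (s)
n_trials = 30                  # assumed number of trials

n_bins = int(win_size / bin_size) * n_trials        # bins entering one window
p_joint = (rate1 * bin_size) * (rate2 * bin_size)   # P(coincidence per bin) if independent
n_exp = p_joint * n_bins                            # expected number of coincidences

n_emp = 12                                          # assumed observed coincidences
p_value = poisson.sf(n_emp - 1, n_exp)              # P(N >= n_emp) under a Poisson model
joint_surprise = np.log10((1 - p_value) / p_value)  # same scale as the 'Js' values above
print(f"n_exp = {n_exp:.2f}, p = {p_value:.3g}, Js = {joint_surprise:.2f}")
```

With `significance_level=0.05`, windows whose joint surprise exceeds roughly `log10(0.95/0.05)`, i.e. about 1.28, are the ones marked as unitary events in the plot above.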
{ "repository": "morales-gregorio/elephant", "path": "doc/tutorials/unitary_event_analysis.ipynb", "matched_keywords": [ "neuroscience" ], "stars": 121, "size": 24079, "hexsha": "48855eda47c5746c36f6d8e65efeb4787e0eba72", "max_line_length": 718, "avg_line_length": 41.5872193437, "alphanum_fraction": 0.5272644213 }
# Notebook from bmg-pcl/astronomy-python Path: _extras/notebooks/07-plot.ipynb # 7. Visualization This is the seventh in a series of notebooks related to astronomy data. As a continuing example, we will replicate part of the analysis in a recent paper, "[Off the beaten path: Gaia reveals GD-1 stars outside of the main stream](https://arxiv.org/abs/1805.00425)" by Adrian M. Price-Whelan and Ana Bonaca. In the previous notebook we selected photometry data from Pan-STARRS and used it to identify stars we think are likely to be in GD-1 In this notebook, we'll take the results from previous lessons and use them to make a figure that tells a compelling scientific story._____no_output_____## Outline Here are the steps in this notebook: 1. Starting with the figure from the previous notebook, we'll add annotations to present the results more clearly. 2. The we'll see several ways to customize figures to make them more appealing and effective. 3. Finally, we'll see how to make a figure with multiple panels or subplots. After completing this lesson, you should be able to * Design a figure that tells a compelling story. * Use Matplotlib features to customize the appearance of figures. * Generate a figure with multiple subplots._____no_output_____## Making Figures That Tell a Story So far the figure we've made have been "quick and dirty". Mostly we have used Matplotlib's default style, although we have adjusted a few parameters, like `markersize` and `alpha`, to improve legibility. Now that the analysis is done, it's time to think more about: 1. Making professional-looking figures that are ready for publication, and 2. Making figures that communicate a scientific result clearly and compellingly. Not necessarily in that order._____no_output_____Let's start by reviewing Figure 1 from the original paper. We've seen the individual panels, but now let's look at the whole thing, along with the caption: <img width="500" src="https://github.com/datacarpentry/astronomy-python/raw/gh-pages/fig/gd1-5.png">_____no_output_____### Exercise Think about the following questions: 1. What is the primary scientific result of this work? 2. What story is this figure telling? 3. In the design of this figure, can you identify 1-2 choices the authors made that you think are effective? Think about big-picture elements, like the number of panels and how they are arranged, as well as details like the choice of typeface. 4. Can you identify 1-2 elements that could be improved, or that you might have done differently?_____no_output_____ <code> # Solution # Some topics that might come up in this discussion: # 1. The primary result is that the multiple stages of selection # make it possible to separate likely candidates from the # background more effectively than in previous work, which makes # it possible to see the structure of GD-1 in "unprecedented detail". # 2. The figure documents the selection process as a sequence of # steps. Reading right-to-left, top-to-bottom, we see selection # based on proper motion, the results of the first selection, # selection based on color and magnitude, and the results of the # second selection. So this figure documents the methodology and # presents the primary result. # 3. It's mostly black and white, with minimal use of color, so # it will work well in print. The annotations in the bottom # left panel guide the reader to the most important results. # It contains enough technical detail for a professional audience, # but most of it is also comprehensible to a more general audience. 
# The two left panels have the same dimensions and their axes are # aligned. # 4. Since the panels represent a sequence, it might be better to # arrange them left-to-right. The placement and size of the axis # labels could be tweaked. The entire figure could be a little # bigger to match the width and proportion of the caption. # The top left panel has unnused white space (but that leaves # space for the annotations in the bottom left)._____no_output_____ </code> ## Plotting GD-1 Let's start with the panel in the lower left. You can [download the data from the previous lesson](https://github.com/AllenDowney/AstronomicalData/raw/main/data/gd1_data.hdf) or run the following cell, which downloads it if necessary._____no_output_____ <code> from os.path import basename, exists def download(url): filename = basename(url) if not exists(filename): from urllib.request import urlretrieve local, _ = urlretrieve(url, filename) print('Downloaded ' + local) download('https://github.com/AllenDowney/AstronomicalData/raw/main/' + 'data/gd1_data.hdf')_____no_output_____ </code> Now we can reload `winner_df`_____no_output_____ <code> import pandas as pd filename = 'gd1_data.hdf' winner_df = pd.read_hdf(filename, 'winner_df')_____no_output_____import matplotlib.pyplot as plt def plot_second_selection(df): x = df['phi1'] y = df['phi2'] plt.plot(x, y, 'ko', markersize=0.7, alpha=0.9) plt.xlabel('$\phi_1$ [deg]') plt.ylabel('$\phi_2$ [deg]') plt.title('Proper motion + photometry selection', fontsize='medium') plt.axis('equal')_____no_output_____ </code> And here's what it looks like._____no_output_____ <code> plt.figure(figsize=(10,2.5)) plot_second_selection(winner_df)_____no_output_____ </code> ## Annotations The figure in the paper uses three other features to present the results more clearly and compellingly: * A vertical dashed line to distinguish the previously undetected region of GD-1, * A label that identifies the new region, and * Several annotations that combine text and arrows to identify features of GD-1._____no_output_____### Exercise Choose any or all of these features and add them to the figure: * To draw vertical lines, see [`plt.vlines`](https://matplotlib.org/3.3.1/api/_as_gen/matplotlib.pyplot.vlines.html) and [`plt.axvline`](https://matplotlib.org/3.3.1/api/_as_gen/matplotlib.pyplot.axvline.html#matplotlib.pyplot.axvline). * To add text, see [`plt.text`](https://matplotlib.org/3.3.1/api/_as_gen/matplotlib.pyplot.text.html). * To add an annotation with text and an arrow, see [plt.annotate](). And here is some [additional information about text and arrows](https://matplotlib.org/3.3.1/tutorials/text/annotations.html#plotting-guide-annotation)._____no_output_____ <code> # Solution # plt.axvline(-55, ls='--', color='gray', # alpha=0.4, dashes=(6,4), lw=2) # plt.text(-60, 5.5, 'Previously\nundetected', # fontsize='small', ha='right', va='top'); # arrowprops=dict(color='gray', shrink=0.05, width=1.5, # headwidth=6, headlength=8, alpha=0.4) # plt.annotate('Spur', xy=(-33, 2), xytext=(-35, 5.5), # arrowprops=arrowprops, # fontsize='small') # plt.annotate('Gap', xy=(-22, -1), xytext=(-25, -5.5), # arrowprops=arrowprops, # fontsize='small')_____no_output_____ </code> ## Customization Matplotlib provides a default style that determines things like the colors of lines, the placement of labels and ticks on the axes, and many other properties. 
There are several ways to override these defaults and customize your figures: * To customize only the current figure, you can call functions like `tick_params`, which we'll demonstrate below. * To customize all figures in a notebook, you use `rcParams`. * To override more than a few defaults at the same time, you can use a style sheet._____no_output_____As a simple example, notice that Matplotlib puts ticks on the outside of the figures by default, and only on the left and bottom sides of the axes. To change this behavior, you can use `gca()` to get the current axes and `tick_params` to change the settings. Here's how you can put the ticks on the inside of the figure: ``` plt.gca().tick_params(direction='in') ```_____no_output_____### Exercise Read the documentation of [`tick_params`](https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.axes.Axes.tick_params.html) and use it to put ticks on the top and right sides of the axes._____no_output_____ <code> # Solution # plt.gca().tick_params(top=True, right=True)_____no_output_____ </code> ## rcParams If you want to make a customization that applies to all figures in a notebook, you can use `rcParams`. Here's an example that reads the current font size from `rcParams`:_____no_output_____ <code> plt.rcParams['font.size']_____no_output_____ </code> And sets it to a new value:_____no_output_____ <code> plt.rcParams['font.size'] = 14_____no_output_____ </code> As an exercise, plot the previous figure again, and see what font sizes have changed. Look up any other element of `rcParams`, change its value, and check the effect on the figure._____no_output_____If you find yourself making the same customizations in several notebooks, you can put changes to `rcParams` in a `matplotlibrc` file, [which you can read about here](https://matplotlib.org/3.3.1/tutorials/introductory/customizing.html#customizing-with-matplotlibrc-files)._____no_output_____## Style sheets The `matplotlibrc` file is read when you import Matplotlib, so it is not easy to switch from one set of options to another. The solution to this problem is style sheets, [which you can read about here](https://matplotlib.org/3.1.1/tutorials/introductory/customizing.html). Matplotlib provides a set of predefined style sheets, or you can make your own. The following cell displays a list of style sheets installed on your system._____no_output_____ <code> plt.style.available_____no_output_____ </code> Note that `seaborn-paper`, `seaborn-talk` and `seaborn-poster` are particularly intended to prepare versions of a figure with text sizes and other features that work well in papers, talks, and posters. To use any of these style sheets, run `plt.style.use` like this: ``` plt.style.use('fivethirtyeight') ```_____no_output_____The style sheet you choose will affect the appearance of all figures you plot after calling `use`, unless you override any of the options or call `use` again._____no_output_____As an exercise, choose one of the styles on the list and select it by calling `use`. Then go back and plot one of the figures above and see what effect it has._____no_output_____If you can't find a style sheet that's exactly what you want, you can make your own. This repository includes a style sheet called `az-paper-twocol.mplstyle`, with customizations chosen by Azalee Bostroem for publication in astronomy journals. 
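A style sheet is just a plain-text file of `rcParams`-style `key : value` settings. As a purely illustrative sketch (the values below are made up, and this is not the file used next), you could create and apply a minimal one like this:

```
# Hypothetical example: write a tiny style sheet and point Matplotlib at it.
sketch = """\
figure.figsize : 6, 4
font.size      : 9
axes.grid      : True
"""
with open('my-sketch.mplstyle', 'w') as f:
    f.write(sketch)
# plt.style.use('./my-sketch.mplstyle') would then apply these settings.
```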
You can [download the style sheet](https://github.com/AllenDowney/AstronomicalData/raw/main/az-paper-twocol.mplstyle) or run the following cell, which downloads it if necessary._____no_output_____ <code> download('https://github.com/AllenDowney/AstronomicalData/raw/main/' + 'az-paper-twocol.mplstyle')_____no_output_____ </code> You can use it like this: ``` plt.style.use('./az-paper-twocol.mplstyle') ``` The prefix `./` tells Matplotlib to look for the file in the current directory._____no_output_____As an alternative, you can install a style sheet for your own use by putting it in your configuration directory. To find out where that is, you can run the following command: ``` import matplotlib as mpl mpl.get_configdir() ```_____no_output_____## LaTeX fonts When you include mathematical expressions in titles, labels, and annotations, Matplotlib uses [`mathtext`](https://matplotlib.org/3.1.0/tutorials/text/mathtext.html) to typeset them. `mathtext` uses the same syntax as LaTeX, but it provides only a subset of its features. If you need features that are not provided by `mathtext`, or you prefer the way LaTeX typesets mathematical expressions, you can customize Matplotlib to use LaTeX. In `matplotlibrc` or in a style sheet, you can add the following line: ``` text.usetex : true ``` Or in a notebook you can run the following code. ``` plt.rcParams['text.usetex'] = True ```_____no_output_____ <code> plt.rcParams['text.usetex'] = True_____no_output_____ </code> If you go back and draw the figure again, you should see the difference. If you get an error message like ``` LaTeX Error: File `type1cm.sty' not found. ``` You might have to install a package that contains the fonts LaTeX needs. On some systems, the packages `texlive-latex-extra` or `cm-super` might be what you need. [See here for more help with this](https://stackoverflow.com/questions/11354149/python-unable-to-render-tex-in-matplotlib). In case you are curious, `cm` stands for [Computer Modern](https://en.wikipedia.org/wiki/Computer_Modern), the font LaTeX uses to typeset math. Before we go on, let's put things back where we found them._____no_output_____ <code> plt.rcParams['text.usetex'] = False plt.style.use('default')_____no_output_____ </code> ## Multiple panels So far we've been working with one figure at a time, but the figure we are replicating contains multiple panels, also known as "subplots". Confusingly, Matplotlib provides *three* functions for making figures like this: `subplot`, `subplots`, and `subplot2grid`. * [`subplot`](https://matplotlib.org/3.3.1/api/_as_gen/matplotlib.pyplot.subplot.html) is simple and similar to MATLAB, so if you are familiar with that interface, you might like `subplot` * [`subplots`](https://matplotlib.org/3.3.1/api/_as_gen/matplotlib.pyplot.subplots.html) is more object-oriented, which some people prefer. * [`subplot2grid`](https://matplotlib.org/3.3.1/api/_as_gen/matplotlib.pyplot.subplot2grid.html) is most convenient if you want to control the relative sizes of the subplots. So we'll use `subplot2grid`. 
All of these functions are easier to use if we put the code that generates each panel in a function._____no_output_____## Upper right To make the panel in the upper right, we have to reload `centerline_df`._____no_output_____ <code> filename = 'gd1_data.hdf' centerline_df = pd.read_hdf(filename, 'centerline_df')_____no_output_____ </code> And define the coordinates of the rectangle we selected._____no_output_____ <code> pm1_min = -8.9 pm1_max = -6.9 pm2_min = -2.2 pm2_max = 1.0 pm1_rect = [pm1_min, pm1_min, pm1_max, pm1_max] pm2_rect = [pm2_min, pm2_max, pm2_max, pm2_min]_____no_output_____ </code> To plot this rectangle, we'll use a feature we have not seen before: `Polygon`, which is provided by Matplotlib. To create a `Polygon`, we have to put the coordinates in an array with `x` values in the first column and `y` values in the second column. _____no_output_____ <code> import numpy as np vertices = np.transpose([pm1_rect, pm2_rect]) vertices_____no_output_____ </code> The following function takes a `DataFrame` as a parameter, plots the proper motion for each star, and adds a shaded `Polygon` to show the region we selected._____no_output_____ <code> from matplotlib.patches import Polygon def plot_proper_motion(df): pm1 = df['pm_phi1'] pm2 = df['pm_phi2'] plt.plot(pm1, pm2, 'ko', markersize=0.3, alpha=0.3) poly = Polygon(vertices, closed=True, facecolor='C1', alpha=0.4) plt.gca().add_patch(poly) plt.xlabel('$\mu_{\phi_1} [\mathrm{mas~yr}^{-1}]$') plt.ylabel('$\mu_{\phi_2} [\mathrm{mas~yr}^{-1}]$') plt.xlim(-12, 8) plt.ylim(-10, 10)_____no_output_____ </code> Notice that `add_patch` is like `invert_yaxis`; in order to call it, we have to use `gca` to get the current axes. Here's what the new version of the figure looks like. We've changed the labels on the axes to be consistent with the paper._____no_output_____ <code> plot_proper_motion(centerline_df)_____no_output_____ </code> ## Upper left Now let's work on the panel in the upper left. We have to reload `candidates`._____no_output_____ <code> filename = 'gd1_data.hdf' candidate_df = pd.read_hdf(filename, 'candidate_df')_____no_output_____ </code> Here's a function that takes a `DataFrame` of candidate stars and plots their positions in GD-1 coordindates. _____no_output_____ <code> def plot_first_selection(df): x = df['phi1'] y = df['phi2'] plt.plot(x, y, 'ko', markersize=0.3, alpha=0.3) plt.xlabel('$\phi_1$ [deg]') plt.ylabel('$\phi_2$ [deg]') plt.title('Proper motion selection', fontsize='medium') plt.axis('equal')_____no_output_____ </code> And here's what it looks like._____no_output_____ <code> plot_first_selection(candidate_df)_____no_output_____ </code> ## Lower right For the figure in the lower right, we'll use this function to plots the color-magnitude diagram._____no_output_____ <code> import matplotlib.pyplot as plt def plot_cmd(table): """Plot a color magnitude diagram. 
table: Table or DataFrame with photometry data """ y = table['g_mean_psf_mag'] x = table['g_mean_psf_mag'] - table['i_mean_psf_mag'] plt.plot(x, y, 'ko', markersize=0.3, alpha=0.3) plt.xlim([0, 1.5]) plt.ylim([14, 22]) plt.gca().invert_yaxis() plt.ylabel('$Magnitude (g)$') plt.xlabel('$Color (g-i)$')_____no_output_____ </code> Here's what it looks like._____no_output_____ <code> plot_cmd(candidate_df)_____no_output_____ </code> And here's how we read it back._____no_output_____ <code> filename = 'gd1_data.hdf' loop_df = pd.read_hdf(filename, 'loop_df') loop_df.head()_____no_output_____ </code> ### Exercise Add a few lines to `plot_cmd` to show the polygon we selected as a shaded area. Hint: pass `coords` as an argument to `Polygon` and plot it using `add_patch`._____no_output_____ <code> # Solution # poly = Polygon(loop_df, closed=True, # facecolor='C1', alpha=0.4) # plt.gca().add_patch(poly)_____no_output_____ </code> ## Subplots Now we're ready to put it all together. To make a figure with four subplots, we'll use `subplot2grid`, [which requires two arguments](https://matplotlib.org/3.3.1/api/_as_gen/matplotlib.pyplot.subplot2grid.html): * `shape`, which is a tuple with the number of rows and columns in the grid, and * `loc`, which is a tuple identifying the location in the grid we're about to fill. In this example, `shape` is `(2, 2)` to create two rows and two columns. For the first panel, `loc` is `(0, 0)`, which indicates row 0 and column 0, which is the upper-left panel. Here's how we use it to draw the four panels._____no_output_____ <code> shape = (2, 2) plt.subplot2grid(shape, (0, 0)) plot_first_selection(candidate_df) plt.subplot2grid(shape, (0, 1)) plot_proper_motion(centerline_df) plt.subplot2grid(shape, (1, 0)) plot_second_selection(winner_df) plt.subplot2grid(shape, (1, 1)) plot_cmd(candidate_df) poly = Polygon(loop_df, closed=True, facecolor='C1', alpha=0.4) plt.gca().add_patch(poly) plt.tight_layout()_____no_output_____ </code> We use [`plt.tight_layout`](https://matplotlib.org/3.3.1/tutorials/intermediate/tight_layout_guide.html) at the end, which adjusts the sizes of the panels to make sure the titles and axis labels don't overlap. As an exercise, see what happens if you leave out `tight_layout`._____no_output_____## Adjusting proportions In the previous figure, the panels are all the same size. To get a better view of GD-1, we'd like to stretch the panels on the left and compress the ones on the right. To do that, we'll use the `colspan` argument to make a panel that spans multiple columns in the grid. In the following example, `shape` is `(2, 4)`, which means 2 rows and 4 columns. The panels on the left span three columns, so they are three times wider than the panels on the right. At the same time, we use `figsize` to adjust the aspect ratio of the whole figure._____no_output_____ <code> plt.figure(figsize=(9, 4.5)) shape = (2, 4) plt.subplot2grid(shape, (0, 0), colspan=3) plot_first_selection(candidate_df) plt.subplot2grid(shape, (0, 3)) plot_proper_motion(centerline_df) plt.subplot2grid(shape, (1, 0), colspan=3) plot_second_selection(winner_df) plt.subplot2grid(shape, (1, 3)) plot_cmd(candidate_df) poly = Polygon(loop_df, closed=True, facecolor='C1', alpha=0.4) plt.gca().add_patch(poly) plt.tight_layout()_____no_output_____ </code> This is looking more and more like the figure in the paper._____no_output_____### Exercise In this example, the ratio of the widths of the panels is 3:1. 
How would you adjust it if you wanted the ratio to be 3:2?_____no_output_____ <code> # Solution # plt.figure(figsize=(9, 4.5)) # shape = (2, 5) # CHANGED # plt.subplot2grid(shape, (0, 0), colspan=3) # plot_first_selection(candidate_df) # plt.subplot2grid(shape, (0, 3), colspan=2) # CHANGED # plot_proper_motion(centerline_df) # plt.subplot2grid(shape, (1, 0), colspan=3) # plot_second_selection(winner_df) # plt.subplot2grid(shape, (1, 3), colspan=2) # CHANGED # plot_cmd(candidate_df) # poly = Polygon(coords, closed=True, # facecolor='C1', alpha=0.4) # plt.gca().add_patch(poly) # plt.tight_layout()_____no_output_____ </code> ## Summary In this notebook, we reverse-engineered the figure we've been replicating, identifying elements that seem effective and others that could be improved. We explored features Matplotlib provides for adding annotations to figures -- including text, lines, arrows, and polygons -- and several ways to customize the appearance of figures. And we learned how to create figures that contain multiple panels._____no_output_____## Best practices * The most effective figures focus on telling a single story clearly and compellingly. * Consider using annotations to guide the reader's attention to the most important elements of a figure. * The default Matplotlib style generates good quality figures, but there are several ways you can override the defaults. * If you find yourself making the same customizations on several projects, you might want to create your own style sheet._____no_output_____
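To make the last two best practices concrete, here is a small self-contained sketch, added for illustration only: it uses synthetic data rather than the GD-1 catalogs, applies one of the predefined style sheets, lays out unequal panels with `subplot2grid` and `colspan`, and adds an annotation.

```
# Synthetic data only -- an illustration, not the GD-1 analysis.
import numpy as np
import matplotlib.pyplot as plt

plt.style.use('seaborn-paper')   # a predefined style; the exact name can vary by Matplotlib version

rng = np.random.default_rng(0)
x, y = rng.normal(size=(2, 500))

plt.figure(figsize=(9, 3))
shape = (1, 4)
ax1 = plt.subplot2grid(shape, (0, 0), colspan=3)   # wide panel, 3/4 of the width
ax2 = plt.subplot2grid(shape, (0, 3))              # narrow panel, 1/4 of the width

ax1.plot(x, y, 'ko', markersize=2, alpha=0.5)
ax1.annotate('Feature', xy=(0, 0), xytext=(1.5, 2),
             arrowprops=dict(color='gray', shrink=0.05, width=1.5, headwidth=6))
ax1.set_xlabel('x')
ax1.set_ylabel('y')

ax2.hist(x, bins=30, color='gray')
ax2.set_xlabel('x')

plt.tight_layout()
```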
{ "repository": "bmg-pcl/astronomy-python", "path": "_extras/notebooks/07-plot.ipynb", "matched_keywords": [ "STAR" ], "stars": 39, "size": 738216, "hexsha": "48862199c1a2cc4f46e020730d37acbe945bd0f7", "max_line_length": 204684, "avg_line_length": 578.9929411765, "alphanum_fraction": 0.945896594 }
# Notebook from perseu912/insta_bot
Path: qbits/Untitled.ipynb
<code>
import qutip
from qutip import Bloch as b
import matplotlib.pyplot as plt
import numpy as np_____no_output_____from mpmath import limit
from mpmath import *
import numpy as np
#mp.dps = 20_____no_output_____ </code>
### q_exp
$$ exp_{q}(u) = \lim_{a \to q}{(1+u(1-a))^{\frac{1}{1-a}}} $$
$$$$
From this, we see that
$$ exp_{1}(u) = \lim_{a \to 1}{(1+u(1-a))^{\frac{1}{1-a}}} $$
Setting $ n = {\frac{1}{1-a}} $, we see that $(1-a) = \frac{1}{n}$ and that $n \to \infty$ under the $ \lim_{a \to 1}{\frac{1}{1-a}}$ of the equation above, so we can verify that
$$ exp_{1}(u) = \lim_{n \to \infty}{{ \left(1+\frac{u}{n} \right)}^{n}}$$
$$$$
### q_ln
$$ ln_{q}(u) = \lim_{a \to q}{\left(\frac{u^{1-a}-1}{1-a}\right)} $$
which can be rewritten as
$$ \lim_{a \to q}{(u^{1-a}-1)\left(\frac{1}{1-a}\right)} $$
Setting $ h = {1-a} $, we see that $h \to 0$ under the $ \lim_{a \to 1}{{1-a}}$ of the equation above. Then we can verify that
$$ln_{1}(u) = \lim_{h \to 0}\frac{u^h-1}{h}$$_____no_output_____ <code>
np.double(2)_____no_output_____from sympy import *_____no_output_____x = symbols('x')
f = sin(x)/x
limit(f,x,2)_____no_output_____from yaraah.tools import puts_____no_output_____# converts an mpmath imaginary value to a real value
def real_mp(number):
    return number.real()
float(mpf('1').real)

############# base function: q-exponential ###############
def __q_exp__(u:float,q:float) -> np.double :
    __q__ = lambda _q_: 1/(1-_q_)
    param_q = limit(__q__,q)

    A = lambda w: 1/(w-1) # limit parameter of A
    param_A = np.double(limit(A,q).real)
    if(q > 1.9 and u >= param_A): return np.nan

    # base function: q-exponential
    __e__ = lambda q_: np.power((1+u*(1-q_)),param_q)
    #try:
    return np.double(limit(__e__,q).real) if((1+u*(1-q))>=0) else 0
    #except(ZeroDivisionError):

'''
def q_exp(t,q=1):
    if(type(t) == np.ndarray or type(t) == list):
        res = []
        for i in t:
            res.append(__q_exp__(i,q))
        print(res)
        return np.array(res)
    else:
        return __q_exp__(t,q)
'''
q_exp = __q_exp_______no_output_____t = np.linspace(-2,2,100)
for i in t:
    print(q_exp(i,q=1))
type(t) == np.ndarray6.436129707112741e-29 1.9590277634216463e-28 6.007447737798518e-28 1.856172069744592e-27 5.7792556639469214e-27 1.8134232058952692e-26 5.735216668480315e-26 1.8284181601482136e-25 5.876651120692571e-25 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 plt.plot(y,t)_____no_output_____b = b()
b.show()
np.double(2222)_____no_output_____from qutip import *
import numpy as np
import matplotlib.pyplot as plt

n=2#Atomic number
j = n//2
psi0 = spin_coherent(j, np.pi/3, 0)#Set the initial state of the system to spin coherent state
Jp=destroy(2*j+1).dag()#Up Operator
J_=destroy(2*j+1)#Descending operator
Jz=(Jp*J_-J_*Jp)/2#Jz
H=Jz**2#System's Hamiltonian
tlist=np.linspace(0,3,100)#Time list
result=mesolve(H,psi0,tlist)#State evolution over time

theta=np.linspace(0, np.pi, 50)
phi=np.linspace(0, 2*np.pi, 50)
#Calculate the husimi q function in the four states separately
Q1, THETA1, PHI1 = spin_q_function(result.states[0], theta, phi)
Q2, THETA2, PHI2 = spin_q_function(result.states[30], theta, phi)
Q3, THETA3, PHI3 = spin_q_function(result.states[60], theta, phi)
Q4, THETA4, PHI4 = spin_q_function(result.states[90], theta, phi)

#Draw the husimi q function in the four states in the four subgraphs
fig = plt.figure(dpi=150,constrained_layout=1)
ax1 = fig.add_subplot(221,projection='3d')
ax2 = fig.add_subplot(222,projection='3d')
ax3 = fig.add_subplot(223,projection='3d')
ax4 = fig.add_subplot(224,projection='3d')
plot_spin_distribution_3d(Q1, THETA1, PHI1,fig=fig,ax=ax1) plot_spin_distribution_3d(Q2, THETA2, PHI2,fig=fig,ax=ax2) plot_spin_distribution_3d(Q3, THETA3, PHI3,fig=fig,ax=ax3) plot_spin_distribution_3d(Q4, THETA4, PHI4,fig=fig,ax=ax4) for ax in [ax1,ax2,ax3,ax4]: ax.view_init(0.5*np.pi, 0) ax.axis('off')#Do not display the axis fig.show() <ipython-input-7-319edc4e50b1>:43: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure. fig.show() from qutip import * import numpy as np import matplotlib.pyplot as plt alpha=1#Coherent light parameter alpha n=2#Atomic number j = n/2 psi0 = tensor(coherent(10,alpha),spin_coherent(j, 0, 0))#Set the initial state of the system a=destroy(10)#Light field annihilation operator a_plus=a.dag()#Light field generation operator Jp=destroy(n+1).dag()#Atomic ascending operator J_=destroy(n+1)#Atomic descending operator Jx=(Jp+J_)/2#Atomic Jx operator Jy=(Jp-J_)/(2j)#Atomic Jy operator, where j is the imaginary unit Jz=(Jp*J_-J_*Jp)/2#Atomic Jz operator H=tensor(a,Jp)+tensor(a_plus,J_)#System's Hamiltonian tlist=np.linspace(0,10,1000)#Time list result=mesolve(H,psi0,tlist)#State evolution over time fig=plt.figure() ax1 = fig.add_subplot(221) ax2 = fig.add_subplot(222) ax3 = fig.add_subplot(223) ax4 = fig.add_subplot(224) ax1.plot(tlist,expect(tensor(qeye(10),Jx),result.states))#Jx average value over time graph ax2.plot(tlist,expect(tensor(qeye(10),Jy),result.states))#Jy's average over time graph ax3.plot(tlist,expect(tensor(qeye(10),Jz),result.states))#Jz average change over time graph ax4.plot(tlist,expect(tensor(qeye(10),Jx**2+Jy**2+Jz*2),result.states))#J squared average over time graph fig.subplots_adjust(top=None,bottom=None,left=None,right=None,wspace=0.4,hspace=0.4)#Set the sub-picture spacing fig.show() <ipython-input-8-3af7aac6cfd3>:35: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure. fig.show() </code>
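Looping back to the q-exponential limit worked out at the top of this notebook, here is a quick numerical check, added for illustration and not part of the original notebook, that the ordinary exponential is recovered as q → 1, using a direct evaluation rather than the mpmath `limit` call:

```
# Numerical check: exp_q(u) -> e**u as q -> 1.
import numpy as np

def q_exp_numeric(u, q):
    """Direct evaluation of (1 + u*(1-q))**(1/(1-q)) for q != 1."""
    base = 1.0 + u * (1.0 - q)
    return np.power(base, 1.0 / (1.0 - q)) if base > 0 else 0.0

u = 0.7
for q in (0.9, 0.99, 0.999):
    print(q, q_exp_numeric(u, q), np.exp(u))   # approaches e**0.7 ~ 2.0138 as q -> 1
```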
{ "repository": "perseu912/insta_bot", "path": "qbits/Untitled.ipynb", "matched_keywords": [ "evolution" ], "stars": null, "size": 290957, "hexsha": "488768a6178dd5f766bd9ebb7dd7872c9ed5c95c", "max_line_length": 154664, "avg_line_length": 636.6673960613, "alphanum_fraction": 0.9385991744 }
# Notebook from ShepherdCode/Soars2021 Path: Notebooks/GenCode_Explore_209.ipynb # GenCode Explore Explore the human RNA sequences from GenCode. Assume user downloaded files from GenCode 38 [FTP](http://ftp.ebi.ac.uk/pub/databases/gencode/Gencode_human/release_38/) to a subdirectory called data. Improve on GenCode_Explore_101.ipynb Use ORF_counter. Use MatPlotLib to make box plots and heat maps._____no_output_____ <code> import time def show_time(): t = time.time() s = time.strftime('%Y-%m-%d %H:%M:%S %Z', time.localtime(t)) print(s) show_time()2021-06-11 15:25:30 UTC import numpy as np import pandas as pd import gzip import sys try: from google.colab import drive IN_COLAB = True print("On Google CoLab, mount cloud-local file, get our code from GitHub.") PATH='/content/drive/' #drive.mount(PATH,force_remount=True) # hardly ever need this drive.mount(PATH) # Google will require login credentials DATAPATH=PATH+'My Drive/data/' # must end in "/" import requests s = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_describe.py') with open('RNA_describe.py', 'w') as f: f.write(s.text) # writes to cloud local, delete the file later? s = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/GenCodeTools.py') with open ('GenCodeTools.py', 'w') as f: f.write(s.text) s = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/plot_generator.py') with open('plot_generator.py', 'w') as f: f.write(s.text) s = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_gen.py') with open('RNA_gen.py', 'w') as f: f.write(s.text) from RNA_describe import * from GenCodeTools import * from plot_generator import * from RNA_gen import * except: print("CoLab not working. On my PC, use relative paths.") IN_COLAB = False DATAPATH='../data/' # must end in "/" sys.path.append("..") # append parent dir in order to use sibling dirs from SimTools.RNA_describe import * from SimTools.GenCodeTools import * from SimTools.plot_generator import * from SimTools.RNA_gen import * MODELPATH="BestModel" # saved on cloud instance and lost after logout #MODELPATH=DATAPATH+MODELPATH # saved on Google Drive but requires login if not assert_imported_RNA_describe(): print("ERROR: Cannot use RNA_describe.")On Google CoLab, mount cloud-local file, get our code from GitHub. Drive already mounted at /content/drive/; to attempt to forcibly remount, call drive.mount("/content/drive/", force_remount=True). PC_FILENAME='gencode.v38.pc_transcripts.fa.gz' NC_FILENAME='gencode.v38.lncRNA_transcripts.fa.gz'_____no_output_____ </code> ## Load the GenCode data. Warning: GenCode has over 100K protein-coding RNA (mRNA) and almost 50K non-coding RNA (lncRNA)._____no_output_____ <code> # Full GenCode ver 38 human is 106143 pc + 48752 nc and loads in 7 sec. # Expect fewer transcripts if special filtering is used. 
PC_FULLPATH=DATAPATH+PC_FILENAME NC_FULLPATH=DATAPATH+NC_FILENAME loader=GenCodeLoader() show_time() loader.set_label(1) loader.set_check_list(None) loader.set_check_utr(True) pcdf=loader.load_file(PC_FULLPATH) print("PC seqs loaded:",len(pcdf)) show_time() loader.set_label(0) loader.set_check_list(None) loader.set_check_utr(False) ncdf=loader.load_file(NC_FULLPATH) print("NC seqs loaded:",len(ncdf)) show_time()2021-06-11 15:25:30 UTC PC seqs loaded: 70825 2021-06-11 15:25:35 UTC NC seqs loaded: 48752 2021-06-11 15:25:36 UTC print("Sorting PC...") pcdf.sort_values('seqlen', ascending=True, inplace=True) print("Sorting NC...") ncdf.sort_values('seqlen', ascending=True, inplace=True) show_time()Sorting PC... Sorting NC... 2021-06-11 15:25:36 UTC # This is a fast way to slice if you have length thresholds. # TO DO: choose length thresholds and apply to PC and NC RNA. # For example: 200, 400, 800, 1600, 3200, 6400 (e.g. 200-399, etc.) #mask = (ncdf['sequence'].str.len() < 1000) #subset = ncdf.loc[mask]_____no_output_____ </code> ###Bin sequences by length ---_____no_output_____ <code> def subset_list_by_len_bounds(input_list, min_len, max_len): return list(filter(lambda x: len(x) > min_len and len(x) < max_len, input_list))_____no_output_____bins = [(200, 400), (400, 800), (800, 1600), (1600, 3200), (3200, 6400), (6400, 12800), (12800, 25600), (25600, 51200)] NUM_BINS = len(bins)_____no_output_____ </code> Generate simulated non-random codon selection based RNA sequences_____no_output_____ <code> #TODO: improve simulator = Collection_Generator() #simulator.set_reproducible(True) #Not sure if good to do or bad in this case sim_sequences = [] for bin in bins: seq_len = (bin[0] + bin[1]) // 2 #Fits the sequences to the bins which could be misleading simulator.get_len_oracle().set_mean(seq_len) simulator.get_seq_oracle().set_sequences(['ATG','CCC','TAG']) simulator.get_seq_oracle().set_frequencies([1,100,1]) seq_cnt = (len(pcdf) + len(ncdf)) // 2 // NUM_BINS #Guarantees same number of sequences for each bin seq_set = simulator.get_sequences(seq_cnt) for seq in seq_set: sim_sequences.append(seq) show_time()2021-06-11 15:29:07 UTC #Bin the RNA sequences binned_pc_sequences = [] binned_nc_sequences = [] binned_sim_sequences = [] for i in range(0, NUM_BINS): bin = bins[i] binned_pc_sequences.append([]) binned_nc_sequences.append([]) binned_sim_sequences.append([]) binned_pc_sequences[i] = subset_list_by_len_bounds(pcdf['sequence'].tolist(), bin[0], bin[1]) binned_nc_sequences[i] = subset_list_by_len_bounds(ncdf['sequence'].tolist(), bin[0], bin[1]) binned_sim_sequences[i] = subset_list_by_len_bounds(sim_sequences, bin[0], bin[1]) show_time()2021-06-11 15:29:08 UTC </code> ##Gather data on ORF lengths and the number of contained and non-contained ORFs ---_____no_output_____ <code> #TODO: properly implement numpy data structures pc_max_len_data = np.empty(NUM_BINS, dtype=object) pc_max_cnt_data = np.empty(NUM_BINS, dtype=object) pc_contain_data = np.empty(NUM_BINS, dtype=object) nc_max_len_data = np.empty(NUM_BINS, dtype=object) nc_max_cnt_data = np.empty(NUM_BINS, dtype=object) nc_contain_data = np.empty(NUM_BINS, dtype=object) sim_max_len_data = np.empty(NUM_BINS, dtype=object) sim_max_cnt_data = np.empty(NUM_BINS, dtype=object) sim_contain_data = np.empty(NUM_BINS, dtype=object) oc = ORF_counter() for bin in range(0, NUM_BINS): pc_max_len_data[bin] = np.zeros(len(binned_pc_sequences[bin])) pc_max_cnt_data[bin] = np.zeros(len(binned_pc_sequences[bin])) pc_contain_data[bin] = 
np.zeros(len(binned_pc_sequences[bin])) nc_max_len_data[bin] = np.zeros(len(binned_nc_sequences[bin])) nc_max_cnt_data[bin] = np.zeros(len(binned_nc_sequences[bin])) nc_contain_data[bin] = np.zeros(len(binned_nc_sequences[bin])) sim_max_len_data[bin] = np.zeros(len(binned_sim_sequences[bin])) sim_max_cnt_data[bin] = np.zeros(len(binned_sim_sequences[bin])) sim_contain_data[bin] = np.zeros(len(binned_sim_sequences[bin])) #Gather protein-coding sequence data for seq in range(0, len(binned_pc_sequences[bin])): oc.set_sequence(binned_pc_sequences[bin][seq]) pc_max_len_data[bin][seq] = oc.get_max_orf_len() pc_max_cnt_data[bin][seq] = oc.count_maximal_orfs() pc_contain_data[bin][seq] = oc.count_contained_orfs() #Gather non-coding sequence data for seq in range(0, len(binned_nc_sequences[bin])): oc.set_sequence(binned_nc_sequences[bin][seq]) nc_max_len_data[bin][seq] = oc.get_max_orf_len() nc_max_cnt_data[bin][seq] = oc.count_maximal_orfs() nc_contain_data[bin][seq] = oc.count_contained_orfs() #Gather simulated sequence data for seq in range(0, len(binned_sim_sequences[bin])): oc.set_sequence(binned_sim_sequences[bin][seq]) sim_max_len_data[bin][seq] = oc.get_max_orf_len() sim_max_cnt_data[bin][seq] = oc.count_maximal_orfs() sim_contain_data[bin][seq] = oc.count_contained_orfs() show_time()2021-06-11 15:31:11 UTC </code> ##Prepare data for heatmap ---_____no_output_____ <code> def mean(data): if len(data) == 0: return 0 return sum(data) / len(data)_____no_output_____#Get the means of all of the data mean_pc_max_len_data = np.zeros(NUM_BINS) mean_pc_max_cnt_data = np.zeros(NUM_BINS) mean_pc_contain_data = np.zeros(NUM_BINS) mean_nc_max_len_data = np.zeros(NUM_BINS) mean_nc_max_cnt_data = np.zeros(NUM_BINS) mean_nc_contain_data = np.zeros(NUM_BINS) mean_sim_max_len_data = np.zeros(NUM_BINS) mean_sim_max_cnt_data = np.zeros(NUM_BINS) mean_sim_contain_data = np.zeros(NUM_BINS) for i in range(0, NUM_BINS): mean_pc_max_len_data[i] = mean(pc_max_len_data[i]) mean_pc_max_cnt_data[i] = mean(pc_max_cnt_data[i]) mean_pc_contain_data[i] = mean(pc_contain_data[i]) mean_nc_max_len_data[i] = mean(nc_max_len_data[i]) mean_nc_max_cnt_data[i] = mean(nc_max_cnt_data[i]) mean_nc_contain_data[i] = mean(nc_contain_data[i]) mean_sim_max_len_data[i] = mean(sim_max_len_data[i]) mean_sim_max_cnt_data[i] = mean(sim_max_cnt_data[i]) mean_sim_contain_data[i] = mean(sim_contain_data[i]) show_time()2021-06-11 15:31:12 UTC </code> ###Prepare data for plot of bin sizes ---_____no_output_____ <code> pc_bin_sizes = np.zeros(NUM_BINS) nc_bin_sizes = np.zeros(NUM_BINS) sim_bin_sizes = np.zeros(NUM_BINS) for i in range(0, NUM_BINS): pc_bin_sizes[i] = len(binned_pc_sequences[i]) nc_bin_sizes[i] = len(binned_nc_sequences[i]) sim_bin_sizes[i] = len(binned_sim_sequences[i]) show_time()2021-06-11 15:31:12 UTC </code> ###Prepare data for plot of number of sequences with no ORFs and plot of number of sequences with max ORF lengths equal to or less than 100 ---_____no_output_____ <code> """ Counts the number of RNA sequences that fit the given constraints. Sequence range constraints are exclusive. Max ORF length constraints are inclusive. TODO: possibly optimize. 
""" def count_constraint_valid_sequences(data, min_seq_len, max_seq_len, min_max_orf_len, max_max_orf_len): count = 0 oc = ORF_counter() if isinstance(data, list): sequences = data else: #Therefore is pandas dataframe sequences = data['sequence'].tolist() for seq in sequences: if len(seq) > min_seq_len and len(seq) < max_seq_len: oc.set_sequence(seq) max_orf_len = oc.get_max_orf_len() if max_orf_len >= min_max_orf_len and max_orf_len <= max_max_orf_len: count += 1 return count _____no_output_____pc_no_orf_count = np.zeros(NUM_BINS) nc_no_orf_count = np.zeros(NUM_BINS) sim_no_orf_count = np.zeros(NUM_BINS) pc_max_orf_len_less_than_100 = np.zeros(NUM_BINS) nc_max_orf_len_less_than_100 = np.zeros(NUM_BINS) sim_max_orf_len_less_than_100 = np.zeros(NUM_BINS) for i in range(0, NUM_BINS): pc_no_orf_count[i] = count_constraint_valid_sequences(pcdf, bins[i][0], bins[i][1], 0, 0) nc_no_orf_count[i] = count_constraint_valid_sequences(ncdf, bins[i][0], bins[i][1], 0, 0) sim_no_orf_count[i] = count_constraint_valid_sequences(sim_sequences, bins[i][0], bins[i][1], 0, 0) pc_max_orf_len_less_than_100[i] = count_constraint_valid_sequences(pcdf, bins[i][0], bins[i][1], 0, 100) nc_max_orf_len_less_than_100[i] = count_constraint_valid_sequences(ncdf, bins[i][0], bins[i][1], 0, 100) sim_max_orf_len_less_than_100[i] = count_constraint_valid_sequences(sim_sequences, bins[i][0], bins[i][1], 0, 100) show_time()2021-06-11 15:35:21 UTC </code> ## Plot the data ---_____no_output_____ <code> #Generate x-axis labels x_axis_labels = [] for bin in bins: x_axis_labels.append(str(bin[0]) + "-" + str(bin[1])) data_set_names = ['mRNA', 'lncRNA', 'sim'] #Set up plot generator pg = PlotGenerator() pg.set_text_options(45, 'right', 0, 'center') #Bar plots pg.set_text('Number of Sequences per Sequence Length Range', 'Sequence Length Ranges', 'Number of Sequences', x_axis_labels, None) pg.bar_plot([pc_bin_sizes, nc_bin_sizes, sim_bin_sizes], data_set_names) pg.set_text('Number of Sequences without ORFs', 'Sequence Length Ranges', 'Number of Sequences', x_axis_labels, None) pg.bar_plot([pc_no_orf_count, nc_no_orf_count, sim_no_orf_count], data_set_names) pg.set_text('Number of Sequences of Max Length Equal to or Less than 100', 'Sequence Length Ranges', 'Number of Sequences', x_axis_labels, None) pg.bar_plot([pc_max_orf_len_less_than_100, nc_max_orf_len_less_than_100, sim_max_orf_len_less_than_100], data_set_names) #Box plots pg.set_axis_options('linear', 10, 'log', 2) pg.set_text('Length of Longest ORF in RNA Sequences', 'Sequence Length Ranges', 'ORF Length', x_axis_labels, None) pg.box_plot([pc_max_len_data, nc_max_len_data, sim_max_len_data], data_set_names, True) pg.set_text('Number of Non-contained ORFs in RNA Sequences', 'Sequence Length Ranges', 'Number of Non-contained ORFs', x_axis_labels, None) pg.box_plot([pc_max_cnt_data, nc_max_cnt_data, sim_max_cnt_data], data_set_names, True) pg.set_text('Number of Contained ORFs in RNA Sequences', 'Sequence Length Ranges', 'Number of Contained ORFs', x_axis_labels, None) pg.box_plot([pc_contain_data, nc_contain_data, sim_contain_data], data_set_names, True) #Heatmaps pg.set_axis_options('linear', 10, 'linear', 10) pg.set_text('mRNA Mean Longest ORF Length', 'Sequence Length Ranges', '', x_axis_labels, ['']) pg.heatmap([mean_pc_max_len_data]) pg.set_text('mRNA Mean Number of Non-contained ORFs', 'Sequence Length Ranges', '', x_axis_labels, ['']) pg.heatmap([mean_pc_max_cnt_data]) pg.set_text('mRNA Mean Number of Contained ORFs', 'Sequence Length Ranges', '', x_axis_labels, ['']) 
pg.heatmap([mean_pc_contain_data]) pg.set_text('lncRNA Mean Longest ORF Length', 'Sequence Length Ranges', '', x_axis_labels, ['']) pg.heatmap([mean_nc_max_len_data]) pg.set_text('lncRNA Mean Number of Non-contained ORFs', 'Sequence Length Ranges', '', x_axis_labels, ['']) pg.heatmap([mean_nc_max_cnt_data]) pg.set_text('lncRNA Mean Number of Contained ORFs', 'Sequence Length Ranges', '', x_axis_labels, ['']) pg.heatmap([mean_nc_contain_data]) pg.set_text('sim Mean Longest ORF Length', 'Sequence Length Ranges', '', x_axis_labels, ['']) pg.heatmap([mean_sim_max_len_data]) pg.set_text('sim Mean Number of Non-contained ORFs', 'Sequence Length Ranges', '', x_axis_labels, ['']) pg.heatmap([mean_sim_max_cnt_data]) pg.set_text('sim Mean Number of Contained ORFs', 'Sequence Length Ranges', '', x_axis_labels, ['']) pg.heatmap([mean_sim_contain_data])_____no_output_____ </code> ## Plotting examples [boxplot doc](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.boxplot.html) [boxplot demo](https://matplotlib.org/stable/gallery/pyplots/boxplot_demo_pyplot.html) [heatmap examples](https://stackoverflow.com/questions/33282368/plotting-a-2d-heatmap-with-matplotlib) - scroll down! _____no_output_____
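For reference, here is a minimal raw-matplotlib sketch of the two plot types linked above (a box plot and a heat map). It is purely illustrative: it uses synthetic exponential data rather than the pc/nc/sim ORF statistics computed earlier, and plain pyplot calls rather than this notebook's PlotGenerator helper.
<code>
# Minimal, self-contained sketch of a box plot and a heat map with plain matplotlib.
# Illustrative only: synthetic data, not the ORF statistics computed above,
# and raw pyplot calls instead of the PlotGenerator helper used in this notebook.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
groups = [rng.exponential(scale=s, size=200) for s in (50, 100, 200)]
labels = ['200-400', '400-800', '800-1600']

# Box plot: one box per length bin, log-2 y-axis as in the ORF-length plots above.
fig, ax = plt.subplots()
ax.boxplot(groups, labels=labels)
ax.set_yscale('log', base=2)
ax.set_xlabel('Sequence Length Ranges')
ax.set_ylabel('ORF Length')
ax.set_title('Box plot sketch (synthetic data)')
plt.show()

# Heat map: a single row of per-bin means, analogous to the mean heatmaps above.
means = np.array([[np.mean(g) for g in groups]])
fig, ax = plt.subplots()
im = ax.imshow(means, cmap='viridis', aspect='auto')
ax.set_xticks(range(len(labels)))
ax.set_xticklabels(labels)
ax.set_yticks([])
fig.colorbar(im, ax=ax)
ax.set_title('Heat map sketch (synthetic data)')
plt.show()
</code>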
{ "repository": "ShepherdCode/Soars2021", "path": "Notebooks/GenCode_Explore_209.ipynb", "matched_keywords": [ "RNA" ], "stars": 1, "size": 404972, "hexsha": "4887759b893ae07162c2b10ac900dc542106e497", "max_line_length": 51694, "avg_line_length": 433.1251336898, "alphanum_fraction": 0.9292197979 }
# Notebook from Yu-Group/adaptive-wavelets Path: notebooks/biology/init_nbs/08e_init_random_coif2.ipynb <code> %load_ext autoreload %autoreload 2 %matplotlib inline import os import random import numpy as np import torch import matplotlib.pyplot as plt opj = os.path.join import pickle as pkl from ex_biology import p device = 'cuda:0' if torch.cuda.is_available() else 'cpu' # adaptive-wavelets modules import awd from awd.mdata.biology import get_dataloader, load_pretrained_model from awd.utils import get_wavefun, get_1dfilts from awd.visualize import plot_1dfilts, plot_wavefun_____no_output_____ </code> # init fit_____no_output_____ <code> p.wave = 'coif2' p.J = 4 p.mode = 'zero' p.init_factor = 1 p.noise_factor = 0 p.const_factor = 0 p.num_epochs = 100 p.attr_methods = 'Saliency' lamWaveloss = 1 p.lamlSum = lamWaveloss p.lamhSum = lamWaveloss p.lamL2sum = lamWaveloss p.lamCMF = lamWaveloss p.lamConv = lamWaveloss p.lamL1wave = 0.01 p.lamL1attr = 0.0 p.target = 0_____no_output_____# load data and model train_loader, test_loader = get_dataloader(p.data_path, batch_size=p.batch_size, is_continuous=p.is_continuous) model = load_pretrained_model(p.model_path, device=device) # prepare model random.seed(p.seed) np.random.seed(p.seed) torch.manual_seed(p.seed) wt = awd.DWT1d(wave=p.wave, mode=p.mode, J=p.J, init_factor=p.init_factor, noise_factor=p.noise_factor, const_factor=p.const_factor).to(device) wt.train() # train params = list(wt.parameters()) optimizer = torch.optim.Adam(params, lr=p.lr) loss_f = awd.get_loss_f(lamlSum=p.lamlSum, lamhSum=p.lamhSum, lamL2norm=p.lamL2norm, lamCMF=p.lamCMF, lamConv=p.lamConv, lamL1wave=p.lamL1wave, lamL1attr=p.lamL1attr) trainer = awd.Trainer(model, wt, optimizer, loss_f, target=p.target, use_residuals=True, attr_methods=p.attr_methods, device=device, n_print=5)_____no_output_____# run trainer(train_loader, epochs=p.num_epochs)Starting Training Loop... 
Train Epoch: 0 [1044/2936 (97%)] Loss: 0.044454 ====> Epoch: 0 Average train loss: 0.0494 Train Epoch: 5 [1044/2936 (97%)] Loss: 0.039843 ====> Epoch: 5 Average train loss: 0.0443 Train Epoch: 10 [1044/2936 (97%)] Loss: 0.051937 ====> Epoch: 10 Average train loss: 0.0445 Train Epoch: 15 [1044/2936 (97%)] Loss: 0.040974 ====> Epoch: 15 Average train loss: 0.0442 Train Epoch: 20 [1044/2936 (97%)] Loss: 0.045932 ====> Epoch: 20 Average train loss: 0.0443 Train Epoch: 25 [1044/2936 (97%)] Loss: 0.047290 ====> Epoch: 25 Average train loss: 0.0443 Train Epoch: 30 [1044/2936 (97%)] Loss: 0.040873 ====> Epoch: 30 Average train loss: 0.0441 Train Epoch: 35 [1044/2936 (97%)] Loss: 0.047012 ====> Epoch: 35 Average train loss: 0.0442 Train Epoch: 40 [1044/2936 (97%)] Loss: 0.042061 ====> Epoch: 40 Average train loss: 0.0440 Train Epoch: 45 [1044/2936 (97%)] Loss: 0.042565 ====> Epoch: 45 Average train loss: 0.0441 Train Epoch: 50 [1044/2936 (97%)] Loss: 0.044483 ====> Epoch: 50 Average train loss: 0.0440 Train Epoch: 55 [1044/2936 (97%)] Loss: 0.047312 ====> Epoch: 55 Average train loss: 0.0445 Train Epoch: 60 [1044/2936 (97%)] Loss: 0.044428 ====> Epoch: 60 Average train loss: 0.0439 Train Epoch: 65 [1044/2936 (97%)] Loss: 0.046679 ====> Epoch: 65 Average train loss: 0.0439 Train Epoch: 70 [1044/2936 (97%)] Loss: 0.045574 ====> Epoch: 70 Average train loss: 0.0438 Train Epoch: 75 [1044/2936 (97%)] Loss: 0.039275 ====> Epoch: 75 Average train loss: 0.0437 Train Epoch: 80 [1044/2936 (97%)] Loss: 0.047735 ====> Epoch: 80 Average train loss: 0.0438 Train Epoch: 85 [1044/2936 (97%)] Loss: 0.046059 ====> Epoch: 85 Average train loss: 0.0437 Train Epoch: 90 [1044/2936 (97%)] Loss: 0.039651 ====> Epoch: 90 Average train loss: 0.0436 Train Epoch: 95 [1044/2936 (97%)] Loss: 0.041035 ====> Epoch: 95 Average train loss: 0.0436 plt.plot(np.log(trainer.train_losses)) plt.xlabel("epochs") plt.ylabel("log train loss") plt.title('Log-train loss vs epochs') plt.show()_____no_output_____print('calculating losses and metric...') model.train() # cudnn RNN backward can only be called in training mode validator = awd.Validator(model, test_loader) rec_loss, lsum_loss, hsum_loss, L2norm_loss, CMF_loss, conv_loss, L1wave_loss, L1saliency_loss, L1inputxgrad_loss = validator( wt, target=p.target) print("Recon={:.5f}\n lsum={:.5f}\n hsum={:.5f}\n L2norm={:.5f}\n CMF={:.5f}\n conv={:.5f}\n L1wave={:.5f}\n Saliency={:.5f}\n Inputxgrad={:.5f}\n".format(rec_loss, lsum_loss, hsum_loss, L2norm_loss, CMF_loss, conv_loss, L1wave_loss, L1saliency_loss, L1inputxgrad_loss)) calculating losses and metric... Recon=0.00000 lsum=0.00000 hsum=0.00000 L2norm=0.00000 CMF=0.00000 conv=0.00000 L1wave=4.27200 Saliency=0.37390 Inputxgrad=0.28642 filt = get_1dfilts(wt) phi, psi, x = get_wavefun(wt) plot_1dfilts(filt, is_title=True, figsize=(2,2)) plot_wavefun((phi, psi, x), is_title=True, figsize=(2,1))_____no_output_____ </code> # later fit_____no_output_____ <code> p.lamL1wave = 0.0001 p.lamL1attr = 2.0 p.num_epochs = 100_____no_output_____# train params = list(wt.parameters()) optimizer = torch.optim.Adam(params, lr=p.lr) loss_f = awd.get_loss_f(lamlSum=p.lamlSum, lamhSum=p.lamhSum, lamL2norm=p.lamL2norm, lamCMF=p.lamCMF, lamConv=p.lamConv, lamL1wave=p.lamL1wave, lamL1attr=p.lamL1attr) trainer = awd.Trainer(model, wt, optimizer, loss_f, target=p.target, use_residuals=True, attr_methods=p.attr_methods, device=device, n_print=5)_____no_output_____# run trainer(train_loader, epochs=p.num_epochs)Starting Training Loop... 
Train Epoch: 0 [1044/2936 (97%)] Loss: 0.901990 ====> Epoch: 0 Average train loss: 0.7627 Train Epoch: 5 [1044/2936 (97%)] Loss: 0.720028 ====> Epoch: 5 Average train loss: 0.7539 Train Epoch: 10 [1044/2936 (97%)] Loss: 0.834074 ====> Epoch: 10 Average train loss: 0.7563 Train Epoch: 15 [1044/2936 (97%)] Loss: 0.770845 ====> Epoch: 15 Average train loss: 0.7533 Train Epoch: 20 [1044/2936 (97%)] Loss: 0.742599 ====> Epoch: 20 Average train loss: 0.7553 Train Epoch: 25 [1044/2936 (97%)] Loss: 0.742735 ====> Epoch: 25 Average train loss: 0.7540 Train Epoch: 30 [1044/2936 (97%)] Loss: 0.694868 ====> Epoch: 30 Average train loss: 0.7543 Train Epoch: 35 [1044/2936 (97%)] Loss: 0.795443 ====> Epoch: 35 Average train loss: 0.7540 Train Epoch: 40 [1044/2936 (97%)] Loss: 0.778785 ====> Epoch: 40 Average train loss: 0.7554 Train Epoch: 45 [1044/2936 (97%)] Loss: 0.705887 ====> Epoch: 45 Average train loss: 0.7533 Train Epoch: 50 [1044/2936 (97%)] Loss: 0.789478 ====> Epoch: 50 Average train loss: 0.7540 Train Epoch: 55 [1044/2936 (97%)] Loss: 0.890213 ====> Epoch: 55 Average train loss: 0.7564 Train Epoch: 60 [1044/2936 (97%)] Loss: 0.836623 ====> Epoch: 60 Average train loss: 0.7554 Train Epoch: 65 [1044/2936 (97%)] Loss: 0.775239 ====> Epoch: 65 Average train loss: 0.7540 Train Epoch: 70 [1044/2936 (97%)] Loss: 0.600040 ====> Epoch: 70 Average train loss: 0.7510 Train Epoch: 75 [1044/2936 (97%)] Loss: 0.719130 ====> Epoch: 75 Average train loss: 0.7535 Train Epoch: 80 [1044/2936 (97%)] Loss: 0.732523 ====> Epoch: 80 Average train loss: 0.7533 Train Epoch: 85 [1044/2936 (97%)] Loss: 0.645887 ====> Epoch: 85 Average train loss: 0.7520 Train Epoch: 90 [1044/2936 (97%)] Loss: 0.859398 ====> Epoch: 90 Average train loss: 0.7559 Train Epoch: 95 [1044/2936 (97%)] Loss: 0.704002 ====> Epoch: 95 Average train loss: 0.7529 plt.plot(np.log(trainer.train_losses)) plt.xlabel("epochs") plt.ylabel("log train loss") plt.title('Log-train loss vs epochs') plt.show()_____no_output_____print('calculating losses and metric...') model.train() # cudnn RNN backward can only be called in training mode validator = awd.Validator(model, test_loader) rec_loss, lsum_loss, hsum_loss, L2norm_loss, CMF_loss, conv_loss, L1wave_loss, L1saliency_loss, L1inputxgrad_loss = validator( wt, target=p.target) print("Recon={:.5f}\n lsum={:.5f}\n hsum={:.5f}\n L2norm={:.5f}\n CMF={:.5f}\n conv={:.5f}\n L1wave={:.5f}\n Saliency={:.5f}\n Inputxgrad={:.5f}\n".format(rec_loss, lsum_loss, hsum_loss, L2norm_loss, CMF_loss, conv_loss, L1wave_loss, L1saliency_loss, L1inputxgrad_loss)) calculating losses and metric... Recon=0.00132 lsum=0.00000 hsum=0.00033 L2norm=0.00001 CMF=0.00024 conv=0.00001 L1wave=4.29012 Saliency=0.36556 Inputxgrad=0.28340 filt = get_1dfilts(wt) phi, psi, x = get_wavefun(wt) plot_1dfilts(filt, is_title=True, figsize=(2,2)) plot_wavefun((phi, psi, x), is_title=True, figsize=(2,1))_____no_output_____ </code>
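As an optional follow-up (not part of the original run), one could overlay the learned wavelet against the reference coif2 it was initialised from. The sketch below is an assumption-laden illustration: it assumes the pywt package is available (it is not imported anywhere above) and that get_wavefun(wt) returns array-like (phi, psi, x) as used in the plotting cells earlier; the two x-grids may be scaled differently, so this gives only a rough visual comparison.
<code>
# Hypothetical sanity check: compare learned vs. reference coif2 wavefunctions.
# Assumes pywt is installed and that get_wavefun(wt) returns array-like (phi, psi, x).
import pywt
import matplotlib.pyplot as plt

ref = pywt.Wavelet('coif2')
ref_phi, ref_psi, ref_x = ref.wavefun(level=8)          # reference scaling / wavelet functions
learned_phi, learned_psi, learned_x = get_wavefun(wt)   # learned counterparts from above

fig, axes = plt.subplots(1, 2, figsize=(8, 3))
axes[0].plot(ref_x, ref_phi, label='coif2 phi')
axes[0].plot(learned_x, learned_phi, label='learned phi')
axes[0].set_title('Scaling function')
axes[0].legend()
axes[1].plot(ref_x, ref_psi, label='coif2 psi')
axes[1].plot(learned_x, learned_psi, label='learned psi')
axes[1].set_title('Wavelet function')
axes[1].legend()
plt.tight_layout()
plt.show()
</code>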
{ "repository": "Yu-Group/adaptive-wavelets", "path": "notebooks/biology/init_nbs/08e_init_random_coif2.ipynb", "matched_keywords": [ "biology" ], "stars": 22, "size": 134297, "hexsha": "488a28eb78182dc11aafb06cf0f798cf80c49ed9", "max_line_length": 34932, "avg_line_length": 272.9613821138, "alphanum_fraction": 0.9153890258 }
# Notebook from jazzcoffeestuff/blog Path: _notebooks/2021-02-27-Sonny-Side-Up-Arnoldo-Perez-Hydro.ipynb # "Sonny Side Up and Arnoldo Perez Hydro Natural" > "Back with Plot Coffee Roasting we look at another coffee from Finca La Senda in Guatemala - this time their hydro-natural processed lot. Alongside we take a look at Dizzy Gillespie's 1957 'Sonny Side Up' release featuring Sonny Stitt and Sonny Rollins." - toc: false - author: Lewis Cole (2021) - branch: master - badges: false - comments: false - categories: [Jazz, Coffee, Dizzy-Gillespie, 1950s, Plot, Guatemala, Hydro-Natural, Pache] - hide: false - search_exclude: false - image: https://github.com/jazzcoffeestuff/blog/raw/master/images/045-Sonny-Side-Up/Sonny-Side-Up.jpg_____no_output_____> youtube: https://youtu.be/Iz0CcsmAelw This week we're heading back to Plot Coffee Roasting again. A few blog posts ago we featured the rather special Champagne yeast processed coffee from Finca La Senda in Guatemala - this time we feature another experimentally processed coffee from the very same farm. As before the coffee is of the Pache varietal. The process is described as a "hydro-natural" - the cherries are picked and dried in the sun for 4 days before being soaked in water overnight. This "rehydrates" the cherries, which are then dried in the shade for another 28-30 days. The folks at Finca La Senda are certainly experimental and they should be commended for it. I think processing variation is likely to be the key to unlocking new flavour potential in coffee. Of course genetic diversity of coffee plants can lead to variation, but this is a slow process driven by evolution (aside from the rare "freak discovery" of an unknown varietal) - processing has a shorter feedback loop and so is more commercially viable for the farmer. I have to admit however I'm not a fan of all the experimental processings I've had. In fact I have a rather low hit rate of enjoyment from them - I often find them unbalanced or just plain odd. Based on the Champagne yeast processed coffee I have high hopes for this one however. Let's just dive straight into the coffee itself. Like the Champagne yeast offering Plot have opted for a pretty light roast, maybe a hair darker than the Champagne yeast, but I never compared them side by side. On the dry aroma it is clear this is a very different beast. The first thing I notice is the red berry note - it also has a sort of bramble note too, reminding me of a fruit-picking farm. In the background there is just a hint of tropical fruit, but it is not your typical "fruit bomb". As usual we start with a filter brew: just as on the nose, the primary thing I notice is the red berry flavours. The label lists blueberry and raspberry, which I don't disagree with. The tropical fruit flavours are also more prominent in the cup, mango being the predominant note. On the acidity scale this one comes in quite high, but it is balanced with a lot of sweetness. It also has quite a big mouthfeel and body. Interestingly I find this coffee is better at higher cup temperatures - usually I find the flavours difficult to distinguish at higher temperatures, but this coffee keeps its clarity. At lower cup temperatures the acidity can take over a bit. Moving on to espresso: as you'd expect for a light roast, the machine temp is set to maximum. For a ratio I found 1:2.5 works best, but I preferred to pull a bit slower than usual at around 30-32s. This allowed more of the body to come through and added some intensity to the shot. 
I didn't find this added any bitterness at all and everything balanced nicely. The flavour profile was much the same as the filter brew: mangos and raspberries. Despite being a fairly acidic light roast it is surprisingly easy to drink as an espresso, and one I have enjoyed. The only downside to the espresso compared to the filter is that less of the complexity comes through. Overall another great offering. For me the Champagne yeast version was more up my street, but I could easily see others having the opposite preference. Drinking two coffees from the same farm with different processings is always interesting as you can see what a huge difference there is in the cup. These coffees are night and day different - in a blind tasting I don't think I'd be able to identify that they were from the same farm. Plot offer a third processing from Finca La Senda (carbonic maceration) but unfortunately at the time I couldn't justify getting all three since I already had too much coffee to be getting on with. If it's still available next time I do a Plot order I'll pick up a bag. On the jazz side this week we're heading back to an all-time classic from 1957: the seminal "Sonny Side Up" from Dizzy Gillespie. ![](https://github.com/jazzcoffeestuff/blog/raw/master/images/045-Sonny-Side-Up/Sonny-Side-Up.jpg) Dizzy has featured on the blog in the past as a co-headliner but this is his first showing in his own right. Dizzy of course needs no introduction - one of the forefathers of be-bop, he is one of the few jazz artists who became known outside of the jazz community. Potentially this is down to his look: often sporting a beret, horn-rimmed glasses and of course the puffed-out cheeks when playing. Or possibly it is down to his jovial and light-hearted personality. But there is no doubting his musical pedigree. > youtube: https://youtu.be/Rezdck_Roog By 1957 be-bop had pretty much run its course - it had become fairly "mainstream" in jazz by then, so just playing be-bop was not enough to gain recognition. Jazz artists were beginning to look at the new "post-bop" world and where they could make their mark. For somebody like Dizzy, who was one of the originators, this could present a problem - it is often easy for an artist to get stuck in their stylistic box, unable to adapt with the times. To combat this Dizzy appointed two tenor saxophonists to help him out: Sonny Stitt and Sonny Rollins. Besides just providing an easy pun for the album's title, the tenor pair provided the necessary artistic direction for the album. Both Sonnys were clearly disciples of Gillespie's buddy Charlie Parker - but instead of simply copying Bird licks they each developed upon the Parker framework in different ways. Stitt was most directly compared to Parker - originally playing alto, he was often labelled a "Parker Clone". To combat this he made the switch to the tenor and developed a more forceful approach; however, he still adopted the belly-fire of Parker and made use of the wild lines we associate with be-bop. In contrast Rollins adapted the Parker ethos in a different way: he made use of the be-bop framework but explored thematic and motif-led ideas. You could argue it is more of an intellectual approach, often adopting more complex harmonic devices and crafting lines that fall further from typical be-bop. Hearing the two tenor-men back-to-back is part of the joy of this album. 
You could never mistake one for the other since their styles are so different (although I believe one version of the liner notes did just that, stating that a Rollins solo appears first or vice versa). Yet both players serve the music being played and neither sounds out of place. With three titans in Gillespie, Rollins and Stitt, it would be easy to overlook the rest of the band: Ray Bryant on piano, Tommy Bryant on bass and Charlie Persip on drums. While these guys may not be "headline acts" like the others, they do add to the quality of the album - so many albums featuring "heavyweights" fall down with poor supporting casts. This is particularly true for an album such as "Sonny Side Up" where you have two tenor players battling each other for supremacy: the rhythm section needs to be tight and stay out of their way, and that is certainly achieved here. "Sonny Side Up" is, after all, a Dizzy Gillespie album, so it is worth exploring a little bit of his role. The album is itself essentially a "live" studio album: all tracks were recorded in one take in a "cutting session" style. Reportedly before the date Dizzy did his best to rile up the rivalry between the Sonnys - important work to get the best out of the band. This all came to a head in the standout tenor duel on the track "Eternal Triangle". For me this is Dizzy Gillespie's true talent: getting the best out of people. I do not believe Charlie Parker would have been nearly as revered as he is today without Gillespie. Dizzy worked out a lot of the harmonic complexities of be-bop, which allowed Parker (and to a lesser extent Monk) to flourish. > youtube: https://youtu.be/8afBciyvAeA Dizzy is of course no slouch on the trumpet either - this is particularly clear on his killer solo on "After Hours". He manages to pull off an effortlessly cool and laid-back, blues-led solo. Not something you instantly think of when you think of a Dizzy solo. Then of course you cannot ignore the trademark Gillespie humour and wit - on this album coming through in the opening track "On the Sunny Side of the Street". Here Gillespie completely reworks the standard both melodically and rhythmically to create a fresh take on a well-trodden classic. We also get to hear Dizzy stretch his vocal cords, which is always interesting too. I can see many parallels between his vocal phrasing on this tune and the phrasing used in rap music. The downside to this album is its brevity - clocking in at just over 30 minutes. It would have been great to have another cut or two on the album but it doesn't take away from the enjoyment. For the jazz historians and those interested in the progression of be-bop it is a very interesting recording, but it is also a great listen for those that just want to hear a great old-fashioned cutting session._____no_output_____
{ "repository": "jazzcoffeestuff/blog", "path": "_notebooks/2021-02-27-Sonny-Side-Up-Arnoldo-Perez-Hydro.ipynb", "matched_keywords": [ "evolution" ], "stars": null, "size": 10781, "hexsha": "488a2c6047336ca038f9f1e26d1ac003b1ddacc3", "max_line_length": 1277, "avg_line_length": 115.9247311828, "alphanum_fraction": 0.7439940636 }