Santander Customer Transaction Prediction
7th place The Zoo
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Transaction Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>It definitely was, and I am sure top teams would have, and probably even did, find it themselves. But it put more pressure on us. The trick with the private and public split is genius btw, we would not have found that. I guess no one made use of it though.</p> <p>Unfortunately only a few discussions were useful to us throughout the whole competition, as we had already found the puzzle pieces.</p> <p>I was planning to do a much longer writeup like I did for Quora, but the disappointment about not making top 5 is still too high, specifically as it is only a 0.00005 gap. I only browsed the solutions of the top teams and I think we did most things very similarly, but apparently failed at combining them to push us to the top.</p> <h2>The "Magic"</h2> <ul> <li>Count the feature frequency</li> <li>Only count for train + non-fake test data</li> <li>Profit (and be creative in how to use these features)</li> </ul> <h2>Models</h2> <p>Given how long we were in the top range, I assume we have the most diverse set of models. For a long time, our models focused on modeling each feature separately and then combining the individual predictions with a product or mean logit. Here are a few models we used:</p> <p><strong>LGB</strong> Not much to say here, LGB on top of features + counts. </p> <p><strong>NN</strong> This is a structure we came up with quite early and that was for a long time very important to the submission. Basically it is the same idea as before: we use 200 Dense layers, one for each feature separately, including large dropouts (0.8), and then just have a final layer lazily combining them.</p> <p><strong>NN-Boost</strong> We use the above-mentioned NN structure and take the outputs of the 200 Dense layers separately for each feature, so we end up with 200 * 512 features. We then fit an LGB separately on each feature and combine predictions. This also made it into the final blend, and I am happy about that because I also wanted to make such a model work :) </p> <p>We played A LOT with spline regression, kernel regression and similar things. Basically the idea here is that we model unique and non-unique values separately for each variable, as they show different functions to be modeled. So overall we have 400 models. </p> <p><strong>Logreg</strong> This models the features based on some knot-transformed additional variables.</p> <p><strong>Splines</strong> We transform the features with a kernel and knots into spline features and fit a linear regression on top.</p> <p><strong>KDE</strong> We calculate the KDE of unique/non-unique values separately and multiply the probabilities.</p> <p><strong>Naive Bayes</strong> This is a model that was also very useful for a long time. It is a hand-crafted NB that calculates conditional probabilities on a smoothed version of different bins. 
So you bin the data (again separately for unique/non-unique) into, for example, 5-15 bins, then calculate NB probabilities for each bin separately, and then average.</p> <p><strong>NN</strong> Of course we also had NNs fit on single features, but they were slightly too weak in the end.</p> <p>Probably many other versions that I now can't remember.</p> <p>Problem of overfitting: Unfortunately, with 200 models it is very, very easy to overfit. As an example, if you do LGB and early stop each variable separately, you can get to 93 in CV, but it sucks on a split-out test set or the LB. So we mostly tuned hyperparameters by taking the same ones for each model.</p> <p>Only in the last few days (probably too late), we went back a bit and tried to fit all features together. Here we have two important models for the final submission.</p> <p><strong>Pivot LGB</strong> @Giba had the idea (and I think this is similar to the very top solutions) to fit one LGB on all features together, but just model them as one feature and have a separate indicator of which feature it is. The best one here had a CV of 926.</p> <p><strong>Full LGB</strong> Instead of using additional count features, @dott had the idea to simply mask unique/non-unique features and produce additional features, ending up with either 400 or 600 features. This model is also an important part of the final blend.</p> <p><strong>Creative stuff:</strong> We had so many more creative and cool ideas. One is by @dott, who changed the underlying LGB source code to only combine feature+count in each boosting round, as there are no interactions. This was a bit too weak for the blend though.</p> <h2>Features and tricks</h2> <p>We tried so many different features and nothing else helped. I don't even want to talk much about that because it is a bit frustrating.</p> <p>Regarding tricks it is a bit unfortunate for us. Of course we also tried things like pseudo tagging, which also gave boosts in CV but was not that helpful on LB, so we dropped it. Apparently it could have given us a slight boost and we should have experimented further with it. We also had this idea of using pseudo tags for early stopping, which looked promising, but again was not that good on holdout or LB. We never tried to add those things to the blend though, and probably we should have. </p> <p>We also used train augmentation only rarely, basically only for the Full LGB model. For the single-fit models it does not help, because the model already learns the 0/1 classes separately. We were also thinking quite a lot about doing test augmentation, but couldn't find a proper way. I am sure there is some room for improvement here.</p> <h2>Final blend</h2> <p>The final blend is a weighted combination of several different models from the different categories above. We use a hill-climber algorithm for combining, and our top private solution is also our top CV solution with a CV score of 0.927.</p> <h2>The magical night and the following despair</h2> <p>As most of you know, we were the first to breach the 924 gap, and it all happened within a few hours. Basically @dott found the count features and could see nice boosts on CV, but like many of you, we could not directly utilize them on the LB. We were tinkering around and finally, a few hours later, I found the 200k fake examples in test. I quickly fixed the counting, submitted a full-feature LGB and boom, 912. Dmitry was at Ikea and he said he would further boost it when he got home. I decided to play a quick game of Apex (which I won - I btw haven't played since) and waited. 
And voila, when he got home we got to 920 or something by fitting each feature separately and taking the product. Then a little hyperparameter tuning got us to 922, and adding the NN model from above to 924.</p> <p>What came next was frustrating. As fast as we found this solution, just as long were we stuck at 924. We tried so many things but just couldn't get our score up. I think we were stuck at 924 for around a month. I guess all top teams know that things get hard after 924, and at least we could crack 925 in the end. But I feel like we should have had more of an advantage from finding and combining the puzzle pieces so early. Specifically, the kernel about the fake test data was a slap in the face for us :)</p> <p>I want to sincerely thank my two partners @dott and @giba, who are both incredible data scientists; combining our ideas and also our slightly different ways of working was very fruitful and I could learn a lot. I am sure I have forgotten many things already and might add a few details here and there if I remember, maybe my teammates will also chime in or make separate posts.</p> <p>Clean version of the first 922 model, fitting within ~3 minutes, by @dott: <a href="https://www.kaggle.com/dott1718/922-in-3-minutes">https://www.kaggle.com/dott1718/922-in-3-minutes</a></p> <p>LGB by @giba: <a href="https://www.kaggle.com/titericz/giba-single-model-public-0-9245-private-0-9234">https://www.kaggle.com/titericz/giba-single-model-public-0-9245-private-0-9234</a></p> <p>No worries, you did amazing detective work there and it's part of the game. You did not hinder us from improving the score :)</p>
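<p>A minimal sketch of the count-feature "magic" described above (not the team's exact code; <code>test_real</code> is assumed to already hold the non-fake test rows):</p> <pre><code>import pandas as pd

features = [f"var_{i}" for i in range(200)]

# reference pool: train plus the real (non-synthetic) test rows only
pool = pd.concat([train[features], test_real[features]], axis=0)

for var in features:
    vc = pool[var].value_counts()          # frequency of every value in the pool
    train[var + "_count"] = train[var].map(vc)
    test[var + "_count"] = test[var].map(vc).fillna(1)  # guard for unseen values
</code></pre>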
Santander Customer Transaction Prediction
9th place solution (nagiss part)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Transaction Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>In my case, the key to improving the score was modeling independence (no interactions) between variables.</p> <p>I made two types of models with this property.</p> <h2>model 1 - 2step lgbm (LB 0.922)</h2> <p>The summary of the model is as follows. <img src="https://cdn-ak.f.st-hatena.com/images/fotolife/n/nagiss/20190413/20190413021151.png" alt="2step_lgbm_model"> Kernel: <a href="https://www.kaggle.com/nagiss/9-solution-nagiss-part-1-2-2step-lgbm">https://www.kaggle.com/nagiss/9-solution-nagiss-part-1-2-2step-lgbm</a></p> <p><code>num_leaves=2</code> means the model considers no interactions.</p> <h2>model 2 - weight sharing NN (LB 0.923)</h2> <p><img src="https://cdn-ak.f.st-hatena.com/images/fotolife/n/nagiss/20190413/20190413014451.png" alt="weight_sharing_nn"> Kernel: <a href="https://www.kaggle.com/nagiss/9-solution-nagiss-part-2-2-weight-sharing-nn">https://www.kaggle.com/nagiss/9-solution-nagiss-part-2-2-weight-sharing-nn</a> With an NN, we can control interactions directly. (Actually, the weight sharing came about by accident, but it works very well.)</p> <p>Microsoft PowerPoint!</p>
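<p>A minimal Keras sketch of the weight-sharing idea (layer sizes here are placeholders, not the ones from the kernel): every variable enters as a (value, count) pair, the same small Dense stack is applied to all 200 variables, and the per-variable outputs are only combined at the very last layer, so no cross-feature interactions are learned earlier.</p> <pre><code>import tensorflow as tf
from tensorflow.keras import layers, models

def build_weight_sharing_nn(n_features=200, n_channels=2):
    # each variable enters as a (raw value, count) pair
    inp = layers.Input(shape=(n_features, n_channels))
    # Dense on a 3D tensor acts on the last axis only, so the very same
    # weights are applied to all 200 variables and no interactions are built
    x = layers.Dense(16, activation="relu")(inp)
    x = layers.Dense(8, activation="relu")(x)
    x = layers.Dense(1)(x)              # one logit per variable: (batch, 200, 1)
    x = layers.Flatten()(x)             # (batch, 200)
    out = layers.Dense(1, activation="sigmoid")(x)  # variables combined only here
    return models.Model(inp, out)

model = build_weight_sharing_nn()
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
</code></pre>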
Santander Customer Transaction Prediction
A summary of #18's solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Transaction Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you</p> <p>From the first couple of days of the competition, I noticed neural networks were failing horribly and the data was mostly uncorrelated. Also, shallow GBT models were performing better than deep ones. Based on all this I assumed the data was independent. I was able to score above a 0.901 with just a tuned generalized additive model for each feature, taking the product of the probabilities and tuning with stepwise regression. I think this honestly gave me a decent edge over other teams; GAMs seemed to perform well but were completely overlooked. </p> <p>After I found the Magic, I instantly started looking for more magic - I tried using ratios of EMAs of the count table and VWMAs (volume weighted moving avg.). I then took the counts of the variables rounded to 2 &amp; 3 decimal places. I realized if you round to 3 decimal places a 0.0006 &amp; 0.00014 get rounded to the same thing, so I actually just took the sum of all the unique counts in a 0.0005 radius of each number. Most of these features gave poor improvements, except the rounded-to-3-places feature, which gave a small but significant improvement in the GAM.</p> <p>My final model was a mildly stacked version of three models - I used neural networks with the main feature + magic feature for each variable - this CV score was a 0.924, a shallow LGBM for each feature + magic feature - this CV was a 0.92, and a GAM with those features + the rounded-to-3-places feature - CV 0.922. </p> <p>Since I get three vectors of predictions for each variable, I just took the sum of the three vectors and took the product over all the variables. I also got some small improvements by stacking these vectors using simple methods - to stack, I took a PCA of each of the three vectors, then stabilized the variance and mean of the first component. I then took a weighted sum/product of that PCA matrix and my 3 other matrices. </p> <p>Advice to people in the future: I played this competition with a mindset of "how can I move up today" as opposed to "how can I win". By the last few days I realized I had a shot at being in the money, but also realized my code was not well organized and if I did win, I'd have no shot at reproducing my submission. So in the last few days, instead of spending time fine tuning, I spent a ton of time just fitting easily reproducible models and cleaning code. I think this, my poorly tuned LGBM, and not teaming up put me out of the money. </p> <p>Off topic: If anyone here works at a trading firm in Chicago or NY and could help get me an interview for a junior trading position or junior quant type position, please let me know. Thank you</p>
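<p>A sketch of one standard way to combine per-feature probabilities under the independence assumption (a naive-Bayes style log-odds sum rather than the exact stacking described above; the plain product of probabilities behaves similarly for ranking):</p> <pre><code>import numpy as np

def combine_independent_probs(probs, prior):
    """Combine per-feature P(y=1 | var_i) under an independence assumption.

    probs : (n_samples, n_features) array, one probability column per feature
    prior : overall P(y=1) on the training data
    """
    eps = 1e-6
    probs = np.clip(probs, eps, 1 - eps)
    log_odds = np.log(probs / (1 - probs))
    prior_lo = np.log(prior / (1 - prior))
    # naive-Bayes combination: sum of per-feature log-odds, prior-corrected
    return (log_odds - prior_lo).sum(axis=1) + prior_lo
</code></pre>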
Santander Customer Transaction Prediction
Best graduation gift (#24 solution to consider magic)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Transaction Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We kept on fighting till the last minute, and we are so happy our efforts paid off with a gold medal. Now I'll share how I found the magic and improved it gradually.</p> <p>If you plot those variables with sns.distplot, there are two weird things: the bumps around the edges and the abnormal density in the center. Any information from the bumps can be captured by LightGBM through its calculation method. Now considering the second weird thing, what LightGBM cannot capture is repeated values in the center. If a value repeats twice in the center, LightGBM just counts it twice in the loss function without considering the number of repeats. That led me to what I called the first magic:</p> <pre><code>train['weight_' + var] = train[var].map(train.groupby([var])[var].count())
</code></pre> <p>This boosted our CV by approximately 0.001, but it only lowered the LB. At that time I thought the fake data might damage the performance, and thanks to the brilliant work from <a href="/yag320">@yag320</a> - we could not have achieved it without your work. So the first magic can be revised as follows (CV 0.900 -&gt; 0.901):</p> <pre><code>data = pd.concat([train[['ID_code', var]], test_real[['ID_code', var]]])
train['weight_' + var] = train[var].map(data.groupby([var])[var].count())
</code></pre> <p>Having figured out the differences in counts, the next step was to connect them with the values themselves. I tried many combinations of those variables, such as</p> <pre><code>train['value_' + var] = train[var].map(data.groupby([var])[var].count()) * train[var]
train['value_2_' + var] = train[var].map(data.groupby([var])[var].count() ** 2) * train[var]
train['log_' + var] = train[var].map(data.groupby([var])[var].count()) * np.log(train[var] - train[var].min() + 1)
....
</code></pre> <p>The first combination can lead your CV to 0.910, and many combinations together can lead you to 0.918. However, I was stuck at 0.918 for a long time figuring out what caused the weirdness. I built up many hypotheses for those variables (in fact, this was the wrong way again). For example: values with different counts make different contributions, so group standardization may help:</p> <pre><code>train['value_std_' + var] = (train[var] - train['weight_' + var].map(train.groupby(['weight_' + var])[var].mean())) / (train['weight_' + var].map(train.groupby(['weight_' + var])[var].std()) + 1)
train['std_count_' + var] = train['value_std_' + var] * train['weight_' + var]
</code></pre> <p>I think those transformations can lead your CV to 0.922~0.924, but they also have overfitting problems (from our perspective, CV 0.924 only gave 0.922 LB).</p> <p>And the last assumption led us to what I regard as the real magic. 
Values with a count of one can be considered noise (thanks to my teammate <a href="/senkin13">@senkin13</a>, who found the simplest way to express it).</p> <pre><code>for var in features:
    print(var)
    data = pd.concat([train[['ID_code', var]], test_real[['ID_code', var]]])
    train['weight_' + var] = train[var].map(data.groupby([var])[var].count())
    test['weight_' + var] = test[var].map(data.groupby([var])[var].count())
    train['binary_' + var] = train['weight_' + var].apply(lambda x: 1 if x &gt; 1 else 0) * train[var]
    test['binary_' + var] = test['weight_' + var].apply(lambda x: 1 if x &gt; 1 else 0) * test[var]
</code></pre> <p>These 600+ features can lead your CV to 0.923. Unfortunately, we only found this in the last two days, so we just had a little time to try some feature engineering.</p> <p>My teammate @brian tried data cleaning and a lot of feature engineering; some of it was very useful: 1. Z-score standardization: <code>data['zscore_' + var] = (data[var]-data[var].mean())/data[var].std()*5</code> <br> 2. Arctan feature: <code>train['zscore_count_arctan_'+var]=(np.arctan(train['count_'+var])*2/np.pi)*train['zscore_'+var]</code> <br> 3. Count transformation: <code>train['zscore_count_' + var] = train['zscore_' + var] * ((train['count_'+var]-train['count_'+var].min())/((train['count_'+var].max()-train['count_'+var].min())*8+1))</code></p> <p>Those variables, with feature selection by <a href="/senkin13">@senkin13</a>, led our CV to 0.9247. Then we started blending on the last day.</p> <p>Our NN achieved 0.9218 AUC, thanks to <a href="/alexanderliao">@alexanderliao</a> and <a href="/billbafare">@billbafare</a>. They tried many things. Our blending method is very simple: rank average of LightGBM and NN. NN CV: 0.9218, LightGBM CV: 0.9247, rank average: 0.9253 (public: 0.924, private: 0.92312, a little overfitting).</p> <p>I'm glad to achieve a gold medal in my senior year. Thanks to Kaggle and to everyone on my team for their efforts. </p> <p>Thanks!</p> <p>Thanks! You also found the way to fix the CV-LB problem and achieved a gold medal. Great work!</p> <p>Yes, the latter one is faster and easier to understand. Really, thanks for pointing it out.</p>
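<p>A minimal sketch of the rank-average blending mentioned above (not our exact script; the prediction vector names are placeholders):</p> <pre><code>import numpy as np
from scipy.stats import rankdata

def rank_average(pred_list, weights=None):
    """Blend predictions by averaging their (normalized) ranks."""
    ranks = [rankdata(p) / len(p) for p in pred_list]
    if weights is None:
        weights = [1.0 / len(ranks)] * len(ranks)
    return np.sum([w * r for w, r in zip(weights, ranks)], axis=0)

# blend = rank_average([lgb_test_pred, nn_test_pred])
</code></pre>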
Santander Customer Transaction Prediction
Collating Magic Solutions
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Transaction Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Multiple people have shared their key to getting the best results. Trying to collate them all in this thread and keep it updated. </p> <ol> <li>Use frequency/counts! Calculate them combining train + real test (exclude fake rows)</li> <li>Use lgbm on 200 individual features and ensemble</li> <li>Replacing unique values by NA for LGBM models helped! (see the sketch below)</li> <li>Use the property that dataset shuffling works to create artificial samples - This was shown in public kernels as well</li> <li>More folds (5-&gt;10) resulted in significantly better LB scores.</li> <li>.....</li> </ol> <p>Reference:</p> <ul> <li>(3rd place) <a href="https://www.kaggle.com/c/santander-customer-transaction-prediction/discussion/88902#latest-512825">https://www.kaggle.com/c/santander-customer-transaction-prediction/discussion/88902#latest-512825</a></li> <li>(5th place) <a href="https://www.kaggle.com/c/santander-customer-transaction-prediction/discussion/88897#latest-512830">https://www.kaggle.com/c/santander-customer-transaction-prediction/discussion/88897#latest-512830</a></li> <li>(8th place) <a href="https://www.kaggle.com/c/santander-customer-transaction-prediction/discussion/88886#latest-512751">https://www.kaggle.com/c/santander-customer-transaction-prediction/discussion/88886#latest-512751</a></li> <li>(12th place) <a href="https://www.kaggle.com/c/santander-customer-transaction-prediction/discussion/88906#latest-512832">https://www.kaggle.com/c/santander-customer-transaction-prediction/discussion/88906#latest-512832</a></li> <li>(15th place) <a href="https://www.kaggle.com/c/santander-customer-transaction-prediction/discussion/88888#latest-512827">https://www.kaggle.com/c/santander-customer-transaction-prediction/discussion/88888#latest-512827</a></li> </ul> <p>Please feel free to add comments on anything I missed. That will help me keep this information updated.</p>
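<p>A minimal sketch of trick 3 above, with the counts computed as in trick 1 (<code>test_real</code> is an assumed variable holding the non-fake test rows):</p> <pre><code>import pandas as pd

features = [f"var_{i}" for i in range(200)]
pool = pd.concat([train[features], test_real[features]], axis=0)

for var in features:
    vc = pool[var].value_counts()
    # values occurring only once in train + real test are masked as missing
    train[var + "_nonunique"] = train[var].where(train[var].map(vc).gt(1))
    test[var + "_nonunique"] = test[var].where(test[var].map(vc).gt(1))
</code></pre>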
Santander Customer Transaction Prediction
Long journey to 27th place
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Transaction Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Goods get sold, people come in, Kaohsiung strikes it rich!</p> <p>Congrats to all the winners, and to all the people who put a lot of time and effort into this competition. Finally we can have a rest and a good sleep now!</p> <p>Many thanks to my good teammate <a href="/rohanrao">@rohanrao</a>, I am glad that I could work with you in this competition! I learned a lot of stacking and ensembling skill from you.</p> <p>This was a long journey for me. I joined this competition almost at the beginning and put all my effort into it. I summarize my journey and solution below.</p> <h3>0.900 to 0.901 LB: Concat all datasets for analysis, not just the training set</h3> <p>I think many people suffered from the 0.900 issue at the beginning of the competition. I did some EDA and analyzed the data, and I found some weird things in training and testing: the value_counts of each variable seem to have some correlation and similar proportions. So I tried to do FE on the value_counts: I concatenated the training set and testing set, and marked a duplicate-value flag (value_count &gt;1) for each feature (set to 1 if duplicated, else 0, creating 200 new features). This let me break 0.901 LB, so I kept following this direction and did more EDA on it. P.S. At the same time, I tried multiplying by the original raw data (meaning only keep the duplicated values, else set to 0); the CV went up to 0.920, but after I submitted, the public LB only got 0.900.</p> <p>Kernel that broke to 0.901: <a href="https://www.kaggle.com/super13579/lgbm-model-catboost?scriptVersionId=11574592">https://www.kaggle.com/super13579/lgbm-model-catboost?scriptVersionId=11574592</a></p> <h3>0.901 to 0.909 LB: Filter the raw data by values appearing in both train &amp; test data</h3> <p>I followed the same direction with which I broke to 0.901, still analyzing the value_counts. This time I marked the values that appeared in both the training set and the testing set, and multiplied by the original raw data (these new features mean only keeping the values that appear in both datasets). This broke the score to 0.909. </p> <p>Kernel that broke to 0.909: <a href="https://www.kaggle.com/super13579/fork-of-fork-of-concate-nn-46ae3a">https://www.kaggle.com/super13579/fork-of-fork-of-concate-nn-46ae3a</a></p> <h3>0.910 to 0.920 LB: Find the real data and fake data in the test set</h3> <p>I was stuck at 0.910 for a long time; whatever FE I did, I still couldn't break 0.910. I went back to analyzing the data and found that some test rows have all features duplicated (count&gt;1), so I separated them into fake_data (duplicated flag sum = 200) and real_data (duplicated flag sum != 200). I also submitted to confirm these two different datasets: if I only predict the real data, the LB looks normal; if I only predict the fake data, the LB is very bad. So I was highly certain these are two different datasets and the fake data is not useful in this competition.</p> <p>Kernel that separates the two datasets: <a href="https://www.kaggle.com/super13579/split-test-dataset">https://www.kaggle.com/super13579/split-test-dataset</a></p> <p>After I found this, I decided to do the same thing that I did before with real_data: I concatenated only the training set + real_data set, and marked 
the duplicate-value flags (count&gt;1). CV went up to 0.921 and LB also reached 0.921 - no overfitting anymore. After this, CV and LB became more stable, and this was my first time in the money zone... (excited!!!)</p> <p>Kernel that broke to 0.920: <a href="https://www.kaggle.com/super13579/lgbm-with-duplicate-flag-value-0-923">https://www.kaggle.com/super13579/lgbm-with-duplicate-flag-value-0-923</a></p> <h3>0.920 to 0.923 LB: Do more FE</h3> <p>At this point I teamed up with <a href="/rohanrao">@rohanrao</a>; he is a stacking/blending expert and I learned a lot of ensembling skill from him. We tried to add more features based on value_counts, e.g. subtracting the mean from the raw data, and count&gt;1, count&gt;2, count&gt;3 flags... These finally got us to 0.923 with a single model. </p> <p>Kernel that broke to 0.923: <a href="https://www.kaggle.com/super13579/lgbm-model-3?scriptVersionId=12330297">https://www.kaggle.com/super13579/lgbm-model-3?scriptVersionId=12330297</a></p> <h3>Ensemble Model:</h3> <p>We used the same 10 folds to train the different models and do the ensembling, to prevent leakage, and we also summarized the model CV and LB scores in a Google sheet. Using LR stacking and simple blending, we chose the best-CV ensemble model for the final submission; ensembling truly helped us improve our score a lot.</p> <h3>Conclusion</h3> <p>Even if I don't get a gold medal after some violating teams are removed, I am still very happy with this result and learned a lot! I made some friends in this competition, and ML/DL makes my life full of fun and challenges!! I still have a lot to learn, keep going!!</p> <p>I marked values that appeared in both the training set and testing set as 1, else 0, then multiplied by the original raw data, just like a filter to keep the appearing values. E.g. original_data = [10,9,8,7], mark_features = [1,0,0,1] ==&gt; new features = [10,0,0,7] </p> <p><a href="/cdeotte">@cdeotte</a> thanks for your advice! Title changed :-)</p>
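<p>A minimal pandas sketch of separating the real and fake test rows (the same idea as the kernel above: synthetic rows contain no value that is unique within the test set; not the kernel's exact code):</p> <pre><code>import pandas as pd

features = [f"var_{i}" for i in range(200)]

has_unique = pd.Series(False, index=test.index)
for var in features:
    counts_in_test = test[var].map(test[var].value_counts())
    has_unique |= counts_in_test.eq(1)

# real rows contain at least one value that is unique within the test set;
# the synthetic rows were generated only from values already present elsewhere
test_real = test[has_unique]
test_fake = test[~has_unique]
</code></pre>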
Santander Customer Transaction Prediction
Magic kernel from 26th
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Transaction Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi, everyone:</p> <p><a href="https://www.kaggle.com/whitebird/0-923-in-n-5-aug">https://www.kaggle.com/whitebird/0-923-in-n-5-aug</a></p> <p>magic2() is all you need - it just masks all features by their value counts. We thought the features have different meanings for different value counts. And you will get 0.924 if you set Aug N = 5. Enjoy it. </p>
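<p>For readers unfamiliar with the "Aug N = 5" part, here is a minimal sketch of the class-wise shuffle augmentation popularized in public kernels (not the kernel's exact code; if count or mask features are included, the raw value and its derived columns should be permuted with the same index so the pairs stay intact):</p> <pre><code>import numpy as np

def augment_class(X, n_copies=5, seed=42):
    """Create synthetic rows of one class by permuting each column
    independently -- only valid because the features carry no interactions."""
    rng = np.random.default_rng(seed)
    copies = []
    for _ in range(n_copies):
        Xc = X.copy()
        for j in range(Xc.shape[1]):
            Xc[:, j] = Xc[rng.permutation(len(Xc)), j]
        copies.append(Xc)
    return np.vstack(copies)

# inside each training fold, with X_pos holding the positive-class rows:
# X_fold = np.vstack([X, augment_class(X_pos, n_copies=5)])
# y_fold = np.concatenate([y, np.ones(len(X_pos) * 5)])
</code></pre>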
Santander Customer Transaction Prediction
My Story-140th place
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Transaction Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you Chris and congratulations to you as well. I wish you could have gotten a gold medal because you really deserve it. After failing with augmentation I didn't give it another chance :D</p> <p>Thank you and congratulations to you as well. Thanks to the Microsoft competition I used frequency encoding, but I didn't realize this was the magic until the last week :D</p> <p>Congrats to all winners. Special thanks to YaG320 because his kernel was the most important kernel in this competition. Also I have to say thanks to CPMP and Chris Deotte for their great contributions.</p> <p>Here is my story: in the first days I used the basic frequency encoding which I had also used in the Microsoft competition (thanks again Chris) and I jumped up to the top of 0.900. After this I started to use XGB instead of LGB and it helped me get 0.901, but then I couldn't move. After a little while <a href="/jiweiliu">@jiweiliu</a> released a 0.901 public LB kernel and I expected to get 0.902+ with augmentation, but it dropped my score. So I thought my score was the result of overfitting and I deleted those features. I made a big mistake.</p> <p>I saw some comments like "the magic was shared in the forum" and I started to search for it in the forum, but in the wrong direction. I focused on a timestamp and customer ID. After a lot of failures I understood the importance of YaG320's kernel, then I used frequency encoding again, but this time computed without the fake test data instead of on all test data, and got 0.903 LB. After adding some features like var*freq and tuning the parameters, I reached 0.916 public LB with XGB in the last days.</p> <p>I learned important things in this competition and the most important one is about trusting yourself. If I hadn't deleted the frequency features I would have tried to improve my model earlier and could have gotten better results. But anyway, I learnt it before the competition ended and I am happy about that.</p> <p>Edit: Forgot to mention, besides frequency encoding, var*fe gave me a good boost.</p> <p>Here is the basic fe function:</p> <pre><code>def add_fe(col, dfr, dfo):
    vc = dfr[col].value_counts(dropna=False)
    dfo[col + '_fe'] = dfo[col].map(vc) / vc.max()
    return dfo
</code></pre> <pre><code>dfr = pd.concat([train, test_real])
dfo = pd.concat([train, test])
</code></pre> <p>And multiply var and fe:</p> <pre><code>def mult_with_fe(col, df):
    df[col + '_mult'] = df[col] * df[col + '_fe']
    return df
</code></pre>
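<p>A short sketch of how the two helpers above could be applied across all 200 variables (assuming <code>train</code>, <code>test</code> and <code>test_real</code> are already loaded):</p> <pre><code>import pandas as pd

features = [f"var_{i}" for i in range(200)]

dfr = pd.concat([train, test_real])     # reference frame without the fake test rows
dfo = pd.concat([train, test])          # frame that receives the new columns

for col in features:
    dfo = add_fe(col, dfr, dfo)         # frequency encoding scaled by its max
    dfo = mult_with_fe(col, dfo)        # var * frequency interaction

train_fe = dfo.iloc[:len(train)]
test_fe = dfo.iloc[len(train):]
</code></pre>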
Santander Customer Transaction Prediction
NN as part of 13th place solution (late joiners view)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Transaction Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi, and thank you all for this vivid competition. As you might know, I joined the competition rather late (8 days before the end), so I leave it to my awesome teammates <a href="/wrosinski">@wrosinski</a> <a href="/robikscube">@robikscube</a> <a href="/lukeeee">@lukeeee</a> and <a href="/returnofsputnik">@returnofsputnik</a> to explain our solution in total, since I only worked on the NN part. When I joined, my team had already found the magic (lucky me) but so far had only used gradient boosting. I knew I could add some spice by implementing Neural Nets for the interpretation of the magic. So I started to go through the public kernels to see typical NNs and how they compare to LGB. To my surprise most of them performed worse. But <a href="https://www.kaggle.com/jotel1/nn-input-shape-why-it-matters">one kernel </a>specifically caught my attention. By accidentally (&lt;- my guess) adding one more input dimension, this kernel outperformed all other NNs. I realized it was key to understand why adding a dimension helped so much, and to utilize that with the magic feature my team had already found. I understood that the whole point was to let the NN calculate feature (axis) wise, and I started fiddling around with different architectures and found nice candidates for an ensemble. We improved their accuracy and checked the correlation between them. In the end we had 5 different NNs which were lowly correlated and ensembled well (LB 0.923). They all have in common that the magic feature (count encoding) is stacked with the original var (=&gt; input_dim=(200,2)) and then processed along the feature axis, and only at the last layer are the features interpreted together. One example arch is this one (I shared another architecture in this <a href="https://www.kaggle.com/christofhenkel/nn-with-magic-augmentation/">kernel</a>):</p> <p><img src="https://i.imgur.com/KZwR70m.png" alt=""></p> <p>We then brute-forced our way up the ranks by massive bagging. We trained each of the NNs with 8 folds and 4 different stratification seeds and ran them 3 times. So 8 * 4 * 3 folds per architecture. By LB probing we set a final weight between the NN ensemble and the GradientBoosting ensemble of 70/30. </p> <p>Cheers.</p>
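<p>A small sketch of the input construction this architecture assumes (the <code>_count</code> column names are placeholders for whatever the count encoding is called):</p> <pre><code>import numpy as np

features = [f"var_{i}" for i in range(200)]
count_cols = [f + "_count" for f in features]   # hypothetical count-encoding names

def to_nn_input(df):
    """Stack each raw variable with its count encoding into shape (N, 200, 2)."""
    raw = df[features].to_numpy("float32")
    cnt = df[count_cols].to_numpy("float32")
    return np.stack([raw, cnt], axis=-1)

X_train = to_nn_input(train)   # fed to a model built with Input(shape=(200, 2))
</code></pre>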
Santander Customer Transaction Prediction
One tiny framework for Kaggle competitions
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Transaction Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to everybody, regardless of whether you won a medal or not! As someone mentioned before, we are not magicians but warriors!</p> <p>I shared it once already, but I would like to emphasize it just one more time and show you the work that I spent a couple of days on at the beginning of the competition, which eventually helped so much to build the solution we have now. It helped me, so I hope one day it can help someone else ;)</p> <p><a href="https://github.com/sergeyshilin/kaggle-competitions-framework">https://github.com/sergeyshilin/kaggle-competitions-framework</a></p> <p>Also, there is a bonus repo with the code and results that my team had generated by the merger deadline - honestly, it didn't change much after that.</p> <p><a href="https://github.com/sergeyshilin/kaggle-santander-ctp-2019">https://github.com/sergeyshilin/kaggle-santander-ctp-2019</a></p> <p>Cheers and good luck with future Kaggle competitions!</p>
Santander Customer Transaction Prediction
Public LB 0.922 magic notebook
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Transaction Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>922 to 925 is feature engineering and ensembling (a linear blend of models). The features combine the raw feature and the count in various ways. </p> <p>Here is the magic I used to move to 0.922 on the public LB. It is available as a notebook at <a href="https://github.com/jfpuget/Kaggle_Santander_2019/blob/master/notebooks/lgb_070_069_one_feature.ipynb">https://github.com/jfpuget/Kaggle_Santander_2019/blob/master/notebooks/lgb_070_069_one_feature.ipynb</a></p> <p>First magic: compute the frequency of values using all data. This moved me to 0.901 public LB. I gave hints about it, for instance when I said features were categorical or almost categorical. Frequency encoding has been discussed a lot in the forum by many people. When someone wrote that I had said it wasn't working, I denied saying that.</p> <p>Second magic: remove the fake test rows when computing frequency. This moved me to 0.913 public LB. Fake test rows were disclosed in <a href="https://www.kaggle.com/yag320/list-of-fake-samples-and-public-private-lb-split">https://www.kaggle.com/yag320/list-of-fake-samples-and-public-private-lb-split</a> </p> <p>Third magic: use Naive Bayes with 200 models. Each model is an lgb trained on one of the original features plus the associated frequency feature. This moved me to 0.922 public LB. Use of Naive Bayes made sense as soon as Branden Murray showed that the features were independent. NB has been discussed quite a lot by Chris Deotte and me 2-3 weeks ago. I did say that I had a decent NB model in particular. And Balazs Strenner later provided an NB model with 200 lgb submodels: <a href="https://www.kaggle.com/b5strbal/lightgbm-naive-bayes-santander-0-900">https://www.kaggle.com/b5strbal/lightgbm-naive-bayes-santander-0-900</a></p> <p>That's it.</p> <p>By looking at the target mean per count. It had a nice trend for small values of count, then became erratic.</p> <p>Authman, NB assumes independence of the elementary models' conditional probabilities. The raw var and the associated count aren't independent by any means, which is why your model did not work.</p> <blockquote> <p>actual linear model</p> </blockquote> <p>We did not use data augmentation in the end because it was irrelevant to our 200 models + Naive Bayes method. That is to say, I didn't spend much time understanding why it works. I hope others can answer you better.</p> <blockquote> <p>What was the actual point which solved the LB CV issue?</p> </blockquote> <p>Remove fake rows when computing counts.</p> <p>I find it very interesting that several top teams implemented this Naive Bayes with a single model, basically by concatenating all the data of the 200 models into one large single data set. 
Onodera's and Evgeni's solutions are great examples of that:</p> <p>Onodera: <a href="https://www.kaggle.com/c/santander-customer-transaction-prediction/discussion/88939#latest-513518">https://www.kaggle.com/c/santander-customer-transaction-prediction/discussion/88939#latest-513518</a> Evgeni: <a href="https://www.kaggle.com/c/santander-customer-transaction-prediction/discussion/88970#latest-513476">https://www.kaggle.com/c/santander-customer-transaction-prediction/discussion/88970#latest-513476</a></p> <p>Very clever. The single dataset is probably way better for NNs.</p> <blockquote> <p>how to finetune parameters..</p> </blockquote> <p>Try and see what works best. Focus on the important ones, which are the ones I tuned in my notebook.</p> <blockquote> <p>Can I ask you how you came up with the idea of removing fake rows? Is that a common way?</p> </blockquote> <p>No, this is specific to this competition's data.</p> <p>Because features are independent.</p>
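<p>A small sketch of the diagnostic mentioned above (target mean per count value for one variable; <code>test_real</code> is assumed to hold the non-fake test rows):</p> <pre><code>import pandas as pd

var = "var_0"
vc = pd.concat([train[var], test_real[var]]).value_counts()
train["cnt"] = train[var].map(vc)
# target rate per frequency: a clear trend for small counts, erratic afterwards
print(train.groupby("cnt")["target"].agg(["mean", "size"]))
</code></pre>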
Santander Customer Transaction Prediction
Rapids.ai Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Transaction Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you! I'm really happy it helped you and some other teams!</p> <p>Congrats to you as well!!! You guys killed it. </p> <p>And there were countless times that I wanted to tell everyone (including you at some point): are you looking for magic? Look no further than Rob's posts and kernels!!</p> <p>Sure, I just updated the post, please see above.</p> <p>This was a great competition. I would like to thank Chris, CPMP and Rob for their great kernels and discussions about the data properties. I would also like to thank my teammates, Chris Chen, @30CrMnSiA and Xuan Cao, for their brilliant ideas and tremendous efforts. Special congrats to @30CrMnSiA for earning the GM title! </p> <h3>Edit: The primitive solution using rapids.ai libraries can be found here:</h3> <p><a href="https://github.com/daxiongshu/santander/blob/master/carl/santander_model.py">https://github.com/daxiongshu/santander/blob/master/carl/santander_model.py</a> You can find rapids install instructions here: <a href="https://rapids.ai/start.html">https://rapids.ai/start.html</a></p> <p>Our solution is also LGB/XGB + NN, like other top teams. For the LGB/XGB models, we use the original features, the original features where count==1 values are set to null, the original features where count&lt;=2 values are set to null, count encoding of the original features and mean target encoding of the original features. All the new features are generated without fake test data. We observe that LGB can always produce a marginal improvement over XGB at the 0.001 level, but XGB GPU is way faster than LGB for both the CPU and GPU versions.</p> <p>For the neural network models, we group the original features based on their target conditional PDFs and apply a dense layer to each feature independently. It is based on the fact that all features are completely uncorrelated. However, the conditional target PDFs of the features can be very similar, as shown in the <a href="https://www.kaggle.com/sibmike/are-vars-mixed-up-time-intervals">kernel</a>. Therefore we allow the dense layers for features from the same group to share weights. Such weight sharing lets the model learn the trend within each group collectively using the same weight parameters, while staying independent of other feature groups. It is critical to the prediction accuracy.</p> <p>In addition to that, I had a great experience using the rapids libraries, especially at the early stage when I needed to do a lot of exploring and experiments. The beauty of rapids is that it is highly GPU optimized for almost every key dataframe operation, such as groupby, merge and masking, where I had a 10~100x speedup using rapids. The interesting thing is that I first discovered the <code>magic</code> in the form of mean target encoding with a rolling mean, and rapids really killed it for the fast agg-mean and groupby. Later on, my teammates found a better and simpler way of generating the <code>magic</code> features using masking, and the rapids speedup drops to the range of 5~10x for generating the masking features. However, target and count encoded features are also useful, and they benefit more from rapids acceleration. 
One unfortunate thing is that the integration of rapids into Kaggle kernels is delayed for obvious reasons, but I promise it will come soon and there will be a better data set to showcase rapids' power.</p> <p>I'll publish the full rapids solution later on, and for now I have two kernels to share for our solution. The first is a simple visual inspection of the magic features: <a href="https://www.kaggle.com/jiweiliu/pdf-of-original-and-magic-features">https://www.kaggle.com/jiweiliu/pdf-of-original-and-magic-features</a></p> <p>And the second is our best NN, which is a multi-branch architecture, thanks to Xuan Cao, with weight sharing of var groups that have similar pdfs. <a href="https://www.kaggle.com/jiweiliu/lb-0-9228-nn-weight-sharing-of-var-group">https://www.kaggle.com/jiweiliu/lb-0-9228-nn-weight-sharing-of-var-group</a> If it is not shown well, please check later. I just committed it ;)</p> <p>My teammates will add more details. Thank you all!</p> <p>Installing from a kernel is slow. Please try the following steps: 1. In the settings on the right hand side, enable the internet connection. 2. Enable GPU usage. 3. Try the following command: <pre><code>import sys
!conda install --yes --prefix {sys.prefix} -c nvidia -c rapidsai -c pytorch -c numba -c conda-forge \
    cudf=0.6 cuml=0.6 python=3.6
</code></pre></p> <p>That's a lot of augmentation! I think this is one of the reasons the NN gets a better result: it can utilize augmentation on the fly.</p>
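<p>A minimal sketch of the mean target encoding mentioned above, done out-of-fold to limit leakage (not our production code; in this competition the values are almost categorical, so a plain groupby-mean works as the mapping):</p> <pre><code>import pandas as pd
from sklearn.model_selection import StratifiedKFold

def oof_target_encode(train, test, var, target="target", n_splits=5, seed=0):
    """Out-of-fold mean target encoding for one (almost categorical) feature."""
    enc = pd.Series(index=train.index, dtype="float64")
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for fit_idx, enc_idx in skf.split(train, train[target]):
        means = train.iloc[fit_idx].groupby(var)[target].mean()
        enc.iloc[enc_idx] = train.iloc[enc_idx][var].map(means).to_numpy()
    prior = train[target].mean()
    test_enc = test[var].map(train.groupby(var)[target].mean())
    return enc.fillna(prior), test_enc.fillna(prior)
</code></pre>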
Santander Customer Transaction Prediction
Token of Gratitude (109th Place Solution)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Transaction Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all. I would like to extend my heartfelt thanks to all those who have shared so much knowledge in this competition and forum. Special thanks to <a href="https://www.kaggle.com/cdeotte">Chris Deotte</a>, <a href="https://www.kaggle.com/yag320">YaG320</a>, <a href="https://www.kaggle.com/cpmpml">Jean-François Puget</a>; I followed most of your discussions and kernels to learn more about this competition.</p> <h3>Turning Points</h3> <ul> <li>This discussion: <a href="https://www.kaggle.com/c/santander-customer-transaction-prediction/discussion/87016#latest-503628">Chris's hints may be helpful</a></li> <li>This kernel: <a href="https://www.kaggle.com/yag320/list-of-fake-samples-and-public-private-lb-split">list-of-fake-samples-and-public-private-lb-split</a> I saw <a href="https://www.kaggle.com/c/microsoft-malware-prediction">Microsoft Malware Prediction</a> had many discussions about the winning solutions. One of the solution discussions was on <a href="https://www.kaggle.com/c/microsoft-malware-prediction/discussion/86888">Count Encoding</a>, which worked for me too after tuning some hyperparameters for LGBM. Also, this FE worked for me on the LB only after removing the noise from the test data.</li> </ul> <h3>Magic Magic Magic</h3> <p>It was just DS and following what had been done in the kernels and discussions of this competition and others of a similar kind. Magic is here: <strong>M</strong> --- Most <strong>A</strong> --- Advantageous <strong>G</strong> --- Gain <strong>I</strong> --- In <strong>C</strong> --- Competition</p> <p>A brief write-up of the solution is as follows:</p> <ol> <li>Removed fake data from the test set</li> <li>Concatenated train and test</li> <li>FE of count encoding</li> <li>FE by taking the count encoding columns (200) and generating new columns with simple arithmetic operations (multiplication and division gave me a boost from .908 to .916; see the sketch below)</li> <li>Final blend of CatBoost and LGB models</li> </ol> <p><strong>Training the CatBoost model was very fast on GPU, which helped to try many things in less time.</strong></p> <p>Once again thanks and congratulations to all. <strong>Some won money, some won a medal, and those who did not get money or a medal got learning and knowledge, which is what will remain with you forever.</strong></p>
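<p>A minimal sketch of step 4 above (the <code>_count</code> column names are placeholders; <code>df</code> is assumed to be train and test concatenated with the count columns already added):</p> <pre><code>features = [f"var_{i}" for i in range(200)]

for var in features:
    cnt = df[var + "_count"]          # frequency computed without the fake test rows
    df[var + "_mul"] = df[var] * cnt  # value scaled up where it repeats
    df[var + "_div"] = df[var] / cnt  # counts are at least 1, so no division by zero
</code></pre>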
Santander Customer Transaction Prediction
Why the magic works - is each raw variable actually a mix of two?
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Transaction Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks for sharing your thoughts.</p> <p>Before the competition was over, I had the same explanation as yours here about why the magic of setting feature values with count=1 to NaN improved the performance (i.e., the original data had NaNs, but they were imputed with those count=1 values by some trick, which would be noise).</p> <p>However, after the competition ended, and after I saw the post saying that creating additional features where we also set all feature values with count&gt;=2 to NaN is another magic, the above explanation is no longer valid, since it indicates that the actual values of those count=1 entries also contain much valuable information and are not actually imputed noise.</p> <p>That's why I came up with the conjecture in this post, i.e., that each raw variable is actually a mix of two, sampled from two different distributions. This would perfectly explain why the magic works, but I just don't understand why Santander did this in data preparation if this is the case.</p> <p>First, congrats to the winners, and also thanks to my great teammates, Guanshuo, Joao and Eran! I really enjoyed the collaboration. </p> <p>This was a really tough and unusual competition. Now that the competition has ended, I would like to discuss a bit about why the magic works, and would also love to hear your thoughts on this.</p> <hr> <p>One magic our team found is: replace count == 1 feature values with NaN -&gt; LB 0.917. It was shared in this post (Link: <a href="https://www.kaggle.com/c/santander-customer-transaction-prediction/discussion/88913#latest-512883">https://www.kaggle.com/c/santander-customer-transaction-prediction/discussion/88913#latest-512883</a>) that creating an additional 200 features where we replace count &gt;=2 values with NaN would lead to LB 0.922 (we missed this magic).</p> <p><strong>Such an observation leads to a natural conjecture: is it true that each of the provided raw variables is actually a mix of two variables, sampled from two different distributions</strong>? The first variable, whose values all have count=1, might possibly be due to sampling without replacement.</p> <hr> <p>If this is indeed the case, and it was done intentionally by Santander, then <strong>what would be the possible reasonable motivation for them to provide us the data in this way</strong>, rather than providing us 400 raw variables (i.e., separating the two mixed variables, which is easy to do if realized, as is the case for the sponsor)? Preparing the data in such a way seems like it would only confuse competitors and the ML models without adding any benefit, unless Santander wanted to test Kaggle competitors' skills at tackling the puzzle or reverse engineering the data preparation steps.</p> <p>Any of your shared thoughts would be appreciated.</p> <p>&gt;you can just have 200 features with the value 250 added to each unique value. 
</p> <p>Interestingly, this idea is actually the same as/similar to one of the two key innovations included in the NIPS 2017 LGB paper, in which it was called the 'exclusive feature bundling' (EFB) technique (the other key technique in LGB is called gradient-based one-side sampling, or GOSS). I happened to read that paper in detail about a year ago.</p> <p>P.S.: also, congratulations on your great achievement, and thank you for the very generous sharing in the forum that helped lots of people in the competition ^_^</p> <p>Hi Joe,</p> <p>I meant that each raw variable (from var_0 to var_199), e.g. var_0, might actually be the result of mixing two variables, let's say var_0_A and var_0_B, where var_0_A (with value count=1) is sampled from one distribution, and var_0_B (with value count&gt;=2) is sampled from a different distribution, and then they just mixed (concatenated along axis=0) these two variables together into one variable var_0. And the same operation seems to have been performed on each of the 200 variables.</p> <p>And by using the criterion count==1, the variable can easily be separated back into the two original variables var_0_A and var_0_B, which removes the confusion introduced by the mixing procedure, and that's why the two NaN replacement magic tricks work.</p> <p>This magic was initially discovered by my teammate Guanshuo. Based on the forum discussions about unique value counts, Guanshuo tried a couple of ideas to capture such a pattern. His first attempt was to create a binary indicator variable (whether the feature value is unique) for each of the 200 raw variables, in total 200 new binary features. This boosted the CV and LB score from 0.900 to 0.904. His second attempt then replaced these unique values with 0 (rather than creating 200 binary indicator variables), and this boosted the score to 0.914. He finally tried replacing these unique values with NaN, and this further boosted the score to 0.917.</p> <p>This magic really surprised me when Guanshuo first told me that this simple trick works so well. We tried to come up with a reasonable explanation for it; after some discussions, we finally thought that these unique values might possibly just be NaN in the real actual data, and that Santander imputed such NaN values with these unique values using some unknown trick, which introduced noise. Unfortunately, at that time we didn't test the complementary part of the magic, i.e., creating 200 additional features where we set the non-unique values to NaN. Otherwise, it would be a totally different story for our team.</p> <p>Hi Xuan,</p> <p>&gt; I am not sure how they use the "count &gt;= 2 as NaN" feature. </p> <p>What they did is: first, they created 200 features where the unique feature values in the raw data were replaced with NaN (using these 200 features only gave LB 0.917), and second, they created another 200 features where the non-unique feature values in the raw data were replaced with NaN (using these 200+200 FEs together gave LB 0.922). </p> <p>So basically, when applying the above two magics (set unique values to NaN and set non-unique values to NaN, in total 400 FEs) together, what it does is exactly separate each of the original raw variables into two variables (thus 200 raw variables converted to 400 FEs) by using the condition count==1, and all of the original values are still kept (but now separated into two different columns). 
And the fact that these two magics work so well might imply that the separated 400 FEs are possibly the actual original data format, but Santander mixed each pair of variables into one variable and provided us with 200 raw variables rather than 400. The two magics just reverted this data preparation step to remove the confusion for the ML models.</p> <p>&gt;If it is another set of features used together with the raw feature, I guess it may behave similarly to "count &lt;= 2 as NaN" features: a filter mask?</p> <p>Based on my understanding, it is not another set of features used together with the raw features; instead, it is another set of features used together with the first 200 FEs (where unique feature values were set to NaN). They mentioned in the forum that in total they only used these 200+200 FEs (and dropped the 200 raw variables) to achieve LB 0.922. (P.S.: I haven't tested it yet, but I think it would also not behave exactly the same as using the raw features together with 'set count&lt;=2 to NaN' features, since the raw features seem to be a mix of two variables and are dirtier/noisier for lgb to extract the signal from; separating each raw variable into two such different variables provides a much cleaner version of the data that removes the confusion introduced by the mixing procedure for ML models.) Hope this makes it clearer for you.</p> <p>P.S.: my name is Shize, not Shizhu ^_^</p>
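<p>A minimal pandas sketch of the 200+200 split discussed in this thread (counts computed on train plus real test, consistent with the other write-ups; not the exact code of any team):</p> <pre><code>import pandas as pd

features = [f"var_{i}" for i in range(200)]
pool = pd.concat([train[features], test_real[features]], axis=0)

for var in features:
    cnt = train[var].map(pool[var].value_counts())
    train[var + "_unique"] = train[var].where(cnt.eq(1))      # only one-off values
    train[var + "_repeated"] = train[var].where(cnt.ge(2))    # only repeated values
# the 200 raw columns are then dropped and the model is trained on the 400 masked ones
</code></pre>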
Santander Customer Transaction Prediction
part of #4 solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Transaction Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>But you have only 3 columns and just 100-200 iterations for training. Around 10 minutes per fold.</p> <p>The mean of the predictions was also very good, but slightly lower than the product: approx .923 against .925. But if you put the mean and other aggregates into a linear regression you could get a score even better than the product. We put together aggregated features from many models, selected the best set and built a logreg as the final submission.</p> <p>Congratulations to all winners and thanks to my teammates @alijs and <a href="/zfturbo">@zfturbo</a> for this hard fight.</p> <p>Our base model was simple - I put all vars together in one column (shape (200000*200, 1)), added the counts as a second column and the feature name as a third, categorical column, and used LightGBM. The AUC for this model was not very high - near .53 - but the product of all predictions grouped by ID returned .9258 locally and .924 on the LB. With other aggregates of the predictions (min, max, std etc.) we got more from a logistic regression model.</p> <p>There were a few tricks before unioning all the data together - we needed to normalize each var (standard scaling was the best, but min-max scaling, binning and one-factor logistic regression were useful for blending). I reversed features which had an individual AUC less than .5 - the idea was to get all features sorted similarly against the target to help boosting. And one more trick - I removed from train some vars whose predictions by the long model had AUC near .5 (before grouping). </p> <p>The reason why this approach was the best is the data. All vars had no interactions between each other, but GBM found some fake interactions. With the long model it became much harder to find inter-var interactions; boosting mainly used only feature + count pairs. Another approach to kill inter-var interactions was shuffling within each target class, but the long model was much more effective.</p> <p>This competition helped me to improve my skills in working with anonymous data. Thank you to all who shared their ideas.</p> <p>I used a similar approach in the Home Credit competition, where there were more reasons for it (many rows from an additional table with one target from the main table), and here I just tried it while searching for a way to break .900. It didn't help to break .900, but it helped with the counts.</p> <p>I'm not ready to explain it from the math side; I just tried to help the GBM do its job better.</p> <p>'var_0' x 200 rows, 'var_1' x 200 rows ... converted to a factor in R (similar to label encoding in Python) and marked as categorical for LightGBM</p> <p>no, the new shape of train is (40000000, 3)</p>
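<p>A pandas sketch of the long/"pivot" reshaping described above (the author worked in R; this is an assumed, equivalent Python version, with the modelling steps left as comments):</p> <pre><code>import pandas as pd

features = [f"var_{i}" for i in range(200)]

long_train = train.melt(id_vars=["ID_code", "target"], value_vars=features,
                        var_name="var_name", value_name="value")
long_train["var_name"] = long_train["var_name"].astype("category")
# roughly 200000 * 200 = 40,000,000 rows and 3 model columns once the per-variable
# value count (computed on train + real test) is merged in; LightGBM is trained on
# [value, count, var_name] and per-customer scores are recovered with
# preds.groupby("ID_code")["pred"].prod()
</code></pre>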
Santander Product Recommendation
#1 solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Product Recommendation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks Tom. Congrats to you as well.</p> <p>For this competition I used an i7-5820k, gtx-970, and 64gb ram. Most of the models ran overnight, the longest one took around 10 hours. The ram was the biggest bottleneck for me since all the data prep was done in R.</p> <p>This solution is an ensemble of 12 neural nets and 8 GBMs with several hundred features, so I'll try to stick to the highlights. Please let me know if I inadvertently gloss over anything interesting.</p> <ul> <li><strong>Features</strong></li> </ul> <p>For the most part, these are similar to what people mentioned on the forums: lags of products, time since presence of products, average of products, time since last purchase of products, etc.</p> <p>Some features which I have not seen mentioned elsewhere were time since change and lags for a few non-product attributes: segmento, ind_actividad_cliente, cod_prov, canal_entrada, indrel_1mes, tiprel_1mes. </p> <p>Features which seemed to hurt individual submodels but made their way into the ensemble anyway were historical averages of products segmented by combinations of: canal_entrada, segmento, cod_prov.</p> <ul> <li><strong>GBM Models</strong></li> </ul> <p>These submodels are all similar to each other but slightly different from the popular approach on the forum. They are 17-class multinomial models targeting the 16 most popular product additions. The remaining 17th class indicates either no additions or an addition of one of the eight remaining products. </p> <p>As described elsewhere, multiple product additions are handled by adding duplicate rows with different targets. These duplicate rows are weighted by the reciprocal of the number of added products. </p> <p>Adding the 17th "no addition" class expands the amount of training data substantially. The largest submodel covers Jun-15 to May-16 and weighs in around 10 million rows. LightGBM is key to building these models in an expeditious manner.</p> <p>Differences between submodels are primarily due to training on different time frames and using varying combinations of features.</p> <ul> <li><strong>NN Models</strong></li> </ul> <p>Rather than target the addition of a product, these submodels target the presence of a product in a given month. They are ambivalent as to whether the product is new or whether the customer has carried the product all along.</p> <p>These models are multi-target rather than multinomial. They target a length-16 vector of the more popular products and are trained on all customers regardless of whether they added a product.</p> <p>The structure of all the nets is the same. They have an input layer, two hidden layers of 512 nodes, and the 16-node output layer. The largest training set here is also around 10 million rows but Keras made these nets easy to set up and relatively quick to build.</p> <p>Differences between these models are again on time frames and features as well as multiple runs with different seeds.</p> <ul> <li><strong>Post Processing</strong></li> </ul> <p>Where applicable, each submodel is scored once as Jun-16, once as Jun-15, and once as Dec-15. 
By "score as" I mean that fecha_dato is incorporated into the models by converting it to numeric, 1 to 18. Modifying fecha_dato on the test set causes us to "score as" a different month.</p> <p>Generally speaking, the Jun-16 scores for each submodel are retained with the exception of the reca score, which is replaced by the Jun-15 score, and the cco score, which is replaced by the Dec-15 score.</p> <p>Submodel scores are set to zero when the customer had a product in the previous month. The individual product scores are then baselined to level of public leaderboard. This is accomplished by multiplying by the ratio of the average product score to the value obtained on the public leaderboard by submitting only that product.</p> <p>The final ensemble is a weighted average of the submodels with weights obtained from leaderboard feedback.</p> <hr> <p>Thanks for reading and thanks to Santander and Kaggle for making this competition happen.</p> <p>@sh1ng</p> <p>If a person added n products, then the weight is 1/n for each of the duplicate rows. One idea behind the choice of weight is that the map@7 also weights the individual products by 1/n when there are multiple additions. I never tested it, so I can't say whether the weighting improved results or made them worse.</p>
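<p>As a small illustration of the duplicate-row handling described above (my sketch, not the author's code), this is how the 1/n row weights might be built with pandas; the column names are assumptions.</p> <pre><code>import pandas as pd

def explode_additions(additions):
    """additions: one row per (customer, month) with a column 'added' holding
    the list of products added that month. Returns one training row per added
    product, weighted by the reciprocal of the number of additions, mirroring
    how MAP@7 weights multiple additions."""
    rows = additions[additions["added"].str.len().gt(0)].copy()
    rows["n_added"] = rows["added"].str.len()
    rows = rows.explode("added")               # duplicate the row once per added product
    rows["weight"] = 1.0 / rows["n_added"]     # reciprocal weighting
    return rows.rename(columns={"added": "target"})
</code></pre>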
Santander Product Recommendation
#11 Solution and Code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Product Recommendation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi,</p> <p>We were quite late to the party, with our first 0.03+ submission just 8 days back. But, since then, we've been working day and (mainly) night, and are quite satisfied with the insights, models, progress and final results we managed to achieve.</p> <p>Here are some key points of our approach:</p> <ul> <li><p>We modelled only 19 of the 24 target variables. Removed ahor, aval, deco, deme and viv, completely from the data.<br><br></p></li> <li><p>We had two models, one using June-15 data, the other using May-16 data. This was key for us. The idea behind it was to capture seasonality using June and to capture trend using May.<br><br></p></li> <li><p>We faced an interesting situation, where SRK's model on only new products always performed a bit worse compared to my model on the entire data. We couldn't really figure this part out, so we had to stick with my 'much longer' modelling time, unfortunately. I think we missed something here.<br><br></p></li> <li><p>Main features for June model: Lag1 to Lag5 target variables (in the form of historic averages), raw features as numeric or label-encoded, count of products owned in previous months, 19-digit binary string of products owned in previous month, binary flag for new products. We had some other small minor features, but they were negligible.<br><br></p></li> <li><p>Additional features for May model: Lag6, Lag9, Lag12 target variables (in the form of historic averages).<br><br></p></li> <li><p>Our 10-fold CV scores moved in sync with the LB, so we kept faith in it. Bagging or parameter tuning did not give any improvement, so we spent most our time in feature engineering.<br><br></p></li> <li><p>We did a very 'unnatural' form of ensembling the two models, which pushed our score from 0.0303 to 0.0306.<br><br></p></li> </ul> <p>While exploring the data, we found a glaring mix of seasonal and trend components.</p> <p>It was evident that certain products have a very different distribution in June-15 compared to Jan-15 to May-15, so we termed them as ‘seasonal products’ and felt the seasonality model would do better, and certain products have a very different distribution in Jan-16 to May-16 compared to Jan-15 to May-15, so we termed them as ‘trend products’ and felt the trend model would do better.</p> <p>We tested this by adding 50% of May-16 data to our June model and sure enough, we went from 0.0301 to 0.0303. Then, we built separate models for Jun and May, but the ensemble didn’t work. We weren’t surprised because June data is better for seasonal products, and May data is better for trend products. And vice-versa, June data is bad for trend products and May data is bad for seasonal products. So, they sort of cancelled each other out.</p> <p>And that’s when SRK made a magical suggestion of ensembling the two models using different weights for each product individually. I was skeptical and unsure since this leads to the sum of probabilities of a customer across all products not equal to 1. 
But I guess since it is a recommendation challenge where we’ve to pick the ‘top’ products, this form of ensembling worked wonders.</p> <p>Eg: cco and reca show sharp spikes in Jun-15, so we gave higher weightage to the seasonal model over the trend model for these products. Similarly, dela and fond substantially decrease over the months upto May-16, so we gave higher weightage to the trend model over the seasonal model for these products.</p> <p>Essentially, it helped the model give better weightage to seasonal and trend products.</p> <p>You can find a complete working code here: <a href="https://github.com/rohanrao91/Kaggle_SantanderProductRecommendation">https://github.com/rohanrao91/Kaggle_SantanderProductRecommendation</a></p> <p>I’ve tried to squeeze in most of our main features in a single script for convenience and this scores ~ 0.03102 on the private LB (ranked ~ 11th or 12th).</p> <p>Forgive me for the exorbitant number of times I’ve used the word ‘prev’ in the code :-) Feel free to post any queries here or on GitHub, we’ll be happy to respond.</p> <p>Thanks to SRK for teaming up, and it was a fun contest to collaborate and work on together. Thanks to Kaggle and Santander for the dataset and hosting this competition.</p> <p>A little disappointed on being bumped out of Top-10, but it seems like <a href="https://www.kaggle.com/c/santander-product-recommendation/forums/t/26562/leaderboard-shakeup-with-map?forumMessageId=151709#post151709">raddar wanted it more</a> than us, and I was just praying for a Top-13 finish so that I get my 5th gold and become a GrandMaster.</p> <p>So I guess everyone goes home happy :-)<br> Kudos to all the wonderful Kagglers who make the forums so resourceful. Wish you all Merry Christmas and Happy Holidays!</p> <p><strong>P.S.</strong> Interesting to see that at the time of posting this, the teams who have shared their solutions are #5, #7, #9, #11, #13. So, is it 'odd' to share your solution? ;-)</p> <p>[quote=RB;152147]</p> <p>Your code is quite short and neat, thanks for sharing and congratulations! </p> <p>[/quote] Thanks! One of the main threads of this competition was 'Less is More'... I copied the same theme in my code :-)</p>
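<p>The per-product ensembling of the seasonal (June-15) and trend (May-16) models described above can be summarised in a few lines; this is a simplified sketch and the weights shown are made-up examples, not the exact values used by the team.</p> <pre><code>import pandas as pd

def blend_per_product(pred_june, pred_may, seasonal_weight, default=0.5):
    """Weighted blend of the June-15 (seasonal) and May-16 (trend) models.
    pred_june, pred_may: DataFrames (customers x products) of probabilities.
    seasonal_weight: dict mapping product to the weight given to the seasonal model."""
    blended = pd.DataFrame(index=pred_june.index)
    for product in pred_june.columns:
        w = seasonal_weight.get(product, default)
        blended[product] = w * pred_june[product] + (1.0 - w) * pred_may[product]
    return blended

# Illustrative weights only: seasonal products lean on the June model,
# trend products lean on the May model.
# weights = {"ind_cco_fin_ult1": 0.8, "ind_reca_fin_ult1": 0.9,
#            "ind_dela_fin_ult1": 0.2, "ind_fond_fin_ult1": 0.2}
</code></pre>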
Santander Product Recommendation
#13 solution sharing
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Product Recommendation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>My solution was focused mainly around feature engineering. I did very little when it came to tuning model hyperparameters, ensembling, post-processing or advanced data selection.</p> <p><strong>Features</strong></p> <p>Product related - for each of the 24 given products:</p> <ul> <li>status of the product in the previous 3 months (0, 1)</li> <li>indicator if the product was added, dropped or unchanged (1, -1, 0)</li> <li>number of months the product was in its last-month state. If you are in 2016-05 and a product was 0 from 2016-01 to 2016-04, then this feature would be equal to 4 (see the code sketch at the end of this post).</li> <li>number of months the product was 1 and 0 in the last 5 months.</li> </ul> <p>Customer related:</p> <ul> <li>almost all given numerical and categorical features. For categorical features I factored them rather than one-hot encoding.</li> <li>different groupings for renta and age. For example, the renta ratio to the mean renta in country and province, age groups, and the renta ratio within certain age groups.</li> </ul> <p>Others:</p> <ul> <li>month indicator with 0-17 base and real month id 1-12.</li> <li>decimal representation of the previous month's product profile to capture patterns.</li> </ul> <p><strong>Things that didn't work</strong></p> <ul> <li>Doing any advanced cleaning of the data, like imputing with mean or median and other methods, had very little effect, and in the end I dropped almost all cleaning and stuck to the original data. Of course I did basic things like replacing the string NA with np.NaN and then replacing NaN with zero before training.</li> <li>I could not get any value from fecha_alta even though it gave a good boost in cross validation.</li> <li>Ensembling or stacking. I did bag my predictions and this seems to improve the score and most likely makes it more stable (run xgb with the same params but different seeds and average the predictions).</li> </ul> <p><strong>Training</strong> I used xgb and did training on the following months: April, May and June 2015 and March, April and May 2016.</p>
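<p>A minimal sketch of the "number of months the product was in its last-month state" feature referenced above; the layout of the history frame is an assumption for illustration.</p> <pre><code>import pandas as pd

def months_in_last_state(history):
    """history: DataFrame for one customer, sorted by month, one 0/1 column per
    product. Returns, per product, how many consecutive past months the product
    stayed in its most recent state."""
    out = {}
    for product in history.columns:
        values = history[product].tolist()
        last, run = values[-1], 0
        for v in reversed(values):
            if v == last:
                run += 1
            else:
                break
        out[product] = run
    return out

# Example: a product that was 0 from 2016-01 to 2016-04 gives 4, as described above.
# hist = pd.DataFrame({"ind_cco_fin_ult1": [0, 0, 0, 0], "ind_reca_fin_ult1": [1, 1, 0, 0]})
# months_in_last_state(hist)  # {'ind_cco_fin_ult1': 4, 'ind_reca_fin_ult1': 2}
</code></pre>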
Santander Product Recommendation
#5 Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Product Recommendation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>[quote=Mikelangel;152105]</p> <p>Hi Jared ! Thanks for posting ! Your code is very big but it allowed me to better understand it , so thanks again ! I'm trying to understand the logic behind it ^^ </p> <p>Just a question ? Why there are so many line with " # " ? Those lines are useless? You put "#" to remind to yourself what you did before any improvement maybe ? </p> <p>Thanks for answering me ;) and thanks again for your knowledge sharing ! </p> <p>[/quote]</p> <p>Yeah, basically it is to remind me that I tried this already and it didn't help. Or some of the comments are I looked at the result of this code, but having seen it once I don't need to see it when I rerun the code. I have not had time to clean the code up yet, when I clean up the code I'll remove the comments.</p> <p>I will describe my part of the #5 solution, for BreakfastPirate's part see: <a href="https://www.kaggle.com/c/santander-product-recommendation/forums/t/26786/solution-sharing">https://www.kaggle.com/c/santander-product-recommendation/forums/t/26786/solution-sharing</a></p> <p>The challenge of this dataset was to capture both the seasonality and the over-time trends in the data. Specifically reca and cco products were seasonal, while there were trends in some of the other products. To capture this trend I created separate xgboost models for different subsets of the data, and then combined them with a weighted average of the probabilities based on lb feedback. Roughly half the final ensemble came from models trained on just june 2015, about 30% was from the entire time range, 10% from the last 3 months and 10% from 12/2015. I also found the number of active months a customer was present for to be an important feature, and weighted the models differently depending on this (months active is the difference between the current month and the first month a customer joined).</p> <p>In terms of building the xgboost models, I added a few features which helped. The lag features have already been discussed, I also added a feature which was the average of each of the products in the past. To save time I trained and tested only on data that bought a new product, and only for the top 17 products. If a customer purchased multiple products I selected one at random. My computer only has 8 gb of ram, so this step was necessary for me to be able to compete (a big thanks to BreakfastPirate's forum post early in the competition). I also added features such as the first month a customer was in the dataset (customers present all months were much more likely to have tjcr), the number of months a customer had been active for (customers only active 1 month were much more likely to purchase ctma and cco). Another thing which inexplicably helped quite a bit, consistently both on cv and on lb, was to average probabilities from xgboost models trained with multi-output and trained one product at a time (which allowed products which were purchased in previous month to be excluded from dataset). One interesting thing I tried was removing some features such as renta and antiguedad, the effect of this was not clear. 
I also looked at things such as max_age, min_age, min_antiguedad, whether or not antiguedad changes and so on, but none of these things conclusively helped.</p> <p>I also utilized lb feedback to update the average odds of the different products. The manner in which I did this was a bit of a hack because the distribution of people who buy products is quite different from that of the majority of customers who don't buy new products. So in order to update only the relevant odds, I looked at the mean predictions of the models on May 2016 customers who purchased a product, when I lied to the model and told it that these customers had fecha_dato_month of 6. The main effect of this was to boost reca and cco, and to cut down on dela. This procedure provided large gains on lb, of about 0.0002.</p> <p>To combine with BreakfastPirate we used a weighted average of the probabilities, with the weights determined mainly by lb feedback. In addition we weighted BreakfastPirate's model much more heavily when he predicted a high probability for ind_nomina or ind_nom_pens.</p> <p>My code is available at: <a href="https://github.com/jturkewitz/SideProjects/tree/master/Kaggle/Santander_Prod">https://github.com/jturkewitz/SideProjects/tree/master/Kaggle/Santander_Prod</a>, but it is not cleaned up yet and is a mess. Thanks to Santander and all the contestants who made this competition so interesting. It is always nice to have a competition where people with limited hardware can do well.</p>
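<p>The leaderboard-calibration hack described above can be sketched roughly as follows; this is my paraphrase under assumptions (a fitted multiclass model with predict_proba and an explicit month feature), not the author's code.</p> <pre><code>def mean_scores_as_june(model, X_may_buyers, product_names, month_col="fecha_dato_month"):
    """Score May-2016 buyers as if the month were June (the 'lie' described
    above) and return the mean predicted probability per product. These means
    can then be compared against leaderboard-probed product odds to rescale
    the average odds of products such as reca, cco and dela."""
    X = X_may_buyers.copy()
    X[month_col] = 6                   # pretend fecha_dato falls in June
    probs = model.predict_proba(X)     # shape (n_rows, n_products)
    return dict(zip(product_names, probs.mean(axis=0)))
</code></pre>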
Santander Product Recommendation
#7 solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Product Recommendation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all winners!</p> <p>Thanks to Breakfast Pirate for sharing the June approach and Panos for the information about the target structure.</p> <p>My result is based on an ensemble of a few models. The main idea was to fit the target structure (product proportions in June 16, determined by June 15 + trends and confirmed by Panos - thanks again).</p> <p>The main model is a weighted sum of the probabilities of December+January and June+December+January with weights .3 and .7. Those months have the maximum correlation with the target structure (especially without taxes). I decided to mix these 2 models together on the last day and didn't tune the weights properly due to a lack of submission attempts.</p> <p>In the main model I inserted probabilities for taxes (reca) from a single June 15 model, because this month is the best for taxes, but not good enough for the other products.</p> <p>Also I built a separate model for customers who are not new but have zero products in the previous month. This group is less affected by seasonality and taxes, and the last 6 months give a better score (checked on different months of 2016). I inserted the results for this group into the final result.</p> <p>And one more model :). I found that e-account was more popular in 2016. I built a model based on the last 4 months to determine the probabilities of e-account only. I increased the probabilities of e-account in the main model if the individual model had a probability of more than .35 (this could be tuned better). For other products this trick was not successful.</p> <p>Mostly I used 5 lags of products + max flags for each product over the customer's whole history, 5-month sums of product flags, frequencies of combinations of all products in previous months + some client features.</p> <p>I cleaned the data before training - restored some clients' missing variables in some months using the next non-missing values for each customer. Also I significantly reduced spikes in pension payments. 3 months (March 15, May 15 and Jan 16) had a wide drop in nom_pens and nomina. This led to a high level of "new" products in the following months. My guess is it was due to calendar factors - regular payments were made in the next or previous month because of holidays or something similar. I filled these months with 1 for customers who had these products constantly before and after (previous 2-5 and next 6 months).</p> <p>Because of the decrease in interest rates for deposits in the EU, I removed nearly 90% of deposits as new products before training (less for VIP clients) and all short-term and medium-term deposits at prediction time.</p> <p>Last trick - I switched the order in submissions for nom_pens and nomina if they were predicted together in the other order.</p> <p>This competition helped me learn many ways to fit a training model for seasonality. It was very hard and interesting for me. I hope my information will be useful for somebody else.</p>
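<p>One way the e-account adjustment described above could look in code (an interpretation, not the author's implementation - "increase" is read here as taking the larger of the two probabilities):</p> <pre><code>def apply_ecue_override(main_probs, ecue_probs, threshold=0.35):
    """main_probs: {customer_id: e-account probability} from the main model.
    ecue_probs: probabilities from the dedicated last-4-months e-account model.
    Where the dedicated model is confident (above the threshold), boost the
    main model's probability."""
    out = dict(main_probs)
    for cust, p in ecue_probs.items():
        if p > threshold:
            out[cust] = max(out.get(cust, 0.0), p)
    return out
</code></pre>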
Santander Product Recommendation
#8 solution code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Product Recommendation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Moved notes to top post.</p> <p>This is my part of the code, which produces a 0.0306377 submission on public, about the best single model of our team: </p><p> <a href="https://github.com/yaxinus/santander-product-recommendation-8th-place">https://github.com/yaxinus/santander-product-recommendation-8th-place</a> </p><p> Some notes:</p><p> Data preprocessing:</p><p> Nothing worked here - we tried to impute missing months for customers, fix age and antiguedad, and fill 2014 to make lags older than 5 months work, but it didn't help.</p><p> Features:</p><p> Nothing special, ordinary features, lags up to 5 and aggregates over the lags (min, max, std etc.). We couldn't make older lags work.</p><p> Base models and merging:</p><p> We made a set of LightGBM and XGBoost mlogloss models with different sets of features, weights and params. Then we merged the base models with a greedy algorithm which starts with the probabilities from some base prediction and attaches the next one with the lowest Pearson correlation. This method worked a little better for us than just merging lots of base predictions.</p> <p>[quote=Sameh Faidi;151981]</p> <p>Thanks for sharing. Did you measure how much gain you got from using weights in xgb for example?</p> <p>[/quote] About 0.1%. My teammate found that if we have two customers, the first one added products [1,2] and the second [3], then predicting 1 for both gives map=0.25 while predicting 3 gives map=0.5. We couldn't figure out a mathematically grounded weight, but from our experiments w=1/exp(1/count-1) seems to be a good choice (equivalently, count*(1-log(w))==1).</p>
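<p>The greedy, correlation-based merging of base predictions described above might look like this simplified sketch (equal-weight averaging is an assumption; the team's actual weights and parameters differed):</p> <pre><code>import numpy as np

def greedy_merge(base_preds, start_idx=0):
    """Start from one base prediction and repeatedly average in the remaining
    candidate with the lowest Pearson correlation to the current blend.
    base_preds: list of 1-D numpy arrays of probabilities."""
    blend = base_preds[start_idx].astype(float).copy()
    remaining = [p for i, p in enumerate(base_preds) if i != start_idx]
    n_used = 1
    while remaining:
        corrs = [np.corrcoef(blend, p)[0, 1] for p in remaining]
        pick = int(np.argmin(corrs))                        # least correlated candidate
        blend = (blend * n_used + remaining.pop(pick)) / (n_used + 1)
        n_used += 1
    return blend
</code></pre>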
Santander Product Recommendation
#9 solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Product Recommendation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>It's just the previous month's value - whether an ncodpers had a product in that month or not.</p> <p>Let's say your observation date is 2016-01-28, then:</p> <p>lag1 - product value in month 2015-12-28</p> <p>lag2 - product value in month 2015-11-28</p> <p>...</p> <p>lag12 - product value in month 2015-01-28</p> <p>EDIT: forgot to mention that these lags are calculated for every product, so in total 12*24 new features for our 1) model.</p> <p>First of all, we modelled only the top 15 largest products based on Panos' public LB scores. Our approach considered 3 models only.</p> <p>1) mlogloss model for all 15 products for the &gt;=2016-01 period, using lag1-lag12 features</p> <p>2) separate model for the "cco" product on months 2015-06-28 and 2015-12-28, using lag1-lag5 features</p> <p>3) separate model for the "reca" product on month 2015-06-28, using lag1-lag5 features</p> <p>Then the cco and reca models were combined with the 1st model's predictions by taking the maximum probability for each of the "cco" and "reca" products. Such an approach helped to adjust both to the seasonal component and the general trend for these products.</p> <p>And finally some LB-based ad hoc calibrations for the "cco" and "reca" products, which were very significant to our score; calibration was done on logit transformations.</p> <p>For me the most interesting observation was that non-product-related features were almost irrelevant (except probably age and renta) - the score is 95%+ based on previous product usage.</p> <p>[quote=Jacques Peeters;151843]</p> <p>Personally i was using not raw lag features but "From how long a product is possessed?" (NA if it isn't possessed right now) and "Frow how long a product was cancelled" (NA if it was never possessed or possessed right now) therefore i had only 2*24 new features. And i feel like it sums up the same information in a more condensed way. </p> <p>Thought? </p> <p>[/quote]</p> <p>This is what I thought could work as well. But then I took a look at the "tjcr" product, which showed quite random patterns. As this product represented a big part of new purchases, I let ML learn if there is some pattern in that alternating behaviour. Your features ignored that fact :)</p> <p>[quote=Pallavi;151854]</p> <p>@raddar, Congratulations on great ranking! Could you please explain, how did you combine the model results of 3 models as your 2nd model is only for cco and 3rd for reca products? </p> <p>[/quote]</p> <p>I have 2 probabilities for the product "cco" (from the models in steps 1 and 2). I simply take the maximum value. Repeat the same with models 1 and 3. All other products (not in steps 2 &amp; 3) have probabilities from model 1.</p> <p>[quote=Andrey Akhmetov;151846]</p> <p>@raddar, were you using users who has all 12 month history? if not how did you fill the gaps?</p> <p>[/quote]</p> <p>I used all customers. Ones who had no 12-month history would have NA values for the months when they were inactive.</p>
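<p>The "take the maximum probability" combination of the main model with the dedicated cco and reca models can be written in a couple of lines; the DataFrame layout and column names below are assumptions for illustration.</p> <pre><code>import numpy as np

def combine_with_seasonal(main_pred, cco_pred, reca_pred):
    """All inputs are DataFrames of probabilities indexed by ncodpers.
    For cco and reca, keep the larger of the main-model and dedicated-model
    probabilities; all other products come from the main model."""
    combined = main_pred.copy()
    combined["ind_cco_fin_ult1"] = np.maximum(
        main_pred["ind_cco_fin_ult1"], cco_pred["ind_cco_fin_ult1"])
    combined["ind_reca_fin_ult1"] = np.maximum(
        main_pred["ind_reca_fin_ult1"], reca_pred["ind_reca_fin_ult1"])
    return combined
</code></pre>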
Santander Product Recommendation
2nd Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Product Recommendation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi there,</p> <p>Please read the <a href="https://ttvand.github.io/Second-place-in-the-Santander-product-Recommendation-Kaggle-competition/#baseModelComb">relevant section</a> in my blog and be enlightened. I simply take a weighted sum of the base model predictions. Leaderboard probing is used in combination with my best guess of generalizability to select the weights. The <a href="https://ttvand.github.io/Second-place-in-the-Santander-product-Recommendation-Kaggle-competition/#postProcessing">post-processing section</a> explains how I handled product dependencies.</p> <p>I hope this answers your questions!</p> <p>Hi there, thanks for the invitation!</p> <p>Anyway, we are on a mission to solve intelligence and that's sufficiently challenging for now so I'll respectfully skip the competition :)</p> <p>Hi all,</p> <p>I have written a <a href="https://ttvand.github.io/Second-place-in-the-Santander-product-Recommendation-Kaggle-competition/">blog post</a> on my solution which gives a little more background on all the steps to get to my final submissions. Any feedback is more than welcome!</p> <p>Thanks @smota and @Prakul! Will I really become a Master? The progression rules say that it now takes two silver medals and a gold but I found some <a href="https://www.kaggle.com/bochen0909">counter</a><a href="https://www.kaggle.com/devinanzelmo">examples</a>. Any further info would allow me to sleep at night :)</p> <p>I'll share the code after some cleaning and confirmation from Santander or an admin that the code can be shared.</p> <p>Thanks Alan! positive flank = new product, negative flank = dropped product.</p> <p>Congratulations to all contestants, you made this another great competition where I learned a lot from the amazing knowledge sharing. Idle_speculation was substantially ahead of the rest in the end on the private leaderboard and I look forward to learn about his approach. The fight with Jack for the second spot was decided by a coin toss so I hope luck will be on his side in the next competition. Thank you Santander for hosting this cool competition. The funny thing is that I would never have participated had I not quit my pilgrimmage on the famous Spanish "Camino del Norte" because of food poisoning in... Santander :-D</p> <p>I am very happy with the second spot but I would be lying if I said that I hadn’t hoped for more for a long time. There was a large gap between first and second for several weeks but this competition lasted a couple of days too long for me to secure the top seat. I was able to make great progress during my first 10 days and could only achieve minor improvements during the last four weeks.</p> <p>In this topic I intend to discuss how I approached the feature engineering, the base model building and combination of the models as well as the major post processing rules of the predictions.</p> <p><strong>Feature Engineering and exploratory analysis</strong></p> <p>Most of the valuable features are present in the lag information of previously owned products. 
I posted a basic version of my exploratory analysis in an earlier <a href="https://www.kaggle.com/c/santander-product-recommendation/forums/t/25831/interactive-exploratory-analysis-using-a-shiny-app?forumMessageId=146781#post146781">post</a> and have now uploaded the <a href="https://tvdwiele.shinyapps.io/Santander-Product-Recommendation/">full version</a> which allows a richer analysis and can for example be used to study the fascinating age distribution of users that buy nom_pens in a certain month but don't buy nomina. I added lagged features of all products and also included features of the number of months since the (second) last positive (new product) and negative (dropped product) flanks. The counts of the positive and negative flanks during the entire lag period were also added as features for all the products as well as the number of positive/negative flanks for the combination of all products in several lags. One interesting observation was the fact that the income (renta) was non-unique for about 30% of the user base where most duplicates occured in pairs and groups&lt;10. I assumed that these represented people from the same household and could result in valuable features since people in the same household could show related patterns. Sadly all of the features I tried added little value.</p> <p><strong>Base models</strong></p> <p>I built xgboost models for all 24 products for all of the 16 months that showed positive flanks (Feb 15 - May 16). <strong>My main idea here was to use all the available data, both in the features as well as in the positive flanks</strong>. This means that the models trained on for example June 15 were trained using 5 lag months of data and the models of May 16 used 16 lag months of data. Initially I was using a "marginal" model to calculate the probability of any positive flank and a "conditional" model to calculate the probability of a product positive flank given at least one positive flank but I found that I got consistently better results when modeling using all of the data (the "joint" model).</p> <p>The hyperparameters were decided based on the number of training positive flanks, the more positive flanks I observed in the train data, the deeper the trees.</p> <p>All models were built using all of the train data as well as on 10 random folds. I tried several ways to stack the base model predictions but it seemed that the pattern differences were too variable over time for the value of stacking to kick in compared to a weighted average of the base model predictions.</p> <p>I tried to bootstrap the base models but this gave results that were consistently worse. I am curious if anybody got bootstrapping to work in this setting.</p> <p><strong>Base model combination</strong></p> <p>The base models from the previous section were fit to the test data where the number of used test lags was set to the number of lags in the train data. Most weight was given to June 15 but other months all contained valuable information too although I set the weight to zero for products where the patterns changed over time. In this respect it was a good idea to look at the data interactively. The <a href="https://tvdwiele.shinyapps.io/Santander-Product-Recommendations-MonthModels/">second shiny application</a> compares the base model predictions on the test set for the most important products and also shows other base model related information such as the confidence in the predictions. </p> <p>Different weights were used for different products. 
Cco showed particularly different behavior in June and December 15 compared to other months and it seemed like a mixed distribution of typical cco patterns and end of (tax) year specific patterns.</p> <p>Some product positive flanks such as nomina and nom_pens mostly rely on information from the previous lag but product positive flanks like recibo get more confident when more lag information is available. Let's say for example that recibo was dropped in Oct 15 and picked up in Nov 15. The June 15 model would not be able to use this information since only 5 test lag months are used to evaluate the model on the test set (Jan 15 - May 15). In these cases I adjusted the probability to the probabilities of the models that use more data in some of my submissions.</p> <p><strong>Post-processing</strong></p> <p>At this point in the pipeline there are positive flank probabilities for all users and products. Four more steps were taken before ranking the products by probability:</p> <ol> <li><p>Confidence incorporation: Through a simulation study I was able to confirm my suspicion that in order to optimize the expected MAP, less confident predictions should be shrunk with respect to more confident predictions. I applied this in some of my submissions and this added limited but significant value to the final ensembles.</p></li> <li><p>Normalisation: The probabilities are transformed by elevating them to an exponent such that the sum of the product probabilities matches the extrapolated public leaderboard count. An exponential transformation has the benefit over a linear transformation that it mostly affects low probabilities. Here it was important to realise that the probed public leaderboard scores don't translate directly into positive leaderboard counts. Products like nomina which are frequently bought together with nom_pens and cno to less extent are thus more probable than their relative MAP contribution.</p></li> <li><p>Like BreakfastPirate mentioned: nomina is never bought without nom_pens if nom_pens was not owned in the previous month. I simply swapped their predicted probabilities if nomina was ever ranked above nom_pens and if both were not owned in the previous month.</p></li> <li><p>MAP optimization: Imagine this situation: cco has a positive flank probability of 0.3 and nomina and nom_pens both have a probability of 0.4 but they always share the same value. All other product probabilities are assumed to be zero. Which one should you rank on top in this situation? cco! Do the math :) I also closed "gaps" between nomina and nom_pens when the relative probability difference was limited. This MAP optimization had great effects in local validation (~0.2% boost) but limited value on the leaderboard. I guess this was mainly because of bias in the predictions. Anyway, trying many unrelated approaches is always good with the final ensemble in mind.</p></li> </ol> <p><strong>Ensembling</strong></p> <p>I submitted two ensembles: one using my last 26 submissions where the weighted probability was calculated and where the weights were calculated based on the correlation with the other submissions and the public leaderboard score. The second ensemble consisted of a manual selection of 4 of these 26 submissions that were again selected and weighted using their correlation with other submissions and public leaderboard feedback.</p> <p>Link to the code on <a href="https://github.com/ttvand/Santander-Product-Recommendation">GitHub</a>. 
Extended story on <a href="https://ttvand.github.io/Second-place-in-the-Santander-product-Recommendation-Kaggle-competition/">my blog</a>.</p> <p>Hi guys, I am currently cleaning my code and intend to share it as a Christmas gift :)</p> <p>The models took about three days to generate on my system but it can be reduced to about 8 hours if you don't consider the 10 folds.</p> <p>Below are my workstation specs, I added 32GB RAM for a total of 48GB: HP Workstation Z240 - Tower - 1 x Core i7 6700 / 3.4 GHz - RAM 16 GB - SSD 512 GB - HP Z Turbo Drive G2 - DVD SuperMulti - HD Graphics 530 - GigE</p> <p>Thanks @kos_. Making progress wasn't the frustrating part. Stagnating on the leaderboard and seeing the rest creep closer for close to a month while trying fresh ideas nearly every day was!</p> <p>Hi sh1ng, I did not do anything special to handle the imbalanced data. Bagged trees tend to do this for you. Log loss was my main criterion when I was looking at individual base models using cross-validation and I also considered what I refer to as "confidence" in the post above.</p> <p>Hi Lam,</p> <p>Good question, I calculated "confidence" as mean(prediction|actual positive flank)/mean(prediction|no positive flank) where prediction was calculated on the out of fold records in the 10-fold cross validation.</p> <p>Thanks @Rohan, that's very exciting news :-D Congratulations on the well deserved Grandmaster status!</p> <p>@Russ. You are partly right: I considered data that are flatlines of 0s but not the 1s since the probabilities are always set to 0 for products that were previously owned so they should also not be modeled. The "Test count" is proportional to the public leaderboard count. The "Test predictions sum" shows the summed predictions of the base models for the considered subset of the app data (only 50,000/929,615 due to memory constraints). Lag 5 for example refers to the model trained on June 15.</p> <p>@sh1ng: My base models indeed only consider one product at a time but I wouldn't call it "one versus all" but rather "one at a time". Since only one product is considered at a time it makes no sense to adjust the record weights. The base models are intended to predict the probability of new products for a given month. Optimizing the MAP score is handled later on in the pipeline.</p> <p>Initially I was building my models on records that have at least one new product (conditional models, these are indeed more like "one versus all"). To normalize the product importances I calculated the predicted number of new products as the product of the marginal month models (probability at least one new product) * conditional base model probabilities. The conditional probabilities were then rescaled exponentially such that the product of marginal*conditional was in line with the public leaderboard information.</p> <p>We had some celebration drinks too so we're probably both wrong :)</p> <p>Hey Alan,</p> <p>I do actually think we have a different view on the mean average precision in the example I quoted. </p> <p>Consider the simple problem where there are two products a1 and a2 that always occur together and the third product b. The probabilities are assumed independent.</p> <p>This leaves four possible scenarios: a1/a2 either have a positive flank or not and the same is true for b. The expected MAP for the order &lt; a1 a2 b&gt; is P(a) * P(b) * MAP(a and b) + P(!a) * P(b) * MAP(b) + P(a) * P(!b) * MAP(a). 
This simplifies to P(a) * P(b) + P(!a) * P(b) <strong>* 1/3</strong> + P(a) * P(!b)</p> <p>The ordering &lt; b a1 a2&gt; has an expected MAP of P(a) * P(b) * MAP(a and b) + P(!a) * P(b) * MAP(b) + P(a) * P(!b) * MAP(a). This simplifies to P(a) * P(b) + P(!a) * P(b) + P(a)*P(!b)<strong>*7/12</strong>.</p> <p>Working this out will teach you that the second ranking for P(a) = 0.4 is preferable if P(b)&gt;=0.294.</p> <p>I backed the calculation up with the attached simulation study</p> <p>[quote=Alan (AJ) Pryor, Jr.;152289]</p> <p>Another question. You said that for products that had changing patterns over time you set the weight to 0. Could you elaborate how you captured such trends if your models were trained on only one month each?</p> <p>[/quote]</p> <p>Alan, there's actually a <a href="https://github.com/ttvand/Santander-Product-Recommendation/tree/master/Validation/Predictions%20aligner/Shiny%20app%20-%20predictions%20aligner">third shiny app</a> I created to compare product predictions for the base models for a particular user. In the attachment you can see a snapshot of the application where I compare the weighted average for the validation period May 16 versus the adjusted weighted average that only considers base months where the predictions are similar to the predictions of the last months. These patterns occur when there's a positive flank in the beginning of the training period and not all models capture it since they all use different numbers of lags.</p> <p>There is an additional piece of logic in the last submission scripts that ignores base months that differ too much from the last months. This step takes place before the predictions are normalised.</p> <p>Hi everyone,</p> <p>The code is now available on <a href="https://github.com/ttvand/Santander-Product-Recommendation">GitHub</a>. I will provide instructions on how to use it to generate my submission files soon.</p>
Santander Product Recommendation
3rd place solution (with code)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Product Recommendation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>@syllogismos It means 'mean of the target (response) variables', that is, the ratio of people who purchased the product newly.</p> <p>@workingloong One of the reasons is to train each product on a different month of data, especially for cco_fin (Dec 2015) and reca_fin (Jun 2015). Also, I thought that soft_max is a little bit inaccurate for this problem because people can purchase multiple products at the same time. But in fact, I tried both approaches.</p> <p>@kweonwooj In this part of the code, I averaged the target variable (purchase of a certain product) in order to extract how frequent new purchases may be depending on categorical (factor) variables. I think it is more effective than substituting the index of lexicographic order. Sorry for the confusing description, but does it make sense?</p> <p>Thank you all for your comments and upvoting!</p> <p>@workingloong It means a combination of some variables. For example, if 'tiprel_1mes' of a certain month = "A" and that of the last month = "I", then the concatenation of them means "AI". Similarly, the concatenation of 20 products becomes like "10000001100001110001" as a string.</p> <p>Sorry for my late post, and congratulations to idle_speculation, Tom and everyone!</p> <p>There were many twists and turns, but I would like to explain my final approach here. I think that my approach is not so complicated, and it was run on my 8GB RAM laptop (but a little more RAM is recommended).</p> <p>The overview is illustrated in the attached figure, and the attached code makes the 'prediction based on 2016-05-28 data' in the figure, which scores 0.0308403 on the public LB and 0.0312074 on the private LB (6th).</p> <p><strong>Models</strong></p> <ul> <li>The probability of a new purchase of each product (denoted by Pr) was calculated by an xgboost model (for the 20 products excluding ahor_fin, aval_fin, deco_fin and deme_fin).</li> <li>Pr of cco_fin was predicted only by 2015-12-28 data.</li> <li>Pr of reca_fin was predicted only by 2015-06-28 data.</li> <li>Pr of the other 18 products was predicted using 2016-05-28, 2016-04-28, 2016-03-28, 2016-02-28, 2016-01-28 and 2015-12-28 data respectively.</li> <li>Pr of the other 18 products was also predicted using data from 2015-12-28 to 2016-04-28, using only records which have newly purchased products.</li> <li>Pr of ahor_fin, aval_fin, deco_fin and deme_fin was fixed to 1e-10.</li> </ul> <p><strong>Features</strong></p> <p>The number of used features was 142. 
Details are as given below.</p> <ul> <li>original features except for 'fecha_dato', 'ncodpers', 'fecha_alta', 'ult_fec_cli_1t', 'tipodom' and 'cod_prov' (18 features)</li> <li>concatenation of 'ind_actividad_cliente' and last month value of that (1 feature)</li> <li>concatenation of 'tiprel_1mes' and last month value of that (1 feature)</li> <li>last month values of 20 products (20 features)</li> <li>concatenation of above 20 features as a character (1 feature)</li> <li>the number of purchased products in the last month (1 feature)</li> <li>count of index change pattern (0 to 0, 0 to 1, 1 to 0 and 1 to 1) until last month of 20 products (80 features)</li> <li>length of continuous 0 index until last month of 20 products (20 features)</li> </ul> <p>Character (factor) variables are replaced with target mean in each modeling process.</p> <p><strong>Other remarks</strong></p> <ul> <li>I basically used '2016-05-28' data as a validation dataset, but when I used '2016-05-28' data as a train dataset, I used '2016-04-28' data as a validation dataset.</li> </ul> <p>Thank you for reading my clumsy post!</p> <p><strong>Edit: Replaced the code (twice) which makes training data because there were severe mistakes (now v3). Sorry!</strong></p> <p>Because some products have strong seasonal trend. Please see this post. <a href="http://blog.kaggle.com/2017/02/22/santander-product-recommendation-competition-3rd-place-winners-interview-ryuji-sakata/">http://blog.kaggle.com/2017/02/22/santander-product-recommendation-competition-3rd-place-winners-interview-ryuji-sakata/</a></p> <p>These product are purchased very rarely, and there are not enough positive training data. I thought that they have almost 0 probability to be purchased, so I fixed their probability to 1e-10 without constructing models of them.</p>
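<p>The target-mean replacement of factor variables mentioned above could be done along these lines (a simple sketch; the optional smoothing is my addition and the column names are placeholders):</p> <pre><code>import pandas as pd

def target_mean_encode(train, other, col, target, smoothing=0.0):
    """Replace a categorical column with the mean of the target per level,
    computed on `train` only and mapped onto both frames; unseen levels fall
    back to the global mean."""
    global_mean = train[target].mean()
    stats = train.groupby(col)[target].agg(["mean", "count"])
    enc = (stats["mean"] * stats["count"] + global_mean * smoothing) / (stats["count"] + smoothing)
    return (train[col].map(enc).fillna(global_mean),
            other[col].map(enc).fillna(global_mean))
</code></pre>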
Santander Product Recommendation
4th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Product Recommendation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>@BreakfastPirate - Thank you so much, I've learned a lot from you during this competition!</p> <p>@workingloong - I didn't pick a specific month, but rather used data from all of the available dates. An improvement (?) I could have made is to weight data from different months differently.</p> <p>@zhaokk just got on github, check out this commit - <a href="https://github.com/yoniko/xgboost/commit/4810da590ce7477444ef70f679fcd653817d2d09">https://github.com/yoniko/xgboost/commit/4810da590ce7477444ef70f679fcd653817d2d09</a>. As you can see it's a copy-paste of the rank:map function with some changes in GetGradient.</p> <p>The way I'm using it is as follows:</p> <ol> <li>Initialize my features as I normally would.</li> <li>Create a label set with num_classes*num_rows_in_training_data labels. Such that 0 is a negative label for MAP (shouldn't recommend), 1 is positive (should recommend) and &gt;1.5 is don't-care (does not affect the ranking score). As an example, this is the code I've used to create my training set:</li> </ol> <pre><code>def get_multi_rank_train_data(current_train):
    len_multi_train = len(current_train)
    nclasses = len(products)
    multi_train_y = np.zeros(len_multi_train*nclasses, np.int8)
    for i, product in enumerate(products):
        # Use 0 for products that shouldn't be recommended and 1 for those that would.
        # Add 2 to those that were previously present so that they will be considered as don't-care.
        multi_train_y[i::nclasses] = current_train[product]
        multi_train_y[i::nclasses] += 2*current_train[product+"_prev"]
    multi_train_x = current_train[features]
    return multi_train_x, multi_train_y
</code></pre> <ol start="3"> <li>Create an xgboost model instance as I normally would with the objective set to <code>multi:map</code>. Using the sklearn interface required some more changes which I didn't include here (don't have the time to make them reasonable :( ) - especially by making it not force settings on me (it usually chooses <code>multi:softmax/softprob</code> automatically and re-calculates the <code>nclasses</code> parameter).</li> </ol> <p>Hopefully I'll have more time in the next couple of days to clean up and post the code for my full solution.</p> <p>I'd like to start by saying that this is my first competition and it was a hell of a ride. Props to everyone competing and participating in the discussions, it's been a pleasure to compete and learn from you guys.</p> <p>During this competition I've tried multiple approaches, most of them have failed, my final best submission is actually pretty simple. I'll try to portray here what worked and what failed.</p> <p><strong>Data</strong></p> <p>After trying multiple approaches I've ended up using all the rows in which a product was added. 
In truth, most of my big public score bumps were just due to the fact that I've used more data (started with 200K rows, moved to 80% and then 100%).</p> <p>Dealing with this kind of data on my laptop and in a time efficient manner required smart usage and tricks with numpy alongside using numba when applicable, I've learned alot about both.</p> <p>Missing values were just left as-is (actually, based on a forum post, I've assigned -1 to missing values to free up space).</p> <p>I've tried multiple cleaning techniques during this competition, the thing that bothered me the most was the fact that it seemed like some user-ids were re-used (seniority suddenly drops to 0 or they would disappear for a few months after going inactive, etc..) while some of the data (mostly age and income) seemed to belong to the old client. Trying to clean these turned out to only hurt my score, I still don't fully understand why.</p> <p>One of the biggest problems I've had in this competition was having a reliable CV. Each time I thought I've figured something out, it turned out to be not consistent enough with the public. I ended up not having and just relying on my public score, I still feel sore about that.</p> <p><strong>Features</strong></p> <p>My most prominent features were:</p> <ul> <li>Exponential weighted average of each product's presence per client as time goes. I've used two different alphas - 0.5 and 0.1 (tried others, but these two seemed to be the best). The reason I chose EWM is that I wanted features that could hold some temporal meaning but that would at the same time portray long lasting effect, and I wanted to do it a way that is least susceptible to the amount of given data points each time it's generated (for example, do I have 3 or 12 months of data for a given client).</li> <li>Ratios of income to mean income of city, age group and join-channel.</li> <li>Month &amp; Year of current row (fecha_dato) - these ones are tricky as they might really hurt when overfitting. I've included them because I wanted the model to understand what's "special" in month 6 and how is 2016 different than 2015. This is due to the different trends observed in both. I feel like I've managed to achieve it.</li> </ul> <p>Additional features were mostly similar to what others have used:</p> <ul> <li>Lag products (didn't add much on top on the EWM).</li> <li>Lag previous categorial data.</li> <li>For every categorial data - was it changed since last month.</li> </ul> <p><strong>Model</strong></p> <p>I've implemented a custom objective function for XGBoost - "multi:map" which I ended up using for the last two weeks of the competition and my final submission. Basically, I wanted the LambdaRank with MAP objective function but using an interface similar to multiclass classification. The reasons being that using the normal rank:map objective required using groups, which basically meant I needed to multiply each row 24 times and assign some OHE to tell the model which product is being predicted (which also meant that I'd probably need my trees to be deeper). 
I've based my code on XGBoost's original rank:map implementation, but have changed the interface to receive row_num*n_classes labels so I could mark each row,class pair as pos/neg/don't-care (don't care was important as I truly didn't care about products that were present the month before).</p> <p>My best submissions (which I ended up not choosing for private LB) scored 0.0313112 and was one XGBoost model with the multi:map objective, 400 estimators, 0.05 learning rate and depth 6.</p> <p>I've also used simple ensembles (summing different models' values), those seemed to provide nice moderate bumps in public score. As for post-processing I've only used the nomina =&gt; nom_pens rule, giving nom_pens the score of nomina if it's higher.</p> <p><strong>Compute</strong> For most of this competition I've used a Macbook pro 15' with 16 GB Ram and a 2.5GHz Core i7. At the last two weeks of the competition I've also used EC2 X1 instances at times.</p> <p>Until next time, yoniko</p>
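<p>As a footnote to the exponential-weighted-average features listed above, here is one way they might be computed with pandas (a sketch under assumed column names; the shift avoids leaking the current month's value):</p> <pre><code>import pandas as pd

def ewm_product_features(df, product_cols, alphas=(0.5, 0.1)):
    """Exponentially weighted average of each product's past presence per
    client. Expects df sorted by (ncodpers, fecha_dato) with 0/1 product
    columns; the alphas follow the write-up."""
    out = pd.DataFrame(index=df.index)
    grouped = df.groupby("ncodpers")
    for alpha in alphas:
        for col in product_cols:
            out["%s_ewm_%s" % (col, alpha)] = grouped[col].transform(
                lambda s: s.shift(1).ewm(alpha=alpha).mean())
    return out
</code></pre>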
Santander Product Recommendation
Insights sharing
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Product Recommendation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This is more of a draft for the moment, but I will try to improve it further.</p> <p><strong>Model</strong> </p> <ul> <li>Neural net with category embeddings for canal_entrada, province and country.</li> <li>5-month historical product portfolios injected using a GRU layer. I didn't manage to use more than 5 months because June 2015 is too important. </li> <li>For the dense layers I tried 2 different architectures: a “normal” feed-forward net and a Densely Connected Net. I actually find that the Densely Connected Net allows for a much better (performance / # params) ratio but takes quite a bit longer to run. However, I did not fully tune both architectures, so this result is to be challenged. </li> <li>The output layer predicts purchases on all products independently using logloss.</li> </ul> <p><strong>(Too) Many interesting insights:</strong></p> <ul> <li>Historical product holdings contain patterns, as confirmed in other posts. I wonder however if some of the patterns are artefacts due to business practice or data management. These features allow us to get to ~0.03.</li> <li>Different months have different trends. We added months other than June to the train set with an indicator for each month. This is very effective to fight overfitting, at the cost of drastically increasing model complexity and train time. This made a LB gain of about ~0.0015.</li> <li>Many products have clear trends over time (ecue, deco, dela…). We discovered this by doing a cross-validation without month variables and then analyzing the residual between ground truth and predictions. This helps to correct for effects from other variables and shows the real trends. We used this insight by predicting these products with month variable = May 2016 instead of June 2015. This improved our LB score by around ~0.0025.</li> <li>In July 2016, many people just appeared although, based on fecha_alta, they are not new clients. They account for ~175K customers. These customers have a different behaviour compared to the other group. I think this is another database being concatenated to the baseline one, maybe another business line. I did not test the effect of isolating these people independently from other features.</li> <li>Based on fecha_alta, there is a subscription peak every year from July to Dec. Separating out people who joined in these periods, they have much lower activity than others. Adding an indicator variable to mark these customers improved my local cv a lot but made very little difference on the LB, only ~0.0005. This is still a mystery for me :)</li> <li>And finally, ensembling the xgboost from Guillaume and my nets gave ~0.002 on the LB. We did not finish tuning our ensembling so maybe it can get a bit further.</li> </ul> <p><strong>Other insights that we didn’t manage to exploit:</strong></p> <ul> <li>One of our main issues in the end was tjcr; this product is ranked 3rd on the leaderboard but 6th in our predicted probabilities.</li> <li>We didn’t use the correlations between products, e.g. recibo/nom_pens, or the complementarity of different account types.</li> <li>I found a bit less than 1000 customers with clear periodicity in recibo. It may give another tiny score boost.</li> <li>As said above, some products have clear trends in the last months. We just extrapolated the trend using the last observations, but ideally I think we could try some smoothing technique to further improve it.</li> </ul> <p>EDIT: I added the chart on temporal bias - the difference between ground truth and predictions with all variables except the month variable.</p>
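<p>For readers who want to see the shape of such a model, here is a minimal Keras sketch of the architecture described above (category embeddings plus a GRU over the 5-month product history, with independent sigmoid outputs trained with logloss). The cardinalities, embedding sizes and layer widths are illustrative assumptions, not the configuration actually used.</p>
<pre><code>import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

N_PRODUCTS = 24      # number of product flags (assumption)
N_MONTHS = 5         # length of the injected history
N_CANAL, N_PROV, N_PAIS = 160, 53, 120   # illustrative cardinalities

# categorical inputs, each mapped to a small embedding
inp_canal = layers.Input(shape=(1,), name="canal_entrada")
inp_prov = layers.Input(shape=(1,), name="province")
inp_pais = layers.Input(shape=(1,), name="country")
emb = layers.Concatenate()([
    layers.Flatten()(layers.Embedding(N_CANAL, 8)(inp_canal)),
    layers.Flatten()(layers.Embedding(N_PROV, 8)(inp_prov)),
    layers.Flatten()(layers.Embedding(N_PAIS, 8)(inp_pais)),
])

# 5-month product history fed through a GRU
inp_hist = layers.Input(shape=(N_MONTHS, N_PRODUCTS), name="product_history")
hist = layers.GRU(64)(inp_hist)

x = layers.Concatenate()([emb, hist])
x = layers.Dense(256, activation="relu")(x)
x = layers.Dense(128, activation="relu")(x)

# independent purchase probability per product, trained with logloss
out = layers.Dense(N_PRODUCTS, activation="sigmoid", name="purchases")(x)

model = keras.Model([inp_canal, inp_prov, inp_pais, inp_hist], out)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
</code></pre>
<p>In practice the categorical inputs would be integer-encoded columns such as canal_entrada, and the history tensor would hold the product flags of the last 5 months for each customer.</p>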
Santander Product Recommendation
Simple model solution: 0.0305221- Top 5%
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Product Recommendation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Lag for products was a binary indicator saying "Did I have this product in the last n months". For total_products, renta and age it was the minimum of the last n months. </p> <p>What do you mean by "changes in the products sets as features" ?</p> <p>Hi Kagglers !</p> <p>First, thanks to everybody and all the help that was shared on the forum.</p> <p>I wanted to expose my solution as it is a single xgboost model which got me quite far. I think, by its simplicity, that it can be interesting to some.</p> <p><strong>Overview :</strong> Single XGBoost model trained on May to December, lag5 &amp; lag3 for all products, lag-1 of consumer characteristics from the test file and... that's pretty much it.</p> <p><strong>1. Cleaning :</strong> Stripping of all columns, splitting of dates, removing consumers with missing rows and, once the featuring is done, those who will buy nothing. As XGBoost handles NA, I did not go any further.</p> <p><strong>2. Features :</strong> missing renta filled with the average of the province (Thanks Alan (AJ) Pryor, Jr. !), 'will_buy' column with the bought products of the next month, total products, Lag5 &amp; 3 of all products (did not have time to add all the lags, unfortunately), lag2 of age, renta, total_products and future characteristics from the next month.</p> <p>All categoricals were one-hot encoded, and their values with the least impact (canal_entrada &amp; nomprov) were aggregated into an "other" column instead of being ignored.</p> <p>-- Cleaning and feature generation were processed in chunks to allow fast iteration / testing</p> <p><strong>3. Fit</strong> : With May to December, 5-fold CV, mlogloss metric.</p> <p>So, pretty simple and, in the end, quite efficient.</p> <p>Special thanks to Alan (AJ) Pryor, Jr. !, Tom Van de Wiele and BreakfastPirate for their huge help on the forum</p> <p>No problem.</p> <ul> <li><p>"total_products is a sum of ones for each row ?" : yes</p></li> <li><p>"features Lag 5 and Lag 3 represent each 24 columns" : yes. The lags were to keep track of the evolution, at a particular time frame, of one particular column. So, I had 2 lagged columns per product (wanted to make 5 or more), and one lagged column for renta, total_products... </p></li> </ul> <p>Those lagged features &amp; learning on May to December made easily 95% of the performance of the model.</p> <p>@ Trottefox : The "will_buy" was what I wanted to predict; it was the positive difference of current and future owned products, concatenated as bits, split later if more than one product was added (to only have one product per line), then encoded to be fed to the model.</p> <p>@Andrey : I dropped the users with missing months. I did not have time to see whether this was relevant or not, but as my lagged features were done with a rolling max or min it seemed more correct like that.</p>
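<p>To illustrate the lag features described above ("did I have this product in the last n months" as a binary indicator, and the minimum of the last n months for renta, age and total_products), here is a minimal pandas sketch. The DataFrame layout and the two example product columns are assumptions; the real pipeline was processed in chunks.</p>
<pre><code>import pandas as pd

# df has one row per (customer, month); column names below are illustrative assumptions
product_cols = ["ind_cco_fin_ult1", "ind_recibo_ult1"]   # ... in practice all 24 product flags
num_cols = ["age", "renta", "total_products"]

df = df.sort_values(["ncodpers", "fecha_dato"])
g = df.groupby("ncodpers")

for n in (3, 5):
    for c in product_cols:
        # binary indicator: did the customer hold this product at any point in the last n months?
        df[f"{c}_lag{n}"] = (
            g[c].transform(lambda s: s.shift(1).rolling(n, min_periods=1).max())
            .fillna(0).astype(int)
        )
    for c in num_cols:
        # minimum value observed over the last n months
        df[f"{c}_min{n}"] = g[c].transform(lambda s: s.shift(1).rolling(n, min_periods=1).min())
</code></pre>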
Santander Product Recommendation
Some ideas from my solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Product Recommendation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, congratulations to the winners! And thanks to BreakfastPirate for sharing such valuable information in the middle of the competition!</p> <p>I won't go deep into the details of my solution, because it is only #14 on the private LB, but there are still some interesting ideas/insights which could be interesting for others.</p> <p><strong>Data:</strong></p> <ul> <li>June-2015 was the core month for my solution. But I found that (June-2015 + May-2016) together performed better - probably due to the seasonality + trends nature of the data. (June-2015 + Feb-2016) performed even better. I ended up with an ensemble over many different pairs of June-2015 + some other month.</li> <li>I also tried sampling train data from different months according to a desired distribution. This alone didn't perform as well, but still gave some points to my final ensemble.</li> </ul> <p><strong>Features</strong></p> <ul> <li>Mainly the same as mentioned by others (lag-5, products of month-1, products of month-2, etc). Probably this was my mistake, because I concentrated more on data selection and post-processing rather than on new unique feature engineering.</li> </ul> <p><strong>Models</strong></p> <ul> <li>Every run was made with one XGB model and one Keras NN model, ensembled together as a weighted average.</li> </ul> <p><strong>Post-processing</strong></p> <ul> <li>Set prediction of nom_pens to prediction of nomina + 0.00001, if prediction of nomina &gt; prediction of nom_pens.</li> <li>Set prediction of cno to (predictions(cno) + predictions(nom_pens)) / 2, if prediction of cno &lt; prediction of nom_pens.</li> <li>Reduced prediction of dela by -0.15 </li> <li>Increased prediction of reca by *1.4 </li> <li>Removed from predictions deme, viv and deco </li> </ul> <p>Post-processing actually gave quite a big boost (a small sketch of applying these rules is shown below), although I was very afraid of overfitting because of this. In the end, for the final submission, I picked one solution with maximal post-processing (and maximal public LB score) and another solution with medium post-processing (and a not so good public LB score). I don't know yet which one gave my best private LB score...</p> <p>Thank you for your attention and good luck in other competitions!</p>
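<p>To make the post-processing rules above concrete, here is a minimal pandas sketch. <code>preds</code> is assumed to be a DataFrame of per-customer purchase probabilities using the short product names from the post; this is an illustration of the rules, not the author's exact code.</p>
<pre><code>import pandas as pd

def post_process(preds):
    """Apply the hand-crafted adjustments described above to a table of product probabilities."""
    p = preds.copy()

    # nom_pens should never rank below nomina
    swap = p["nomina"] &gt; p["nom_pens"]
    p.loc[swap, "nom_pens"] = p.loc[swap, "nomina"] + 0.00001

    # pull cno up towards nom_pens when it ranks below it
    low = p["cno"] &lt; p["nom_pens"]
    p.loc[low, "cno"] = (p.loc[low, "cno"] + p.loc[low, "nom_pens"]) / 2

    p["dela"] -= 0.15                   # dampen dela
    p["reca"] *= 1.4                    # boost reca
    p[["deme", "viv", "deco"]] = 0.0    # effectively remove these from the top-7 ranking
    return p
</code></pre>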
Santander Value Prediction Challenge
#1 Solution: Leak Part
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Value Prediction Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi Kagglers!</p> <p>Since not everyone might be interested in the leak part of this competition, we've decided to split our solution into a part about ML and a part about leak hunting and various data quirks. </p> <p>I would like to thank Kaggle and Santander for such a thrilling competition. Also, huge thank you to Gilberto, who revealed the leak and decided to team up with such a Kaggle newbie like me :-) It was an amazing experience. Lastly, thanks to @Mohsin, @Raddar, @Paradox, @authman and all others for your insights about data and the leakage.</p> <h1>Leak</h1> <p>At the end of the competition we were able to find 113 full feature sets, 40 feature each. With those sets it was possible to find 3887 leaks in train set and 7886 leaks in test set. We have basically used a 'strict' method posted by Paradox, with small modifications to speed up computations and to test various ways of handling duplicates. Then, by manually inspecting groups in test set we have found additional 11 leaks, and corrected 9 previously found, so we had 7897 leaks in total.</p> <p>First thing in addition to finding the leak was to create row groups that formed the time series sequences. We can basically reuse the algorithm that finds the leak labels, changing the minimal lag form 2 to 1, to find the next row in each group. Here is an example result:</p> <pre><code> id1 id2 lag 1 10 8914 9 2 43 47478 2 3 48 28881 2 4 54 2266 1 5 59 17149 1 6 74 7407 13 7 82 36672 15 8 97 35417 1 9 102 44664 2 10 106 11330 1 </code></pre> <p>Now we can use the above data to recreate row groups. Example row group:</p> <pre><code> lag order row group ID 1 4 0 21165 0 6c1c5322d 2 7 1 24365 0 7d3e9a47d 3 3 2 35663 0 b8f888ff1 4 5 3 42105 0 da9596c6b 5 5 4 28058 0 90ccdf4ec 6 5 5 774 0 03ec5eed1 7 4 6 38461 0 c7bc88ec1 8 1 7 15479 0 4efbf5a9d 9 19 8 24577 0 7e4e44daf 10 3 9 22282 0 71cc2e16e </code></pre> <p>Below one of many datasets we created based on row groups to train many-to-many RNN model for next target prediction, since each row group contained one or two missing leaks:</p> <pre><code> id group ID pos leak l size leak_size train 1 26320 339 6fc89a914 1 NA NA 37 35 FALSE 2 26320 339 95d7a4d28 2 NA 1 37 35 FALSE 3 26320 339 a22dd3e47 4 14.98458 2 37 35 FALSE 4 26320 339 d15f4b2f9 6 12.46844 2 37 35 FALSE 5 26320 339 6a08d9f35 14 13.41005 8 37 35 FALSE 6 26320 339 cd63f6e00 25 14.87069 11 37 35 FALSE 7 26320 339 feba04e65 33 14.69098 8 37 35 FALSE 8 26320 339 440f622c5 41 13.81551 8 37 35 FALSE 9 26320 339 c3512cdcf 43 14.22098 2 37 35 FALSE 10 26320 339 e15d025ab 45 15.48070 2 37 35 FALSE </code></pre> <p>We utilized a simple brute force approach to find new feature sets. We try to extend the current set one-by-one with a feature that is an exact match on all row groups and has at least 1 nonzero value.</p> <p>Now, since to find the row groups, we merge on 1 position more for each feature set, there may be additional leaks that we did not find using the basic leak algorithm. Sadly, there turned out to be no new leaks in both train and test. 
However, in 9 test set groups, we found that there are row pairs, that seem to be 3 steps away from the target instead of standard 2. This resulted in a weird missing leaks in the middle of groups:</p> <pre><code> ID leak f190486d6 58e2e02e6 eeb9cd3aa 9fd594eec 15 5551b7e8c 1600000.0 0 4680000 0 0 16 f83685172 4000000.0 0 0 4680000 0 17 a304cde42 NA 0 0 0 0 18 d72fad286 320000.0 0 0 0 0 19 c45b36dca 980000.0 320000 0 0 0 20 3ab8ce131 553333.3 980000 1560000 320000 0 21 fd6b914e3 375000.0 0 980000 1560000 320000 22 20f660a4e 1210000.0 0 0 0 375000 </code></pre> <p>In the group above it happens for values <code>1560000</code> and <code>320000</code>. To fix this group we change leak label of <code>d72fad286</code> to <code>1560000</code> and <code>a304cde42</code> to <code>320000.0</code>. This change applied to an older blend improved in Private from 0.53679 to 0.52115.</p> <h1>Data Quirks</h1> <p>As mentioned first by Raddar, some feature sets seem to be mean aggregates of 6 other sets. Here is an example for the famous <code>f190486d6</code> feature regressed on linear nonzero mean of features <code>"87ffda550", "c928b4b74", "9a9fc1aba", "9d5c7cb94", 'f3cf9341c', '2c7e41e16'</code></p> <pre><code> Residuals: Min 1Q Median 3Q Max -0.66451 -0.00381 0.00000 0.00000 0.42836 Coefficients: Estimate Std. Error t value Pr(&gt;|t|) predicate 1.0003063 0.0001013 9873 &lt;2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.05806 on 4458 degrees of freedom Multiple R-squared: 1, Adjusted R-squared: 1 F-statistic: 9.748e+07 on 1 and 4458 DF, p-value: &lt; 2.2e-16 </code></pre> <p>More examples of the '6 property' for feature sets that correspond to files uploaded:</p> <pre><code> 0 1 23 28 36 46 64 5 42 47 54 57 67 76 10 3 22 33 35 63 81 17 34 38 40 45 61 77 21 8 19 20 32 37 84 30 31 56 83 87 90 97 85 72 75 86 88 105 106 </code></pre> <p>What was weird about the '6 property` is that it hold only for grouped rows in train and test set. Now after the private leaderboard has been revealed, it is obvious that only around 9k of test samples constitued the union of Private and Public test sets. We believe that rows that did not follow this property were not counted at all.</p> <p>Another thing we've noticed is that the target has been censored to include only values between <code>30000</code> and <code>40000000</code>, while features that formed the time series were not censored.</p> <p>And probably many other small things that I can't recall right now :-)</p> <p>Lukasz</p>
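<p>As a hedged illustration of the matching mechanism used both for leak labels and for chaining rows into groups (following the public kernels referenced above), here is a simplified pandas sketch. <code>train</code> and <code>ordered_cols</code> (one ordered 40-column set) are placeholders, and the exact handling of duplicates and multiple feature sets in the real solution was more involved.</p>
<pre><code>import pandas as pd

def match_rows(df, cols, lag=2):
    """Match rows whose ordered 40-column windows overlap with an offset of `lag`."""
    left = df[cols[:-lag]].apply(tuple, axis=1).rename("key").to_frame()
    right = df[cols[lag:]].apply(tuple, axis=1).rename("key").to_frame()
    if lag &gt;= 2:
        right["leak"] = df[cols[lag - 2]]   # candidate target for the matched left row
    # drop ambiguous keys so only unique matches survive
    left = left[~left.duplicated("key", keep=False)].reset_index()
    right = right[~right.duplicated("key", keep=False)].reset_index()
    return left.merge(right, on="key", suffixes=("_left", "_right"))

# lag=2 gives leak labels; lag=1 links each row to its direct successor,
# which is how tables like id1/id2/lag above and the row groups can be rebuilt
pairs = match_rows(train, ordered_cols, lag=1)
</code></pre>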
Santander Value Prediction Challenge
10th place write up (simple)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Value Prediction Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to everyone who tried really hard to find the leaks &amp; build ML models! I myself also had a hard time trying to push my place up during the last few weeks, but the outcome showed that it was definitely worth it. Here I want to briefly describe what I have done, mainly because I saw a lot of people asking how simple the solution can be. Here's what I did for the final submission:</p> <h2>Find The Leaks</h2> <p>I tried several methods to find the leaks. By using the rows found in the first few kernels I was able to find roughly 80+ sets. After some futile work, I did what @Paradox said in the forum to find a total of 100 good (40) column sets. I also used the rest of the column sets but they don't seem to help too much. This gives 7841 leaky rows in the test set.</p> <pre><code>from tqdm import tqdm

def get_order(data, feats, extra_feats, offset = 2):
    # match rows whose ordered features overlap with a shift of `offset`
    f1 = feats[:(offset * -1)]
    f2 = feats[offset:]
    for ef in extra_feats:
        f1 += ef[:(offset * -1)]
        f2 += ef[offset:]
    d1 = data[f1].apply(tuple, axis=1).to_frame().rename(columns={0: 'key'})
    d2 = data[f2].apply(tuple, axis=1).to_frame().rename(columns={0: 'key'})
    d2['pred'] = data[feats[offset-2]]
    d3 = d2[~d2.duplicated(['key'], keep=False)]
    d3['i3'] = d3.index
    d4 = d1[~d1.duplicated(['key'], keep=False)]
    d4['i4'] = d4.index
    d5 = d4.merge(d3, how='inner', on='key')
    return d5

def get_order_T(data, feats, offset = 2):
    # same matching, applied to the transposed data to order columns instead of rows
    f1 = []
    f2 = []
    for ef in feats:
        f1 += ef[:(offset * -1)]
        f2 += ef[offset:]
    d1 = data[f1].apply(tuple, axis=1).to_frame().rename(columns={0: 'key'})
    d2 = data[f2].apply(tuple, axis=1).to_frame().rename(columns={0: 'key'})
    d3 = d2[~d2.duplicated(['key'], keep=False)]
    d3['i3'] = d3.index
    d4 = d1[~d1.duplicated(['key'], keep=False)]
    d4['i4'] = d4.index
    d5 = d4.merge(d3, how='inner', on='key')
    return d5

def get_sets(cor):
    # chain the pairwise matches into ordered sequences
    c1 = cor.iloc[:, 0]
    c2 = cor.iloc[:, 1]
    d = dict((x1, x2) for x1, x2 in zip(c1, c2))
    sets = []
    for k in d.keys():
        set_k = []
        k_left = k
        while d[k_left] in d.keys():
            set_k.append(k_left)
            k_left = d[k_left]
        set_k.append(k_left)
        set_k.append(d[k_left])
        EXIST = False
        for i in range(len(sets)):
            for item in set_k:
                if item in sets[i]:
                    EXIST = True
                    break
            if EXIST:
                break
        if EXIST:
            if len(set_k) &gt; len(sets[i]):
                sets[i] = set_k
            continue
        sets.append(set_k)
    return sets

def merge_two_lists(list1, list2):
    ''' useful if want to merge two lists '''
    if list1[0] in list2 and list1[-1] in list2:
        return list2
    elif list2[0] in list1 and list2[-1] in list1:
        return list1
    elif list1[0] in list2 and list1[-1] not in list2:
        start = 0
        for i in range(len(list2)):
            if list2[i] == list1[0]:
                break
            else:
                start += 1
        return list2 + list1[len(list2) - start:]
    elif list2[0] in list1 and list2[-1] not in list1:
        start = 0
        for i in range(len(list1)):
            if list1[i] == list2[0]:
                break
            else:
                start += 1
        return list1 + list2[len(list1) - start:]

def merge_two_sets(sets1, sets2):
    ''' useful if want to merge two sets (e.g., from train and test) '''
    heads = []
    tails = []
    joints = []
    for s in sets1:
        heads.append(s[0])
        tails.append(s[-1])
    for h in tqdm(range(len(heads))):
        for t in range(len(tails)):
            head = heads[h]
            tail = tails[t]
            for joint in sets2:
                for val1, val2 in zip(joint[:-1], joint[1:]):
                    if tail == val1 and head == val2:
                        joints.append((t, h))
    return joints

d5 = get_order(train, cols, cols_extra, 1)
cor = d5[['i4','i3']]
sets = get_sets(cor)        # the sets of rows

d5_T = get_order_T(train.T, sets, 1)
cor_T = d5_T[['i4','i3']]
sets_T = get_sets(cor_T)    # the sets of columns
</code></pre> <h2>The ML Model</h2> <p>I simply used the 1.37 LB kernel by @Emmanuel Perry. Of course, I tried a lot of other things but I decided not to use them.</p> <p>So, here we are, a simple solution. It is actually a little weird that this can end up in the top 10. If anyone is interested I can post my code used to find the leaky rows and columns :)</p> <p>Hi Hyun Ook Ryu, I have added the code I used, hope it's useful for you</p>
Santander Value Prediction Challenge
127th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Value Prediction Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I don't have to try to keep it simple as the solution itself is very simple one.</p> <p>Found 7820 test leaks using 34 sets of features. For ML, we stacked a tuned version of <a href="https://www.kaggle.com/indranilbhattacharya/row-features-xgb-yet-to-be-tuned">this kernel</a> (which gave 1.37) along with 2 other XGB models (one with selected columns and another one is with principal components) . Focused on time series feature engineering on the sorted column sets. I shared few of such features in <a href="https://www.kaggle.com/indranilbhattacharya/are-we-watching-closely">this kernel</a>. </p> <p>One major takeaway: Go through every discussion topic and kernel.</p> <p>P.S: One unsung hero for me is @Dmitry Frumkin and his <a href="https://www.kaggle.com/dfrumkin/a-simple-way-to-use-giba-s-features-v2">script</a>.</p>
Santander Value Prediction Challenge
14 place write-up
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Value Prediction Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I used adversarial validation from the beginning. Before the leak I had strong correlation between CV and LB and chose the same strategy to the end.</p> <p>Did you try adversarial validation and StratifiedKFold on it?</p> <p>I opened one kernel <a href="https://www.kaggle.com/sagol79/ensemble-pipe-adv2">https://www.kaggle.com/sagol79/ensemble-pipe-adv2</a> as an example</p> <p>A helpful guide: <a href="http://blog.kaggle.com/2016/12/27/a-kagglers-guide-to-model-stacking-in-practice/">http://blog.kaggle.com/2016/12/27/a-kagglers-guide-to-model-stacking-in-practice/</a></p> <p>"gsn models" are created by @gsn and "sgl models" are created by me :)</p> <p>After Giba published the leak, we observed that these rows also form a sequence in other columns, prompting us to search for other column groups of 40. This allowed us to identify more accurate leaks. Through an iterative and partially manual process, we discovered 7,866 leaks in the test set and 3,882 in the train set, achieving a perfect accuracy of 1.0.</p> <ol> <li>The first and most important thing that I want to mention - the construction of a good CV was the key moment. The CV should be strictly correlated with LB.</li> <li>Adversarial validation was everywhere.</li> <li>After detecting all leaks, we created a new train/test split and rebuilt all our and public models.</li> <li>And finally - more blends for the god of blends ;)<br> <img src="https://storage.googleapis.com/kaggle-forum-message-attachments/373437/10136/santander_2018.jpg" alt="enter image description here"><br> Special thanks to everyone who shared their insights about leaks and models.</li> </ol>
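<p>Since adversarial validation is central to this write-up, here is a minimal sketch of the idea: train a classifier to separate train from test and use its out-of-fold probabilities to build a test-like validation split (or to stratify folds). The model choice, fold count and threshold are illustrative assumptions, not this team's exact setup.</p>
<pre><code>import numpy as np
import lightgbm as lgb
from sklearn.model_selection import cross_val_predict

# X_train / X_test: feature matrices with identical columns
X_adv = np.vstack([X_train, X_test])
y_adv = np.r_[np.zeros(len(X_train)), np.ones(len(X_test))]   # 0 = train, 1 = test

clf = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05)
# out-of-fold probability that a row "looks like" test
p_is_test = cross_val_predict(clf, X_adv, y_adv, cv=5, method="predict_proba")[:, 1]
p_train = p_is_test[: len(X_train)]

# e.g. validate on the most test-like train rows, or run StratifiedKFold on binned p_train
val_idx = np.argsort(p_train)[-int(0.2 * len(p_train)):]
</code></pre>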
Santander Value Prediction Challenge
149th solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Value Prediction Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>It is awkward, but I didn't do the parameter search myself. Rather than doing that, searching for the leak seemed to raise the score more...</p> <p>I'll do it in the next competition :)</p> <p>I made my Santander repository public (<a href="https://github.com/hukuhuku/Santander">https://github.com/hukuhuku/Santander</a>)</p> <p>I tried the feature management method by amaotone (<a href="https://amalog.hateblo.jp/entry/kaggle-feature-management">https://amalog.hateblo.jp/entry/kaggle-feature-management</a>)</p> <p>but these are all in Japanese; I will answer if you have any questions.</p> <p>The subsets are mainly made from (<a href="https://www.kaggle.com/nulldata/jiazhen-to-armamut-via-gurchetan1000-0-56">https://www.kaggle.com/nulldata/jiazhen-to-armamut-via-gurchetan1000-0-56</a>)</p> <p>I knew from early in this competition that statistics over all features were useful features. Therefore, we thought that we could represent more detailed information by making some subsets and creating statistics features on them.</p> <p>I did other things as well, but the subset statistics gave 0.51 =&gt; 0.49 (LB), 0.57 =&gt; 0.54</p> <p>amaotone's feature management method is very useful! good luck !</p>
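<p>A minimal sketch of the "statistics over column subsets" features mentioned above; <code>subsets</code> is assumed to map a subset name to its list of columns (for example the ordered 40-column groups), and zeros are treated as missing, which is one common choice rather than necessarily the author's.</p>
<pre><code>import numpy as np
import pandas as pd

def subset_stats(df, subsets):
    """Row-wise statistics computed over each subset of columns (zeros treated as missing)."""
    feats = pd.DataFrame(index=df.index)
    for name, cols in subsets.items():
        block = df[cols].replace(0, np.nan)
        feats[f"{name}_mean"] = block.mean(axis=1)
        feats[f"{name}_std"] = block.std(axis=1)
        feats[f"{name}_max"] = block.max(axis=1)
        feats[f"{name}_nonzero"] = block.notna().sum(axis=1)
    return feats

train_stats = subset_stats(train, subsets)
</code></pre>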
Santander Value Prediction Challenge
16 place mini write-up (before LB swaps)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Value Prediction Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>5 lightgbms with slightly different features, 1 ET, 3 xgboost models, 3 MLP NNs, 2 LSTMs, 1 GRU</p> <p>The lags were only based on the specific 40 columns I specified (for which the order is correct). I found lags of other 40-column sequences were not helping.</p> <p>2nd level actually moved me up 4 positions!</p> <p>@AmirH</p> <p>No problem. </p> <p>1) All of these. Some models with the same features but different parameters. A good way to generate diversity like this is to start with a constant parameter - let's say I will make a lightgbm model that has <code>max_depth=6</code> and, while you keep this constant, you keep changing all other parameters. In other words you try to make the best model possible given that constraint (of <code>max_depth=6</code>). You can do this with multiple parameters. You can also train the same models with different features. I don't count models trained on different seeds as different models. All my models are actually bagged - so I run 10-20 of them with different seeds and average the results. E.g. when I say I made 15 models - I actually made around 150-300 (because of bagging)</p> <p>2) yes both can be used </p> <p>3) Please see this video. It will answer both 3 and 4: <a href="https://www.youtube.com/watch?v=61p9AA7Xs38&amp;t=1264s">https://www.youtube.com/watch?v=61p9AA7Xs38&amp;t=1264s</a></p> <p>Congrats to the winners and especially to Gilberto for winning as well as for reclaiming the top 1 spot (if my calculations are correct). </p> <p>Also kudos for sharing the leak and levelling the field. At the end of the day I think ml (probably) played quite some part in getting a good score past the leak and that is still useful to the organizer.</p> <p>I also hope they fix the issue with people selecting different submissions after the end of the competition and moving up the leaderboard, including me (<a href="https://www.kaggle.com/c/santander-value-prediction-challenge/discussion/63752">https://www.kaggle.com/c/santander-value-prediction-challenge/discussion/63752</a>)</p> <p>My solution consisted (like most people's, I presume) of 2 parts:</p> <h1>Finding the leaky rows</h1> <p>I have attached all the patterns I managed to find (4390 columns) that helped me reach the 3887 leaks in the training data. I found them based on rows that I knew were sequential (which I knew from the columns some people shared in kernels).</p> <p>It took me around one day to find them via exhaustive brute-force search and some heuristics to make the search faster (like grouping together columns that tend to have similar unique values). </p> <p>I used the Jiazhen kernel to find the leaks and mark them.</p> <h1>Modeling</h1> <p>This was used for when a leak was not found.</p> <p>I built a few lightgbm and xgboost models using aggregated values over the whole rows (like means, stds, kurtosis, count of zeros, skewness, mode etc.).</p> <p>I created a few time series features on the following 40 ordered columns, namely:</p> <pre><code> ['f190486d6', '58e2e02e6', 'eeb9cd3aa', '9fd594eec', '6eef030c1', '15ace8c9f', 'fb0f5dbfe', '58e056e12', '20aa07010', '024c577b9', 'd6bb78916', 'b43a7cfd5', '58232a6fb', '1702b5bf0', '324921c7b', '62e59a501', '2ec5b290f', '241f0f867', 'fb49e4212', '66ace2992', 'f74e8f13d', '5c6487af1', '963a49cdc', '26fc93eb7', '1931ccfdd', '703885424', '70feb1494', '491b9ee45', '23310aa6f', 'e176a204a', '6619d81fc', '1db387535', 'fc99f9426', '91f701ba2', '0572565c2', '190db8488', 'adb64ff71', 'c47340d97', 'c5a231d81', '0ff32eb98'], </code></pre> <p>These included exponential smoothing and weighted moving averages on different lag selections.</p> <p>I had some string features on the whole row, like count of unique numbers in values, count of values with more decimals, count of integers.</p> <p>I made a few deep learning models (with LSTM and GRU) using these 40 features as inputs. </p> <p>Then some PCA features along with kmeans and other clustering features. </p> <p><strong>I should mention that I validated on the test data too</strong>, since I knew the leak had most probably assigned the correct labels for these cases. I also assessed which public kernels were better and chose final weights based on the leaky test values too. </p> <h1>Stacking</h1> <p>I stacked 15 different models. I did a 2nd level stacking with an NN, a lightgbm model and ExtraTrees. I averaged a few of the public kernels too. </p> <p>ExtraTreesRegressor</p>
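<p>A minimal sketch of the out-of-fold stacking described above, with ExtraTrees as one possible 2nd-level learner (as mentioned in the post). <code>base_models</code>, <code>X</code>, <code>y_log</code> and <code>X_test</code> are placeholders (numpy arrays assumed), and the real setup also validated on leaked test rows, which is not shown here.</p>
<pre><code>import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import KFold

def get_oof(model, X, y, X_test, n_splits=5):
    """Out-of-fold predictions for train plus averaged test predictions for one base model."""
    oof = np.zeros(len(X))
    test_pred = np.zeros(len(X_test))
    for tr_idx, va_idx in KFold(n_splits, shuffle=True, random_state=0).split(X):
        model.fit(X[tr_idx], y[tr_idx])
        oof[va_idx] = model.predict(X[va_idx])
        test_pred += model.predict(X_test) / n_splits
    return oof, test_pred

# level 1: any set of base models (lightgbm, xgboost, NNs, ...) gives columns of OOF predictions
oof_cols, test_cols = zip(*[get_oof(m, X, y_log, X_test) for m in base_models])
L1_train, L1_test = np.column_stack(oof_cols), np.column_stack(test_cols)

# level 2: fit a meta-model on the OOF matrix (here ExtraTrees, one of the models mentioned above)
meta = ExtraTreesRegressor(n_estimators=500, n_jobs=-1)
meta.fit(L1_train, y_log)
final_pred = meta.predict(L1_test)
</code></pre>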
Santander Value Prediction Challenge
2nd place solution overview
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Value Prediction Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thanks to Giba for sharing the <a href="https://www.kaggle.com/titericz/the-property-by-giba">first set of columns</a>, Moshin hasan for the <a href="https://www.kaggle.com/tezdhar/breaking-lb-fresh-start">kernel</a> with the way to use them, and all the discussion participants and kernel creators for insights. Apart from a healthy dose of luck, my solution consisted of the following:</p> <p><strong>Row and column order.</strong> The procedure I used was similar to the one that have emerged in the forums. One of the differences was that at each iterations I organised the resulting rows/columns into sequences and only used the sequences above a certain length (e.g. 5) for the next iteration to try to avoid using false-positives for matching. Creating sequences also helped to assign IDs to the groups of rows that were likely to belong to the same client.</p> <p><strong>Validation.</strong> I used a 5-fold CV but adjusted such that all the rows belonging to a single client were never split between the train and validation sets. Since the sizes of these groups varied a lot, this introduced some additional variance, so all the final predictions were averages over several CV seeds.</p> <p><strong>Data augmentation.</strong> I added the test leaks to the training set - this improved the local CV however given only two decimals it's unclear whether it had much effect on the private leaderboard.</p> <p><strong>Model.</strong> The best performing model was relatively simple: feature set consisted of various non-zero aggregations (mean, max, exponential smoothing, count, etc.) applied to all the 96 forty-observation-long sequences and their subsets of the last 7,14,21,28 "days"; plus leading non-zero values in each of the sequences and aggregations over all non-zero values in a row. This resulted in ~3500 features that were reduced to ~1000 via selection. </p> <p>The final model was a blend of lightGBM, XGBoost and a random forest, all using those ~1000 features.</p> <p>I was hesitant to make strong conclusions on my two data points, but if you had similar experience, then yes, augmenting likely helped more than I thought.</p> <p>Thank you! I found 7841 leaks in test vs 7897 in Lucasz's write-up (and the same number of train leaks). Later I found 50 more, but all of them implied target value of 0. Similar to the winning team I've discovered that the values were truncated at 30k, so I've decided against using them.</p> <p>Thanks. Now with more digits, looking at a few comparable submissions it seems like augmentation on average improved the private subs by 0.006 and reduced the public one by 0.003, so even if it contributed at all, it's likely very small. The KNN idea is very interesting - I tried to use PCA in similar fashion, but didn't get much out of it.</p> <p>Thank you! I ran an LGBM with all the features, calculated feature importance and started gradually dropping the least important features until the CV started to consistently get worse. Unfortunately I didn't have time to try more automated approaches e.g. the Boruta algorithm</p> <p>Thank you!</p> <p>Thank you. 
I meant taking the first/leftmost non-zero value for each of the ordered sequences</p>
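<p>A minimal sketch of the grouped cross-validation described above, where all rows believed to belong to the same client stay on the same side of the split; <code>client_id</code> (a numpy array of reconstructed group ids, singletons getting their own id) and the fold count are assumptions.</p>
<pre><code>import numpy as np
from sklearn.model_selection import GroupKFold

gkf = GroupKFold(n_splits=5)
for fold, (tr_idx, va_idx) in enumerate(gkf.split(X, y, groups=client_id)):
    # no client appears in both train and validation
    assert len(np.intersect1d(client_id[tr_idx], client_id[va_idx])) == 0
    # ... train on tr_idx, validate on va_idx
</code></pre>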
Santander Value Prediction Challenge
360th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Value Prediction Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I've used <a href="https://www.kaggle.com/nulldata/jiazhen-to-armamut-via-gurchetan1000-0-56">this kernel</a> and replaced the values without leak with predictions of <a href="https://www.kaggle.com/the1owl/love-is-the-answer">this kernel</a>. The fact that one can secure a medal by using two highest-scoring public kernels shows how bad it is for competitions to publish kernels 3-4 days before their end.</p>
Santander Value Prediction Challenge
3rd place
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Value Prediction Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>Column and row groups.</strong> After the leak was published (thanks to <a href="https://www.kaggle.com/titericz/the-property-by-giba">Giba</a> and <a href="https://www.kaggle.com/tezdhar/breaking-lb-fresh-start">Mohsin Hasan</a>), like many participants I started to search for other groups of columns and groups of rows. Of course, I wanted to find them all :) I selected a column or a row and then searched for the next or previous one (mainly by script, but I also looked through the results and corrected some mistakes manually). For the row search I assumed that rows with target == 0 were eliminated from train and test, so there can be gaps. For the column search I assumed that a group of columns consists of exactly 40 columns. The maximum length of a row group was 116, but there were shorter row groups and groups with big gaps. Of course, I tried to find as many groups of columns as possible on the given groups of rows, and if I couldn't find a new group of columns I switched to the row group search and then returned to the column search, and so on. At the end I also used groups of rows from test.</p> <p>My best submission includes gbm models based on sets of 97 and 110 groups of columns. I tried to find all groups (125), but, either because I made mistakes or due to randomness of the data, blends which contain gbm models based on 125 column groups were worse than those on a smaller number of groups. The last ten groups (116-125) were shorter than 40 and had gaps.</p> <p>I selected the best submission by public score and the fourth. On private, the fourth became the best and the best became the second.</p> <p><strong>Augmentation.</strong> I used leak values found in test to augment the train set for training gbm models. Based on 97 groups of columns I found 3 894 leaks in train (99.85% of them were correct) and 7 844 leaks in test at lag # 35. Based on 110 groups of columns I found 3 898 leaks in train (99.8% of them were correct) and 7 954 leaks in test at lag # 37. I wasn't sure about leaks at lags 36 and 37 (on train there were no leaks at lag 36, and 2 of 2 leaks at lag 37 were wrong).</p> <p><strong>Features</strong> for gbm:</p> <ul> <li><p>Raw columns, columns from the main group (which starts from column f190486d6)</p></li> <li><p>Log of mean, mean of log, log of median, log of max, log of min, number of zeros, sum, std, kurtosis, skew, number of uniques calculated on all columns and on each found group separately</p></li> <li><p>A new group of columns, which was the sum of the three most important groups by feature importance (which start from columns f190486d6, df838756c and b6daeae32), and the features from the previous point calculated on this new group</p></li> <li><p>New groups of columns, which were sums of some pairs of groups among the most important groups by feature importance (it didn't give a good cv increase, so no details)</p></li> <li><p>I applied feature selection based on gbm feature importance and parameter tuning based on 4-fold validation</p></li> <li><p>The final submission was a weighted sum of 23 models. Most of the models were made with lightgbm and some with xgboost</p></li> <li><p>Weights were adjusted iteratively on the oof train+leak predictions (see the sketch below). As I didn’t use cv here, to avoid weight overfitting I at first accepted only those weight changes which led to an rmsle decrease of more than X, but in my best submission I accepted every weight change (because among similar submissions it gave a better public score).</p></li> </ul>
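<p>A minimal sketch of this kind of iterative weight adjustment on the oof (train + leak) predictions: a greedy coordinate-wise search that only accepts a change when the rmsle drops by more than a tolerance. The step size, round count and tolerance are illustrative assumptions; predictions and targets are assumed to already be in log1p space.</p>
<pre><code>import numpy as np

def rmsle(y_true_log, y_pred_log):
    # both arguments are already in log1p space
    return np.sqrt(np.mean((y_true_log - y_pred_log) ** 2))

def tune_weights(oof_preds, y_log, n_rounds=50, step=0.05, tol=0.0):
    """Greedy coordinate-wise weight search; accept a change only if rmsle drops by more than `tol`.

    oof_preds: array of shape (n_samples, n_models).
    """
    n_models = oof_preds.shape[1]
    w = np.ones(n_models) / n_models
    best = rmsle(y_log, oof_preds @ w)
    for _ in range(n_rounds):
        for i in range(n_models):
            for delta in (step, -step):
                w_try = w.copy()
                w_try[i] = max(0.0, w_try[i] + delta)
                w_try /= w_try.sum()
                score = rmsle(y_log, oof_preds @ w_try)
                if best - score &gt; tol:
                    w, best = w_try, score
    return w, best
</code></pre>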
Santander Value Prediction Challenge
5 place mini write-up
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Value Prediction Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I don’t know yet.</p> <p>Here is my code for this competition: <a href="https://github.com/vlarine/kaggle/tree/master/santander-value-prediction-challenge">https://github.com/vlarine/kaggle/tree/master/santander-value-prediction-challenge</a></p> <p>I cannot reproduce my final submission from scratch, because it is a blend of previous submissions. This code produces my best Private LB submission (0.52036 Private LB, 0.49453 Public LB). But I cannot reproduce this submission from scratch either - I cannot correctly rebuild some features. There is a folder data_add with the data I cannot reproduce. </p> <p>There were 87 groups of 40 features. A lot of feature engineering (~900 features) and some feature selection (dropping to ~500 features). The blend consisted of 7 submissions, but the most heavily weighted one (weight 0.85) was from Kazanova's StackNet.</p> <p>There were 2 layers in this stacked model: 8 models (different regressors) at the first layer and a simple linear regression at the second.</p> <p>I found the groups myself, and the leak usage was taken from <a href="https://www.kaggle.com/c/santander-value-prediction-challenge/discussion/61472#363394">https://www.kaggle.com/c/santander-value-prediction-challenge/discussion/61472#363394</a> </p> <p>Here is my StackNet config. I have 5 folds and 3 bins. I also used the found leak targets in training.</p>
Santander Value Prediction Challenge
6th place solution with kernel
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Value Prediction Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Here is my full single model pipeline starting with just the original 40 features.</p> <p><a href="https://www.kaggle.com/joeytaj/leak-fe-ml-from-scratch-baseline/notebook?scriptVersionId=5253758">https://www.kaggle.com/joeytaj/leak-fe-ml-from-scratch-baseline/notebook?scriptVersionId=5253758</a></p> <p>I am very disappointed that the late submission feature is off. Just one more thing that is off about this competition. I really like to use late submissions for wrapping up my learning before I move on and forget all about this problem. </p> <p>I had single models (albeit with more feature selection than here) that scored 0.52 private and a good 0.48 public. Since this kernel uses better private leak parameters than I did, it should score pretty close to this, if I didn't introduce any bugs making it.</p> <p>So most people above me have covered the same solutions I did. Here are the main 2 things I think were important for my score.</p> <ol> <li>Removing all the ugly/fake rows before leak predictions.</li> <li>Using the leaks for data augmentation.</li> </ol> <p>Some thoughts on the shake up. Obviously for some there was some major, maybe disappointing movement. In general I would say there was no unexpected shakeup, which I am glad for. </p> <p>In the end the leak was effective in the private LB. And just as important, the fake data did seem to be actually fake, not just private test. I was very worried about this until I gave up and went all in on it being fake.</p> <p>Congratulations and thanks to all. I would say I am hoping to do better next time. But I think it takes a bit of luck getting the right problem/solution to get this far. So we will see.</p>
Santander Value Prediction Challenge
85th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Value Prediction Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The main strategy is the combination of the following:</p> <ul> <li>The single lightGBM model I've created before discovering "leak"</li> <li>The "leak" finding model shared by <a href="https://www.kaggle.com/nulldata/jiazhen-to-armamut-via-gurchetan1000-0-56">@amrrs</a></li> </ul> <p>With the single model, I got 1.37 on public LB (and 1.30 on local cv).</p> <p>In addition, I utilize "leak" in the following way:</p> <pre><code>df1[~test_leak['compiled_leak'].isnull()] = df0[~test_leak['compiled_leak'].isnull()] </code></pre> <p>where </p> <ul> <li>df1 is the submission csv file of my single model</li> <li>df0 is the submission csv file of @amrrs's</li> <li>test_leak is "test_leak.csv" of @amrrs's</li> </ul> <p>Of course, this is a simple description and I've tried a lot more.</p> <p>I'd like to thank everyone involved in Santander Value Prediction Challenge!</p>
Santander Value Prediction Challenge
Another solution (40th place) and few questions left
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Value Prediction Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Another intriguing, quite astonishing (and probably meaningless?) competition with leaks, dramas and dropped public kernels. I believe we can sort out almost anything with Pandas after all :D Thanks everyone!</p> <p>The solution is quite simple:</p> <p>1) Take the <strong>publicly available leaks</strong> (0.56 right now). Take a look at the ordered column groups.</p> <p>2) Fill the non-leaked test rows with the <strong>best no-leaks 1.37 LB kernel</strong> (<a href="https://www.kaggle.com/mannyelk/an-honest-approach">https://www.kaggle.com/mannyelk/an-honest-approach</a> what a wonderful R kernel). You should receive around 0.50.</p> <p>3) Find <strong>more ordered column</strong> groups in train. Clustering with t-sne might help (another nice one <a href="https://www.kaggle.com/sibmike/finished-grouping-sorting-with-tsne/notebook">https://www.kaggle.com/sibmike/finished-grouping-sorting-with-tsne/notebook</a>). The only problem is how to order new groups: get ordered row groups from the ordered column groups (1) and sort them to make df[cols].loc[rows] a diagonal matrix (see the sketch below).</p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/373411/10138/before.png" alt="Before:"></p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/373411/10139/after.png" alt="After:"></p> <p><strong>Be aware of zeros.</strong> Find the index where zero columns != 0 and continue ordering. With around 72 groups I got 0.49 LB with 3878 train leaked (0.5771) and 7858 test leaked.</p> <p>4) Then I built an <strong>ensemble model</strong> based on OOF public kernel predictions I tuned a little bit. 13 models (8 of them again from that wonderful R kernel) on the first level, 3 on the second. I got 0.48 LB.</p> <p>5) <strong>Adversarial validation</strong>? I didn't know what to expect on the Private LB, so I trained a simple adversarial xgb model to predict how close train samples are to the test distribution (dropped around 30% of train with a &gt;0.2 threshold). I didn't have time to re-train all 13 models, so I added adversarial CV to the second level of the ensemble and added adversarial weighting to a few models on the first level. That gave me only 0.49 public LB but 0.53 private, probably my best model. </p> <p><strong>QUESTIONS</strong>:</p> <p>1) The exact test and train leaked counts?</p> <p>2) Adversarial validation. Did it help you keep your LB position? If not, then what did?</p>
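<p>One simple way to realize the "make df[cols].loc[rows] diagonal" sorting from step 3 is to order the candidate rows by where their non-zero values start along the ordered columns. This is only a hedged sketch of that idea, not the author's exact procedure.</p>
<pre><code>import numpy as np

def order_rows_diagonally(df, rows, ordered_cols):
    """Sort a candidate row group so that the block df.loc[rows, ordered_cols] looks diagonal:
    rows whose non-zero values start earlier in the ordered columns come first."""
    block = df.loc[rows, ordered_cols].values
    # position of the first non-zero entry per row (or n_cols if the row is all zeros)
    first_nonzero = np.where(block != 0, np.arange(block.shape[1]), block.shape[1]).min(axis=1)
    return [rows[i] for i in np.argsort(first_nonzero, kind="stable")]
</code></pre>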
Santander Value Prediction Challenge
Solution sharing
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Value Prediction Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to the winners, and thanks to everyone, especially the people who shared the leaks and insights about the data. Here is a brief description of my 54th place solution: </p> <p>I found 93 groups of 40 columns and (3871, 0.9945, 7960) leaks. Compared to other participants' solutions, my accuracy of the leaks in test seems to be lower.</p> <p>After finding the leaks, I trained LGBM using the train data and the leaked, non-fake subset of test. The features used are some aggregations such as mean and quantile. Using the order of the columns in a group, I also made exponential moving average features, which are popular for time series. I tried to use the difference of the latest non-zero value and the next non-zero value as a feature. However, it did not improve my cv score. Scores of the model without leaks are 1.316954 for cv on the augmented train data, 1.07 for Public LB, and 1.06 for Private LB.</p> <p>My code can be found in <a href="https://gist.github.com/zaburo-ch/a410538ab271e68331565ea06731ee61">https://gist.github.com/zaburo-ch/a410538ab271e68331565ea06731ee61</a>. <br> <code>base</code> is my utility module (<a href="https://gist.github.com/zaburo-ch/02d99cd53f5e9d4f2fdebc104df014bf">https://gist.github.com/zaburo-ch/02d99cd53f5e9d4f2fdebc104df014bf</a>).</p> <p>Thanks, and see you next competition.</p> <p>Thanks !</p>
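<p>A minimal sketch of the exponential-moving-average feature over the ordered columns of one group, treating the 40 ordered columns of each row as a short time series with zeros as missing values. The span and the zero-as-missing choice are assumptions made for illustration.</p>
<pre><code>import numpy as np
import pandas as pd

def ewma_features(df, ordered_cols, span=5):
    """Last value of an exponential moving average taken along the ordered columns of one group."""
    block = df[ordered_cols].replace(0, np.nan)
    # transpose so the column order becomes the "time" axis for ewm
    ewma = block.T.ewm(span=span, ignore_na=True).mean().T
    return ewma.iloc[:, -1].rename(f"ewma_span{span}")
</code></pre>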
Santander Value Prediction Challenge
Vote of Thanks and Lessons Learnt
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Value Prediction Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="/peterhurford">@peterhurford</a> If you will allow me in your team :)</p> <p>Looks like they might take some more time to correct it for everyone and validate the results.....</p> <p>Wish that was a Gold :)</p> <p>Thanks Shaz</p> <p>First of all, a very BIG thanks to Kaggle for this bumpy competition. This is my fourth competition and I should say I had a lot of learnings. I realize that reading the forum discussions multiple times helps a lot. The last-minute reading of the complete discussions helped me find a bug and push up my position by a few hundred places.</p> <p>I'm very thankful to <a href="/joeytaj">@joeytaj</a> (Your discussions are my building blocks), <a href="/titericz">@titericz</a>, <a href="/tezdhar">@tezdhar</a> <a href="/fatihozturk">@fatihozturk</a>, <a href="/raddar">@raddar</a>, <a href="/tpthegreat">@tpthegreat</a>, <a href="/authman">@authman</a></p> <p><strong>Lessons Learnt</strong></p> <ul> <li>Read discussions multiple times. I mean read them dozens of times.</li> <li>Experiment with multiple things. Never leave out any idea proposed on the forums.</li> <li>Experiment a lot in the initial days to make use of the submissions.</li> <li>Go through all of the kernels; it could bring in new ideas.</li> <li>Maintain proper version control.</li> <li>Keep a cloud backup of the code. This will help you work globally when you have issues with a local machine, as happened in my case.</li> </ul> <p>Thanks a lot SRK...</p> <p>Glad to see you Peter.. This is one of the best accolades I received in the recent past... :)</p> <p>Thank You....</p> <p>Thanks... But just missed it by a silver...</p>
Sartorius - Cell Instance Segmentation
11th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Sartorius - Cell Instance Segmentation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all congratz to the winners and thanks to Sartorius and Kaggle for hosting the competition !</p> <p>Although this 11th place is a great finish, we're a bit disappointed since we spent a month at the top and the final weeks in the top 5. We knew we were not gonna win but this shake was quite a surprise for us, and we still haven't really understood why it happened. </p> <p>Our models were just better on Public than Private; there might be a small domain shift (more corts ? more small cells ? more large cells ? we will never know…) that emphasized a weakness of our pipeline.</p> <p>Anyways, here are a few interesting points of the solution that I found worth mentioning; the full code is on GitHub: <a href="https://github.com/TheoViel/kaggle_sartorius" target="_blank">https://github.com/TheoViel/kaggle_sartorius</a> <br> Also, our inference code is here: <a href="https://www.kaggle.com/theoviel/sartorius-inference-final" target="_blank">https://www.kaggle.com/theoviel/sartorius-inference-final</a></p> <h3>Validation Scheme</h3> <p>Two weeks before the end of the competition, we switched to a validation set-up we thought would be reliable: splitting on plate &amp; well as done in the Livecell paper. </p> <p><img src="https://i.ibb.co/dDjvC6j/livecell-3.png" alt=""></p> <p>This might be one of the reasons we shook down ? Our best private LB (0.349, public 0.339) was our best CV before switching to the above scheme. Still, our final score of 0.348 is our best CV with this setup (public 0.342)</p> <h3>Models</h3> <p>We only used machines with a single RTX 2080 Ti so we had to be ingenious to be able to train on high resolution images and detect small cells. We used mask-rcnn based models and relied on mmdet, but only for model definition and augmentations; the rest of the pipeline is hand-crafted, which made it more convenient for experimenting.</p> <p><img src="https://i.ibb.co/MkwRbVW/Ensembling-drawio-1.png" alt=""></p> <h5>Main points</h5> <ul> <li>Remove the stride of the first layer of the encoder to "increase the resolution" of the models without doing any resizing (see the sketch at the end of this post) !</li> <li>Random crops of size 256x256 for training</li> <li>Pretrain on Livecell</li> <li>4000 iterations of finetuning on the training data</li> <li>Backbones: resnet50, resnext101_32x4, resnext101_64x4, efficientnet_b4/b5/b6</li> <li>Models: MaskRCNN, Cascade, HTC</li> </ul> <h5>Ensembling</h5> <p>We average predictions of different models &amp; different flips at three stages; the stages are the boxes with thicker borders above.</p> <ul> <li>Proposals: For a given feature map output by the FPN, each of its pixels is assigned a score and a coordinate prediction by the convolutions. This is what we average. </li> <li>Boxes: We re-use the ensembled proposals and perform averaging of the class predictions and coordinates for each proposal. We used 4 flip TTAs.</li> <li>Masks: starting with the ensembled boxes, we average the masks - before the upsampling back to the original image size.</li> </ul> <p>This scheme doesn't really use NMS for ensembling, which can be tricky to use. Hence we stacked a bunch of models. 
We used 6 models per cell type.</p> <h5>Post processing</h5> <ul> <li>NMS on boxes using high thresholds, then NMS on masks using low thresholds</li> <li>Corrupt back the astro masks as we trained on clean ones (+0.002 LB)</li> <li>Small masks removal</li> </ul> <p>We did a lot of hyper-parameters tweaking on CV : NMS thresholds, RPN and bbox_head params, confidence thresholds, minimum cell sizes, mask thresholds.</p> <h3>Few more words</h3> <ul> <li><p>Pseudo Labelling didn't really work for us, we used them in the finetuning phase with the original training data and progressively decayed their proportion.</p></li> <li><p>The RoiAlign layer from mmdet has implementation issues. Masks resulting from TTA appear shifted which hurt performances, especially when trying to use vertical flips. We had to shift the boxes by 0.5 to counter this. </p> <p>I will probably add more stuff later, and fix the typos and all. Feel free to ask any questions !<br> <em>Thanks for reading !</em></p></li> </ul> <p>Thanks <a href="https://www.kaggle.com/duykhanh99" target="_blank">@duykhanh99</a> and congratz to your team as well !</p> <p>Thanks a lot <a href="https://www.kaggle.com/lucamassaron" target="_blank">@lucamassaron</a> ! </p> <p>Thanks for the kind words <a href="https://www.kaggle.com/mrinath" target="_blank">@mrinath</a> !<br> Models are defined in the configs and built by mmdet, FPN is defined here.<br> Regarding customisations you can refer to the mmdet doc : <a href="https://mmdetection.readthedocs.io/en/latest/" target="_blank">https://mmdetection.readthedocs.io/en/latest/</a> - and try looking at the code </p> <p>The csv is computed in the first part of the notebook, you need to comment the line :<br> <code>annotations = [] # do not recompute</code></p> <p>The following cell will create the csvs depending on the values of <code>SHSY5Y_ONLY</code> and <code>NO_SHSY5Y</code>. Note that the dataframe named <code>livecell.csv</code> is used for pretraining but you can change that in the first line of the pretrain function. </p> <p>Please use : </p> <pre><code>SHSY5Y_ONLY = False NO_SHSY5Y = False SINGLE_CLASS = False </code></pre> <p>to generate <code>livecell.csv</code></p> <p>I've updated the repository and uploaded a script that was actually necessary to run the inference. There's a script to replace in the mmdet package otherwise you'll get an error, please check the ReadMe !<br> Thanks for taking the time to run the code, this forces me to fix minor issues that should've been corrected much earlier.</p> <p>The script should run quite quickly (&lt;1h) as far as I remember.<br> It uses multiprocessing so you may want to turn that off.</p> <p>Regarding the error, you actually need to download the corrected masks here : <br> <a href="https://www.kaggle.com/hengck23/clean-astro-mask" target="_blank">https://www.kaggle.com/hengck23/clean-astro-mask</a> and put them in the <code>HCK_FIX_PATH</code> folder. I forgot to include this in the ReadMe, sorry about that</p> <p>I'm not sure if using 1 process works, if it doesn't try replacing :</p> <pre><code>metas = [] for _, meta in tqdm(p.imap(prepare_mmdet_data_, range(len(df))), total=len(df)): metas.append(meta) </code></pre> <p>With :</p> <pre><code>metas = [] for i in tqdm(range(len(df))): metas.append(prepare_mmdet_data_(i)) </code></pre> <p>My CPU is a 12-Core AMD Ryzen 9 3900XT - which is probably the reason why computations are fast on my setup. You don't need to use the GPU and I believe RAM is not an issue as well. 
<br> Having a lot of RAM is required for inference though but this can be fixed by optimizing the code a bit. I sometimes ran into oom errors even with 64 Gb.</p> <p>The reason it is taking too long might be multiprocessing, you can also turn the <code>FIX</code> parameter to False (this shouldn't affect performance too much)</p>
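<p>To illustrate the "remove the stride of the first layer of the encoder" trick from the main points, here is a minimal sketch on a plain torchvision ResNet. The actual models were built through mmdet configs, so treat this only as an illustration of the idea, not the code that was used.</p>
<pre><code>import torch
import torchvision

# Standard ResNet-50: conv1 has stride 2, so feature maps are immediately downsampled 2x.
backbone = torchvision.models.resnet50()
backbone.conv1.stride = (1, 1)   # keep full resolution in the stem

x = torch.randn(1, 3, 256, 256)
feat = backbone.conv1(x)
print(feat.shape)   # torch.Size([1, 64, 256, 256]) instead of [1, 64, 128, 128]
</code></pre>
<p>The effect is that small cells are effectively seen at twice the resolution without resizing the input images.</p>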
Sartorius - Cell Instance Segmentation
1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Sartorius - Cell Instance Segmentation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We would like to thank Kaggle and Sartorius for organizing such a great competition.</p> <h1>Overview</h1> <p>Our solution is as follows. In many respects, it is similar to the 2nd place solution.</p> <p><a href="https://postimg.cc/tskb7m27" target="_blank"><img src="https://i.postimg.cc/6QJpjkZV/img.png" alt="img.png"></a></p> <p>At the very beginning of the competition, we decided to build a solution using box-based instance segmentation, and focus more on the bbox detection part. We think the mask prediction performance is mainly limited by annotation quality so we did not pay much attention to it.<br> During the competition, we used COCO mAP as our validation metric, we believe that high mAP and proper thresholding would give a high LB score. Following is the validation score we achieved at the end of the competition</p> <p>Evaluating bbox… <br> Loading and preparing results… <br> DONE (t=0.09s) <br> creating index… <br> index created! </p> <p>Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=2000 ] = 0.396 <br> Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=2000 ] = 0.764 <br> Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=2000 ] = 0.364 <br> Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=2000 ] = 0.354 <br> Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=2000 ] = 0.305 <br> Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=2000 ] = 0.572 <br> Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.386 <br> Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=300 ] = 0.491 <br> Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=2000 ] = 0.579 <br> Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=2000 ] = 0.550 <br> Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=2000 ] = 0.552 <br> Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=2000 ] = 0.757 </p> <p>+----------+-------+----------+-------+----------+-------+ <br> | category | AP | category | AP | category | AP | <br> +----------+-------+----------+-------+----------+-------+ <br> | shsy5y | 0.334 | astro | 0.399 | cort | 0.456 | <br> +----------+-------+----------+-------+----------+-------+ </p> <p>Evaluating segm… <br> Loading and preparing results… <br> DONE (t=0.57s) <br> creating index… <br> index created! 
</p> <p>Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=2000 ] = 0.362 <br> Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=2000 ] = 0.767 <br> Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=2000 ] = 0.294 <br> Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=2000 ] = 0.308 <br> Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=2000 ] = 0.419 <br> Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=2000 ] = 0.461 <br> Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.349 <br> Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=300 ] = 0.441 <br> Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=2000 ] = 0.516 <br> Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=2000 ] = 0.514 <br> Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=2000 ] = 0.469 <br> Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=2000 ] = 0.465 </p> <p>+----------+-------+----------+-------+----------+-------+ <br> | category | AP | category | AP | category | AP | <br> +----------+-------+----------+-------+----------+-------+ <br> | shsy5y | 0.328 | astro | 0.302 | cort | 0.456 | <br> +----------+-------+----------+-------+----------+-------+ </p> <h1>Bbox part</h1> <p>We found YOLOX performed impressively well without hyperparameter tuning. We used train/val split to monitor validation scores, and then trained models using all training data.<br> Things worked:<br> Strong feature extractor (CB DBS-FPN, EffDetD7, CSPDarknet-YOLOXPAFPN)<br> Large input size (1536)<br> Livecell pretrain</p> <p>In the livecell dataset, some images have thousands of instances, and this amount could be doubled by mixup. Some operations in SimOTA would cause OOM frequently when we use large input sizes and have many ground-truth instances. <a href="https://www.kaggle.com/tascj0" target="_blank">@tascj0</a> optimized SimOTA with some CUDA extensions to save memory and speedup training.</p> <h1>Segmentation, Mask R-CNN part</h1> <p>We used two Mask R-CNN models with CB-DBS as the backbone. According to the results of <a href="https://ai.googleblog.com/2021/09/revisiting-mask-head-architectures-for.html" target="_blank">Revisiting Mask-Head Architectures for Novel Class Instance Segmentation</a>, training was done using GT's bbox and mask. In our experiment, the score was comparable to the one using the RPN head proposal as usual.<br> At the time of inference, we passed the ensembled bbox to the mask head to get predictions.</p> <p>However, in the end, this part only slightly boosted the score. The main part on the mask side is UPerNet.</p> <h1>Segmentation, UPerNet part</h1> <p>In this part, we also used the data cropped by GT's bbox and mask to train. Since the instances are small in pixel size, it’s important to do cropping and pasting accurately. Thus we used ROIAlign to crop&amp;resize input image&amp;mask and used grid_sample to paste a prediction to its bbox location before thresholding. It’s also important to resize training target(mask) using bilinear interpolation then threshold it, instead of using nearest neighbor interpolation. Basically, the point is to follow the setting of Mask R-CNN mask head.<br> We trained 4 UPerNet models with Swin or ResNet101 as the backbone, pretrain on livecell and finetune on competition data. We used ensembled bboxes for inference. 
In mask ensemble, we simply averaged the probs of each model's (UPerNets and Mask R-CNNs) prediction.</p> <h1>Reranking</h1> <p>We re-scored predicted instances by the score of bbox × average of score of mask (prob &gt;= 0.5). This improves validation COCO mAP of astro by 0.01.</p> <h1>Post process</h1> <p>Simple thresholding, overlap removal and dropping small number_of_pixels instances.</p> <h1>Code</h1> <p>We used mmdetection and mmsegmentation from open-mmlab to build our pipeline. We are very grateful that such an easy to use tool is being developed in open source.</p> <p>Here we release a minimum version of our solution to show to implement the ideas given those awesome tools</p> <p><a href="https://github.com/tascj/kaggle-sartorius-cell-instance-segmentation-solution" target="_blank">https://github.com/tascj/kaggle-sartorius-cell-instance-segmentation-solution</a></p> <h1>Acknowledge</h1> <p>takuoko is a member of Z by HP Data Science Global Ambassadors. Special Thanks to Z by HP for sponsoring me a Z8G4 Workstation with dual A6000 GPU and a ZBook with RTX5000 GPU. </p>
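<p>The re-ranking rule above (bbox score times the average mask probability over pixels with prob &gt;= 0.5) is simple enough to sketch directly. This is only an illustration of the stated rule, not the team's actual code:</p> <pre><code>import numpy as np

def rerank_score(box_score, mask_prob):
    # Re-score an instance as: bbox score * mean mask probability over
    # the pixels predicted as foreground (prob &gt;= 0.5).
    fg = mask_prob &gt;= 0.5
    if not fg.any():
        return 0.0
    return float(box_score * mask_prob[fg].mean())

# toy usage
rng = np.random.default_rng(0)
print(rerank_score(0.9, rng.random((128, 128))))
</code></pre>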
Sartorius - Cell Instance Segmentation
22th place solution / for English edition and 中文版本
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Sartorius - Cell Instance Segmentation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to my teammates in this comp ! <a href="https://www.kaggle.com/wgz123" target="_blank">@wgz123</a> <a href="https://www.kaggle.com/deeeeeeeplearning" target="_blank">@deeeeeeeplearning</a> <a href="https://www.kaggle.com/shajiayu" target="_blank">@shajiayu</a> They do a lot in this comp!</p> <h1>English edition</h1> <p><code>SOTA</code>: Mask-RCNN<br> <code>backbone</code>: SwinTransformer<br> <code>anchor_size</code>: mutil anchor size for different sortation<br> <code>anchor_ratio</code>: different ratio for different sortation<br> <code>pre-trained model</code>: by using outer data provided by this comp<br> <code>data augmentations</code>: Resize(), Pad(), Grid(), RandomFlip()<br> <code>TTA</code>: Resize(), Pad(), RandomFlip()<br> <code>Lr-plan</code>: CosineAnnealing<br> <code>Four models ensemble</code>: One model for classifier and the rest for segmentation<br> <code>swa</code>: Stochastic Weights Averaging<br> <code>score_thr</code>: min score for each instance<br> <code>min_pixel</code>: min pixel for each instance<br> <code>fix overlap</code>: fix overlap by the index</p> <h1>中文版本</h1> <p><code>SOTA</code>:Mask-RCNN<br> <code>骨干网</code>:SwinTransformer<br> <code>框大小策略</code>:不同类别采用不同的框大小<br> <code>宽高比策略</code>:统计标注种类宽高比分布<br> <code>预训练</code>:官方提供的外部数据进行预训练<br> <code>数据增强</code>:Resize(), Pad(), Grid(), RandomFlip()<br> <code>TTA</code>:Resize(), Pad(), RandomFlip()<br> <code>学习率策略</code>:余弦学习率 <br> <code>四模型集成</code>:一模型分类三模型分割<br> <code>随机加权平均</code>:swa<br> <code>卡置信度阈值</code>:min_score<br> <code>卡最小像素阈值</code>:min_pixel<br> <code>去除重叠</code>:按照索引顺序去除</p> <p>Second comp in kaggle! keeping fighting!</p> <p>I will have a discussion with my teammates whether to publish our code</p> <p>I use the Resize() to resize the shorter edge from 800 to 1000</p> <p>Right,mmdetection</p>
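<p>A minimal sketch of the "fix overlap by the index" step listed above, assuming the instance masks are already sorted so that earlier (higher-priority) instances keep contested pixels:</p> <pre><code>import numpy as np

def fix_overlap_by_index(masks):
    # Earlier masks keep their pixels; later masks lose any pixel
    # that has already been assigned to a previous instance.
    used = np.zeros_like(masks[0], dtype=bool)
    fixed = []
    for m in masks:
        m = np.logical_and(m, ~used)
        used |= m
        fixed.append(m)
    return fixed

# toy usage: two overlapping squares
a = np.zeros((6, 6), dtype=bool); a[1:4, 1:4] = True
b = np.zeros((6, 6), dtype=bool); b[2:5, 2:5] = True
print(fix_overlap_by_index([a, b])[1].sum())  # overlap removed from b
</code></pre>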
Sartorius - Cell Instance Segmentation
2nd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Sartorius - Cell Instance Segmentation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Sartorius and Kaggle for hosting this interesting competition. I also would like to thank <a href="https://www.kaggle.com/steamedsheep" target="_blank">@steamedsheep</a> for the great collaborative teamwork during the competition, we both work very hard to achieve this result.</p> <p>Our solution is an ensemble of 2 object detection model, 1 unet and 2 maskrcnn model as shown in figure below. I am in charge of object detection and Unet while <a href="https://www.kaggle.com/sheep" target="_blank">@sheep</a> focuses on maskrcnn and ensemble. <br> <img src="https://i.ibb.co/WVPHQ93/cell.png" alt=""></p> <p>We use yolov5x6 and effdetD3 for the object detection task. The training procedure is the same for both model as shown in the figure below. <br> The models are trained several rounds on Livecell and train-semi-supervised dataset before finetuning with the competition data. In the inference phase the output boxes of 2 model are ensemble with maskrcnn boxes using WBF.</p> <p><img src="https://i.ibb.co/98vHr2N/cell1.png" alt=""></p> <p>The output boxes after WBF are feeded into a Unet and Maskrcnn (mask head) to get the segmentation mask for each box. We use weighted average to ensemble the raw mask of Unet and Maskrcnn. </p> <p>We use an unet with effificientb5 encoder to do segmentation on the cropped cell. Since the cropped cell sometimes include the neighbor cell, we predict the mask of the center cell and neighbor cell as 2-class segmentation. </p> <p><img src="https://i.ibb.co/LPDPsbK/cell2.png" alt=""></p> <p><img src="https://i.ibb.co/vxK9M3R/s4.jpg" alt=""><br> From left to right: cropped cell; ground-truth of center mask; predicted mask of center mask; ground-truth of neighbor cell; predicted mask of neighbor cell.</p> <p>hi, we use the same learning rate in both stages. default yolov5 lr and 3e-5 for effdetD3</p> <p>hi, we don't have a plan to share the code yet. if we do, I will update it in the post later.</p> <p>thanks, the detection part is very important. predicted neighboor cell does not help too much. </p> <p>Thanks for the question <a href="https://www.kaggle.com/yousof9" target="_blank">@yousof9</a> <br> we started with simpler approach. At the beginning sheep use a maskrcnn and yolov5 while my approach is yolov5 and unet. After teaming up we realized we lose global information if we use only unet on the cropped cell, then we tried to ensemble both unet (for better local feature) and maskrcnn (for better global feature). Later on we notice that the detection model is the key to have a good score, so we focus on improving object detection part by tuning and adding more detection model.</p> <p>thank you</p> <p>I just resize all cell to a fixed size 128x128 for unet.</p> <p>the output mask of unet or maskrcnn is raw pixel values (the value is in range [0-1]). 
</p> <p>basically this</p> <pre><code>class CellDataset(Dataset): def __init__(self, cfg, df, tfms=None, fold_id = 0, is_train = True): super().__init__() self.df = df.reset_index(drop=True) self.cfg = cfg self.fold_id = fold_id self.transform = tfms self.is_train = is_train def __getitem__(self, index): row = self.df.loc[index] img_id = row.id img_path = f'{self.cfg.data_dir}/train/{img_id}.png' assert os.path.isfile(img_path), f'{img_path}!!!!' image = cv2.imread(img_path) im_h, im_w = image.shape[:2] mask = rle_decode(row.annotation, (row.height, row.width)) all_mask = cv2.imread(f'{self.cfg.mask_dir}/{img_id}.png',0) all_mask = np.uint8(1*(all_mask&gt;0)) ys, xs = np.where(mask) x1, x2 = min(xs), max(xs) y1, y2 = min(ys), max(ys) cr_w = x2 - x1 cr_h = y2 - y1 shift_ratio = 0.1 if self.is_train: pad_left = random.randint(-1*int(shift_ratio*cr_w), int(shift_ratio*cr_w)) pad_right = random.randint(-1*int(shift_ratio*cr_w), int(shift_ratio*cr_w)) pad_top = random.randint(-1*int(shift_ratio*cr_h), int(shift_ratio*cr_h)) pad_bot = random.randint(-1*int(shift_ratio*cr_h), int(shift_ratio*cr_h)) else: pad_left = 0 pad_right = 0 pad_top = 0 pad_bot = 0 cr_x1 = round(max(0, x1-pad_left)) cr_y1 = round(max(0, y1-pad_top)) cr_x2 = round(min(im_w-1, x2+pad_right)) cr_y2 = round(min(im_h-1, y2+pad_bot)) image = image[cr_y1:cr_y2, cr_x1:cr_x2] mask = mask[cr_y1:cr_y2, cr_x1:cr_x2] all_mask = all_mask[cr_y1:cr_y2, cr_x1:cr_x2] all_mask[mask&gt;0] = 0 mask = np.stack([mask,all_mask]).transpose(1,2,0) if self.transform is not None: res = self.transform(image=image, mask=mask) mask = res["mask"] image = res['image'] image = image.transpose(2,0,1) image = image/255 mask = mask.transpose(2,0,1) return torch.from_numpy(image), torch.from_numpy(mask) def __len__(self): return len(self.df) </code></pre> <p>WBF stands for Weighted Boxes Fusion. It is a method to ensemble object detection models. We have no plan for the code yet.</p> <p>thank you </p> <p>we use all unlabel images. We actually didn't focus on semi-supervised learning, maybe your method can work a little better. </p> <ul> <li>I use 2X 3090 in this competition. </li> <li>I train 50 epoch in pretraining stage and 100 epoch fine tuning with competition data.</li> </ul>
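<p>Since WBF comes up several times in this thread, here is a generic usage sketch with ZFTurbo's <code>ensemble-boxes</code> package, which is a common off-the-shelf implementation (boxes must be normalized to [0, 1]). Whether the team called the library exactly like this is an assumption; the boxes and weights below are toy values:</p> <pre><code># pip install ensemble-boxes
from ensemble_boxes import weighted_boxes_fusion

# Normalized (x1, y1, x2, y2) boxes from two detectors, e.g. yolov5x6 and effdetD3
boxes_list = [
    [[0.10, 0.10, 0.30, 0.30], [0.50, 0.50, 0.70, 0.70]],
    [[0.11, 0.09, 0.31, 0.29]],
]
scores_list = [[0.90, 0.60], [0.80]]
labels_list = [[0, 0], [0]]

boxes, scores, labels = weighted_boxes_fusion(
    boxes_list, scores_list, labels_list,
    weights=[1, 1], iou_thr=0.6, skip_box_thr=0.01,
)
print(boxes, scores, labels)
</code></pre>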
Sartorius - Cell Instance Segmentation
3rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Sartorius - Cell Instance Segmentation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Mask-RCNN has a segmentation head so it's needed to output segmentation compared to all the object detection architectures. The backbone (Resnest 200) was initially selected because the pretraining weights were already available in the Livecell github repo.</p> <p>Applying a modified version of weighted box fusion (<a href="https://github.com/ZFTurbo/Weighted-Boxes-Fusion" target="_blank">https://github.com/ZFTurbo/Weighted-Boxes-Fusion</a>) with summed score of fused boxes instead of averaged score. Then normal non-max suppresion is applied after weighted box fusion. Optimal IOU thresholds were tuned for each class.</p> <p>Thank you very much to Kaggle and the organizers of this competition and more specifically to <a href="https://www.kaggle.com/christoffersartorius" target="_blank">@christoffersartorius</a>, the entire Sartorius team and the brave data annotators that allow us to do countless tensor multiplications on GPUs with their training data.</p> <p>Congratulations also to all the participating teams and to the 1st and 2nd place teams for their great achievement.</p> <p>I also thank my teammates <a href="https://www.kaggle.com/slawekbiel" target="_blank">@slawekbiel</a> <a href="https://www.kaggle.com/rytisva88" target="_blank">@rytisva88</a> and <a href="https://www.kaggle.com/cpmp" target="_blank">@cpmp</a> that were absolutely amazing during this competition. 
I learned a lot from all of you in the past intense weeks.</p> <p><strong>Here is the general schema of our ensemble solution:</strong><br> <img src="https://i.postimg.cc/wMK8KkcS/Ensemble-chart.png"></p> <p>The two Mask-RCNN approaches are extended versions of <a href="https://www.kaggle.com/slawekbiel" target="_blank">@slawekbiel</a> great serie of notebooks using Detectron2:<br> <a href="https://www.kaggle.com/slawekbiel/positive-score-with-detectron-1-3-input-data">1) Inputs</a> ==&gt; <a href="https://www.kaggle.com/slawekbiel/positive-score-with-detectron-2-3-training">2) Training</a> ==&gt; <a href="https://www.kaggle.com/slawekbiel/positive-score-with-detectron-3-3-inference">3) Inference</a></p> <p><strong>1) Mask-RCNN models:</strong></p> <ul> <li>Resnest200-Mask-RCNN (<a href="https://github.com/chongruo/detectron2-ResNeSt" target="_blank">https://github.com/chongruo/detectron2-ResNeSt</a>)</li> <li>Weight initialization from Livecell pretraining (<a href="https://github.com/sartorius-research/LIVECell" target="_blank">https://github.com/sartorius-research/LIVECell</a>)</li> <li>Multiple size + hflip training augmentations</li> </ul> <p><strong>A) 3 classes models</strong></p> <ul> <li>5 folds</li> <li>Multiple size inference</li> </ul> <p><strong>B) Class specific models</strong></p> <ul> <li>5 folds +- full data "fold" for each class</li> <li>TTA + WBF + NMS + Mask averaging</li> <li>TTA (box inference): Multiple size, Hflip + Vflip</li> <li>TTA (mask inference): Multiple size, Hflip </li> <li>Class specific parameters and hyperparameters (anchors size, image sizes, amount of TTA, wbf IOU)</li> </ul> <p><img src="https://i.ibb.co/NtP7DK5/Mask-RCNN-class-specific.jpg"></p> <p><strong>2) Cellpose</strong><br> Slawek detailed his great Cellpose approach <a href="https://www.kaggle.com/c/sartorius-cell-instance-segmentation/discussion/297984">here </a> which was blending very well with the Mask-RCNN models.</p> <p><strong>Final week experiments:</strong></p> <p><strong>Pseudolabels</strong></p> <ul> <li>Labels generated from the ensemble of models for the unsupervised data</li> <li>Models trained on merged pseudolabels + training data</li> <li>These models had quite variable results both on public LB and private LB</li> </ul> <p><strong>Ensemble variations</strong></p> <ul> <li>Variable experiments in adding/removing number of models in ensemble, with the allowed 9h runtime, gave +- 0.002 variation both on public LB and private LB depending on configurations.</li> </ul> <p><strong>Submission selection</strong></p> <ul> <li>To decrease correlation of final submissions, we choose one ensemble with some pseudolabels models and one ensemble without any pseudolabels models. Both submissions scored 0.354 on private LB.</li> </ul>
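<p>The modified WBF described at the top of this thread (summed score of fused boxes instead of averaged score, followed by NMS) can be illustrated with a simplified fusion routine. This is a rough sketch of the idea, not the team's actual patch of the ensemble-boxes code:</p> <pre><code>import numpy as np

def box_iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def wbf_summed_scores(boxes, scores, iou_thr=0.6):
    # Cluster boxes greedily by IoU, fuse coordinates with a score-weighted
    # average, and give the fused box the SUM of its members' scores.
    order = np.argsort(scores)[::-1]
    clusters = []  # each cluster: [list of boxes, list of scores]
    for i in order:
        for c in clusters:
            fused = np.average(c[0], axis=0, weights=c[1])
            if box_iou(boxes[i], fused) &gt; iou_thr:
                c[0].append(boxes[i]); c[1].append(scores[i])
                break
        else:
            clusters.append([[boxes[i]], [scores[i]]])
    fused_boxes = np.array([np.average(c[0], axis=0, weights=c[1]) for c in clusters])
    fused_scores = np.array([np.sum(c[1]) for c in clusters])
    return fused_boxes, fused_scores

boxes = np.array([[10, 10, 50, 50], [12, 11, 51, 49], [80, 80, 120, 120]], dtype=float)
scores = np.array([0.9, 0.7, 0.8])
print(wbf_summed_scores(boxes, scores))
</code></pre>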
Sartorius - Cell Instance Segmentation
4th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Sartorius - Cell Instance Segmentation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Kaggle and Sartorius for this interesting competition. I also thank my teammates, <a href="https://www.kaggle.com/tanakar" target="_blank">@tanakar</a> and <a href="https://www.kaggle.com/tereka" target="_blank">@tereka</a>, <a href="https://www.kaggle.com/ren4yu" target="_blank">@ren4yu</a>. I learned a lot from them.</p> <p>Our experiments are based on CBNetV2 <a href="https://github.com/VDIGPKU/CBNetV2" target="_blank">repo</a></p> <p>In the following, I want to give a summary of our solution.</p> <h1>Overall</h1> <p>Our solution consists of three parts: classification part, instance segmentation part, post-processing part. Each cell type has different instance segmentation models and post-processing, so classification is necessary.<br> <a href="https://postimg.cc/jnNJBNts" target="_blank"><img src="https://i.postimg.cc/RhpQ6QYn/overall.png" alt="overall.png"></a></p> <h1>Classification</h1> <p>3class CBNet DBS Cascade-RCNN (num_classes=3) was used as a classification model. For a given image, this model classifies the image into the cell type with the highest number of detected cells. <br> This model is first pre-trained with LIVE_CELL (num_classes=1) and then fine-tuned with the competition's train data (num_classes=3).<br> I tested this on semi-supervised data and it was able to classify them perfectly.</p> <h1>Instance Segmentation</h1> <p>We prepared at least one model for each of shsy5y, astro, and cort. These models were trained in different settings and with different data. </p> <h2>Pseudo Labeling</h2> <p><a href="https://www.kaggle.com/tereka" target="_blank">@tereka</a> <br> For all cell types, pseudo labeling on semi-supervised data improved both CV and LB.<br> Because of the small number of train data, it was better to use three types of cells when using only train data. 
However, because of the large number of semi-supervised data, we were able to change the data used for each cell type.<br> <a href="https://drive.google.com/file/d/1PSIxdNiwtMwTV3Wq6AJbWcp2plnoIVQA/view?usp=sharing" target="_blank">Pseudo labeling</a></p> <h3>Models for pseudo labeling</h3> <p>Data: live_cell → train (all cell types)<br> Model: 3 types of 1class CBNet DBS Cascade-RCNN<br> We changed the MMdet config file depending on the target cell type.</p> <p><a href="https://postimg.cc/DW2dsCF1" target="_blank"><img src="https://i.postimg.cc/W4Z9BKHY/train-strategy.png" alt="train-strategy.png"></a></p> <h2>shsy5y models</h2> <p><a href="https://www.kaggle.com/tyaiga" target="_blank">@tyaiga</a> <br> Data1: live_cell → train (all cell types)<br> Model1: CBNet DBS Cascade-RCNN<br> Data2: live_cell → semi-sup+train (shsy5y &amp; cort) → train (shsy5y)<br> Model2: CBNet DBS Cascade-RCNN<br> The number of cells in shsy5y and cort are very different, but the individual cells are similar, so, we used the two as training data when using semi-sup data.</p> <h2>astro models</h2> <p><a href="https://www.kaggle.com/tyaiga" target="_blank">@tyaiga</a> <br> Data: live_cell → semi-sup+train (astro) → train (astro)<br> Since astro has a very different shape and size from the other cells, we improved the score by using only astro data fot train data.<br> In astro, I only used one model because the ensemble did not work well.</p> <h2>cort models</h2> <p><a href="https://www.kaggle.com/tanakar" target="_blank">@tanakar</a> <br> Data: live_cell → semi-sup+train (shsy5y &amp; cort) → train (court)<br> Model: 2 x HTC resnext64x4d, 2 x CBNet DBS Cascade-RCNN<br> These four models were combined into one model using the method below.</p> <h3>ensemble detection model</h3> <p><a href="https://www.kaggle.com/tereka" target="_blank">@tereka</a> <a href="https://www.kaggle.com/ren4yu" target="_blank">@ren4yu</a> <br> We only used the two-stage model. So, we used an ensemble method like <a href="https://github.com/amirassov/kaggle-imaterialist" target="_blank">this</a>, where RPNs are connected and treated like a single two-stage model.<br> This improved cort score.<br> <a href="https://postimg.cc/pmvDb0m3" target="_blank"><img src="https://i.postimg.cc/vZgXnkb8/cort-models.png" alt="cort-models.png"></a></p> <h2>Augmentation</h2> <p>Multiscale: astro and cort → (1280, 1280)~(1792, 1792), shsy5y → (1280, 1280)~(1536, 1536)<br> Horizontal Flip<br> Vertical Flip</p> <h2>TTA</h2> <p>Multiscale + horizontal flip (vertical flip and diagonal flip didn't work)</p> <h1>Post-Processing</h1> <h2>astro pp</h2> <p><a href="https://www.kaggle.com/ren4yu" target="_blank">@ren4yu</a> <br> The astro annotation was broken, so I reproduced it with cv2.findContours and cv2.fillConvexPoly.</p> <h2>fix-overlap</h2> <p><a href="https://www.kaggle.com/tyaiga" target="_blank">@tyaiga</a> <br> In this competition, predictions are not allowed to overlap. So, we have to eliminate overlap part in post-processing. <br> In our fix-overlap process, first we took cells with a higher cofidence score than classwise threshold. Then, we processed from the highest score to the lowest, and deleted instances with a large percentage of already-used area. 
In addition, only the shsy5y score was improved by removing instances with fewer pixels than a threshold.</p> <h2>semantic re-rank</h2> <p><a href="https://www.kaggle.com/ren4yu" target="_blank">@ren4yu</a> <br> As shown in the RefineMask <a href="https://arxiv.org/abs/2104.08569" target="_blank">paper</a>, the confidence score of the instance segmentation model does not reflect the correctness of the mask. Therefore, we modified the scores and re-ranked the instances with a semantic segmentation model (UNet++).</p> <h2>fix-overlap ensemble</h2> <p><a href="https://www.kaggle.com/tyaiga" target="_blank">@tyaiga</a> <a href="https://www.kaggle.com/tanakar" target="_blank">@tanakar</a> <a href="https://www.kaggle.com/ren4yu" target="_blank">@ren4yu</a> <br> In this ensemble method, we first concatenate the outputs of multiple models and sort them by their semantic re-rank scores. Next, we applied the fix-overlap post-processing to remove the overlap. This improved the shsy5y score.<br> <a href="https://postimg.cc/FfRkNwNC" target="_blank"><img src="https://i.postimg.cc/R017Qz8m/fixoverlap-ensemble.png" alt="fixoverlap-ensemble.png"></a></p> <h2>WBF with mask</h2> <p><a href="https://www.kaggle.com/tereka" target="_blank">@tereka</a> <br> We tried WBF extended to masks. This improved the Public LB score, but the 'ensemble detection model' was better on the Private LB.</p> <h1>Tips</h1> <p>The default config file of MMDetection is tuned for COCO. Depending on the shape of the instances and the number of instances in an image, it is necessary to change the settings for training,<br> e.g. anchor_generator.ratios, rpn_proposal.nms_pre, and so on:</p> <pre><code>rpn_proposal.nms_pre=4000, rpn_proposal.nms_post=4000, rpn_proposal.max_per_img=4000 anchor_generator.ratios=[0.25, 0.5, 1.0, 2.0, 4.0] </code></pre> <p>This improved the astro score.<br> The settings for each cell type are different.</p> <p>S in 'DBS' means Small.</p>
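<p>The semantic re-ranking above can be pictured with a small sketch: each instance score is adjusted by how strongly a separate semantic segmentation model believes the pixels under the mask are foreground. The exact combination rule the team used is not given, so the geometric mean below is only an assumed example:</p> <pre><code>import numpy as np

def semantic_rerank(inst_scores, inst_masks, semantic_prob):
    # semantic_prob: HxW foreground probability map from a UNet-like model.
    # Combine the detector score with the mean semantic probability under
    # each instance mask (the geometric mean is an assumption, not the team's rule).
    new_scores = []
    for s, m in zip(inst_scores, inst_masks):
        sem = semantic_prob[m].mean() if m.any() else 0.0
        new_scores.append(float(np.sqrt(s * sem)))
    return np.array(new_scores)

# toy usage
rng = np.random.default_rng(0)
masks = [rng.random((64, 64)) &gt; 0.8 for _ in range(3)]
print(semantic_rerank([0.9, 0.6, 0.3], masks, rng.random((64, 64))))
</code></pre>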
Sartorius - Cell Instance Segmentation
5th place solution, maskrcnn, pseudo label and cellpose
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Sartorius - Cell Instance Segmentation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats! :)<br> I predict the mask of an image, get the median diameter of the cells, and then re-predict this image using the estimated diameter.</p> <p>Congrats to all the winners, and thanks to Sartorius team and Kaggle hosted this competition, it’s very challenging and interest.</p> <p><strong>Methods</strong></p> <p>My pipeline can be described in the following figure.<br> <img src="https://i.imgur.com/pF2p304.png" alt="pipeline.png"><br> Step 1: MaskRCNN models were trained on Detectron2 using LIVECell_dataset_2021 without shsy5y cells.<br> Step 2: Finetune the MaskRCNN models on competition train dataset and LIVECell shsy5y cells.<br> Step 3: Predict the train_semi_supervised dataset and used them as pseudo labels.<br> Step 4: Finetune the models using LIVECellshsy5y cells + competition trainset + train_semi_supervised(pseudo)<br> Step 5: Predict the competition trainset and LIVECellshsy5y cells using the above models.<br> Step 6: Generate the flow-x, flow-y and semantic segmentation base on Step5 results using Cellpose.<br> Step 7: Train Cellpose model on LIVECell_dataset_2021 without shsy5y cells.<br> Step 8: Finetune Cellpose model using Step7 results as additional channels.<br> Step 9: Predict and post-process using Cellpose. The diameter was set to 19 and re-predict.</p> <p>The MaskRCNN+Cellpose architecture is in the following figure.<br> <img src="https://i.imgur.com/WJnAQOH.png" alt="cellpose-arch.png"><br> <strong>Results</strong><br> <img src="https://i.imgur.com/dwA0BNE.png" alt="results.png"><br> <strong>Things tried but not worked</strong><br> 1.Deep Watershed and it’s variants.<br> 2.OmniPose </p> <p><strong>Things want to try if have more time</strong><br> 1.Cellpose as additional heads of MaskRCNN.<br> 2.Detection models using all kinds of datasets+UNET Singel Cell segmentation.</p> <p><strong>Happy New Year!</strong></p> <p><a href="https://www.kaggle.com/jy2tong" target="_blank">@jy2tong</a> The logic behind is: I treat shsy5y cells of LIVECell_dataset_2021 as a part of train set data.</p> <p>Thanks for your kind words, but time is fair to everyone, so I quite satisfied with the final result. :)</p>
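<p>The two-pass diameter trick mentioned in the replies (predict once, take the median cell diameter, then re-predict with that diameter) is easy to sketch. The helper below is plain NumPy; the commented Cellpose calls are only pseudocode for the idea, not the exact Cellpose API usage from this solution:</p> <pre><code>import numpy as np

def median_cell_diameter(label_img):
    # label_img: instance label image (0 = background, 1..N = cells).
    # Use the equivalent-circle diameter of each cell and take the median.
    ids = np.unique(label_img)
    ids = ids[ids &gt; 0]
    if len(ids) == 0:
        return 0.0
    areas = np.array([(label_img == i).sum() for i in ids])
    return float(np.median(2.0 * np.sqrt(areas / np.pi)))

# Rough two-pass flow (Cellpose calls shown as pseudocode):
#   masks = cellpose_model.eval(img, diameter=None)   # first pass
#   d = median_cell_diameter(masks)                   # e.g. around 19 here
#   masks = cellpose_model.eval(img, diameter=d)      # re-predict
</code></pre>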
Sartorius - Cell Instance Segmentation
6th place solution. Higher resolution is all you need.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Sartorius - Cell Instance Segmentation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>When using original images, my public LB was 0.320<br> When slicing images to 3x3=9 smaller overlapped images, my public LB was 0.331<br> When slicing images to 4x4=16 smaller overlapped images, my public LB was 0.334</p> <p>From 0.334 to 0.340 public LB, I used other tricks.</p> <h1>Preprocessing</h1> <p>I only used <code>train</code> images, without semi-supervised or LiveCell.</p> <p>All train images were sliced into smaller size, with window size (208, 281), and stride (104, 140). So a (520, 704) image can be slices to 16 smaller images.</p> <h1>Training</h1> <h2>Model</h2> <p>It seems that model does not really matter in private LB. My res2net-101, resnext-101, detectoRS-r50 can get 0.350 in private LB, even though their public LB are very different.</p> <p>Ensemble cannot improve both public and private LB significantly. For me, single model is enough.</p> <h2>Hyperparams</h2> <p>I trained with mmdet, default 1x schedule and 1x swa training. Image scales were set as (1333, 1333)-(800,800).</p> <h1>Inferencing</h1> <p>Test images were also sliced as training images. The test image scales were [(1333,1333), (1024,1024), (800,800)].</p> <p>Masks were iterated from higher score to lower score. Scores lower than class-wise threshold and areas lower than class-wise pixel-threshold were removed. </p> <p>When dealing with overlaps, I remove mask whose overlapped part was more than 20% of itself.</p> <p>RCNN's NMS was replaced by Weighted cluster-NMS with DIoU.</p> <h1>Submission</h1> <p>My final submission was ensemble of HTC-Res2Net101 (trained with all data), HTC-ResNeXt101 (trained with all data) and HTC-Res2Net101 (trained with fold0)</p> <h1>Weakness</h1> <p>LONG inferencing time, 2h per model.</p> <h1>Github Repo</h1> <p><a href="https://github.com/CarnoZhao/mmdetection/tree/sartorius_solution" target="_blank">https://github.com/CarnoZhao/mmdetection/tree/sartorius_solution</a></p> <p>Yes, some of my models also meeted useless TTAs. It's wierd.</p> <p>I will update to my github repo soon!</p> <p>Yes I used Horizontal + Vertical flip, [(1333,1333), (1024,1024), (800,800)] multiscale test. All of them are provided in mmdet.</p> <p>I also did some experiments about anchors. I think when using low resolution with small anchor size, the feature map downsampled from image is too small to provide useful information, especially in dense objects situation.</p> <p>I think the key point is using higher resolution in inferencing. To do this, we need training models using higher resolution (Random crop or slicing as I did)</p> <p>The biggest motivation is that the objects are small and densly distributed, which is different than other object detection dataset like COCO. However, my GPU cannot train with really large images, so cutting them and resize cutted images to bigger one seems natural.</p> <p>Good question, thats also very important.<br> E.g, say slice A's x axis ranges from 0 to 100, and slice B's x axis ranges from 50 to 150. 
The middle point of the overlap is 75, so I kept slice A's predictions whose box center ranges from 0-75, and slice B's predictions whose box center ranges from 75-150.</p> <p>This is only a simple example. You can apply it to x axis overlap, y axis overlap, and also multiple sequential overlaps.</p> <p>The details of the implementation can be found in <a href="https://www.kaggle.com/carnozhao/cell-submission?scriptVersionId=83795256&amp;cellId=3" target="_blank">my submission notebook</a></p> <pre><code>valid = (small_box[:,-1] &gt; THRESHOLDS_small[class_id]) &amp; \ ~((i &lt;= 2) &amp; (small_box[:,[1,3]].mean(1) &gt; 3 * H / 10)) &amp; \ ~((i &gt;= 1) &amp; (small_box[:,[1,3]].mean(1) &lt; H / 10)) &amp; \ ~((j &lt;= 2) &amp; (small_box[:,[0,2]].mean(1) &gt; 3 * W / 10)) &amp; \ ~((j &gt;= 1) &amp; (small_box[:,[0,2]].mean(1) &lt; W / 10)) </code></pre> <p>Here, H/10, W/10, 3H/10, 3W/10 are the overlap middle points of each small image.</p> <p>github repo updated</p> <p>I used H-flip, V-flip, Rot90 multiscale in training as augmentations. I didn't do any hyperparameter tuning because that's too time consuming. Again, the KEY point is smaller images :)</p> <p>In offline slicing, I used 25% as the threshold. If the mask area in a slice is lower than 25% of its original area, it will be dropped.</p> <p>In inferencing, I used the overlap mid point to decide whether a prediction should be kept. Details are mentioned <a href="https://www.kaggle.com/c/sartorius-cell-instance-segmentation/discussion/297986#1633741" target="_blank">here</a>.</p> <p>Moreover, my slicing size is (208, 281), which is larger than most of the cells, and the stride is (104, 140), which is exactly half of the slice size. So most cell predictions will be fully covered by one slice. Those half predictions (at two different slices' edges) usually have lower scores than a full prediction at the slice center, which means that half predictions will be removed in postprocessing.</p> <p>Here is my understanding: downsampling small images loses a lot of information, and a proposal covers fewer feature points, which is bad for object detection.<br> But enlarging the images increases the amount of information, and a proposal can cover more feature points.</p> <p>Especially since the targets here are all quite small, the model tends to match them at deeper feature levels, which further aggravates the loss of detail from downsampling.</p> <p>The same reasoning applies to the subsequent RoIPooling.</p> <p>Sorry, I don't understand your question😂. Can you repeat it in Chinese?</p> <p>I resized 208,281</p> <p>Sorry for my late reply. In mmdet, input data including the image array and its "original information" named <code>img_meta</code> is passed to the model. The model will resize the output mask to the image's original size.</p> <p>I tried train/test image scales: <br> (400,400)-(1333,1333)<br> (600,600)-(1333,1333)<br> (800,800)-(1333,1333)<br> (1024,1024)-(1333,1333)</p> <p>(800,800) performed best in local cross validation. (1024,1024) was slightly lower. (400,400) and (600,600) were much worse.</p> <p>First, make sure that you are on the "sartorius" branch. Then, "DownSampleCocoDataset" is implemented in <code>mmdet/dataset/dataset_wrappers.py</code>.</p> <p>It downsamples a dataset by a given ratio. E.g., a dataset including 1000 images with downsample ratio 2 will only use 500 randomly sampled images in training per epoch. It is easy to implement and you can try it yourself.</p>
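<p>The offline slicing described above is straightforward to reproduce. Here is a minimal sketch of cutting a 520x704 image into overlapping tiles with window (208, 281) and stride (104, 140), which yields the 4 x 4 = 16 tiles mentioned in the write-up (the mask-area filtering is omitted):</p> <pre><code>import numpy as np

def slice_image(img, win=(208, 281), stride=(104, 140)):
    # Cut an image into overlapping tiles and remember their top-left corners.
    h, w = img.shape[:2]
    tiles, coords = [], []
    for y0 in range(0, h - win[0] + 1, stride[0]):
        for x0 in range(0, w - win[1] + 1, stride[1]):
            tiles.append(img[y0:y0 + win[0], x0:x0 + win[1]])
            coords.append((y0, x0))
    return tiles, coords

tiles, coords = slice_image(np.zeros((520, 704), dtype=np.uint8))
print(len(tiles))  # 16
</code></pre>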
Sartorius - Cell Instance Segmentation
8th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Sartorius - Cell Instance Segmentation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>My solution points</p> <ol> <li>cascade mask rcnn with resnext152 backbone</li> <li>pretrain on LIVECell dataset</li> <li>pseudo labeling</li> <li>WBF and WMF(weighted masks fusion)</li> <li>train and inference on each image class</li> </ol> <p>Pipeline is <a href="https://drive.google.com/file/d/1Bl5AzzTsm9_tjwezlXUvyEprOwmZcWSw/view?usp=sharing" target="_blank">this</a></p> <p>Training strategy : <br> I tried to train the model in all classes, but it did not work, so I trained and inferenced with each class.<br> I pretrained models on LIVECell dataset and finetuned with competition data. Then, I inferenced on train-semi-supervised data to generate pseudo labels. And finally finetuned these models with competition data and pseudo labels.<br> The ways to pretrain and finetune are similar to these codes. <a href="https://www.kaggle.com/markunys/sartorius-transfer-learning-train-with-livecell" target="_blank">pretrain</a>, <a href="https://www.kaggle.com/markunys/sartorius-transfer-learning-train" target="_blank">finetune</a></p> <p>Inference strategy:<br> I ensemble the boxes predicted by the cascade mask rcnn and yolov5x with WBF, and use the boxes to generate masks. I use WMF, which is WBF applied to the mask ensemble, and ensemble the masks of folds.<br> Inference code and WMF code is here.<a href="https://www.kaggle.com/markunys/8th-place-solution-inference" target="_blank">Inference code</a>, <a href="https://www.kaggle.com/markunys/ensemble-boxes" target="_blank">WMF code</a><br> WMF code is identical to <a href="https://github.com/ZFTurbo/Weighted-Boxes-Fusion" target="_blank">this</a> except for including WMF.</p> <p>I have spent hundreds of hours on this competition, and I am very happy with the results!!</p>
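<p>The WMF (weighted masks fusion) used above is linked as code in the write-up; conceptually, once instances from different folds/models have been matched, their masks are averaged with score weights and re-binarized. The snippet below is only a sketch of that core averaging step for already-matched masks, not the actual WMF implementation:</p> <pre><code>import numpy as np

def fuse_matched_masks(masks, scores, thr=0.5):
    # masks: list of k binary HxW masks that were matched to the same instance.
    # Average them weighted by model confidence, then threshold back to binary.
    w = np.asarray(scores, dtype=np.float32)
    stack = np.asarray(masks, dtype=np.float32)
    fused = np.tensordot(w / w.sum(), stack, axes=1)
    return fused &gt;= thr

# toy usage
a = np.zeros((6, 6), dtype=np.uint8); a[1:4, 1:4] = 1
b = np.zeros((6, 6), dtype=np.uint8); b[2:5, 2:5] = 1
print(fuse_matched_masks([a, b], [0.9, 0.4]).sum())
</code></pre>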
Sartorius - Cell Instance Segmentation
9 Place solution: pretrain & semi-supervised & postprocessing & Trust both cv and lb
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Sartorius - Cell Instance Segmentation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you!</p> <p>Thanks to all my teammates in this competition: @xiuqi0 <a href="https://www.kaggle.com/guohey" target="_blank">@guohey</a> <a href="https://www.kaggle.com/kevin1742064161" target="_blank">@kevin1742064161</a> <a href="https://www.kaggle.com/gmhost" target="_blank">@gmhost</a>! We did it! We are masters! </p> <p>Here, on behalf of my teammates, I introduce our strategy, most of which is their effort.</p> <p><strong>Training</strong></p> <ol> <li><p>Pretraining: We treat LIVECell as 1 class. In our experiments, 1 class was better than 8 classes. In the early phase of this competition, pretraining an R50 could boost you up to the gold zone. </p></li> <li><p>Semi-supervised training: After pretraining, we had 2 models scoring 0.332 on the LB. Using NMS we reached 0.334. We used that pipeline to generate pseudo labels for the provided semi-supervised train data. After training a cascade rcnn with resnext152 as backbone, we reached 0.338.</p></li> </ol> <p><strong>Submission pipeline</strong></p> <p><strong>Preprocessing</strong></p> <p>Classify first. Our submission pipeline predicted instances of 2 classes in some images of our local validation set (while in the ground truth there was only one type of cell per image). So we applied a 5-fold resnet34 for classification before segmentation.</p> <p><strong>Postprocessing</strong></p> <p>We were stuck at 338 for about 20 days. It’s pretty weird that no matter how we ensembled other models with the 338 models, it scored worse. After applying a lot of post-processing, we finally edged past 338 on the LB.</p> <ol> <li><p>Handling broken masks: In the labels of Train and Test, astro cells have a large number of broken masks. Therefore, the trained models also tend to predict broken masks (the mask has a contour, but part of the area inside the contour is empty). When we use the classification model to determine whether an image is astro or shsy5y, we use cv2.findContours, and the broken mask is filled in by cv2.fillConvexPoly() according to the outer contour.</p></li> <li><p>NMS and NMW work for us, Soft-NMS and WBF do not. NMW is a bit better than NMS. </p></li> <li><p>Mask Weight: <br> 1) If the maximum IOU between a box and all other boxes exceeds a certain threshold, its mask is weighted, and if the weighted value of the mask is greater than another threshold, the corresponding pixel is set to 1.<br> 2) Compared with NMW, the motivation of mask weighting is that some mask information is used too little (although in NMW it is not discarded directly but weighted, the information of those merged boxes is still weakened). Mask weighting merges bboxes with high overlap before the NMW, so that less information is lost. However, setting this threshold too low may lead to redundant information. This parameter is another one we need to adjust on our local validation dataset.</p></li> <li><p>Discard the misclassified classes. There is usually only one kind of cell in an image, and we have used the classification network to do the classification in advance. 
We discarded the mis-classified instances directly.</p></li> <li><p>We modified the source code of NMW; the original code returned some duplicate bboxes and masks. We additionally returned the indices so that we could slice the weights.</p></li> <li><p>Weighted NMW: the score of the auxiliary model was multiplied by a coefficient less than 1.</p></li> <li><p>Weighted NMW by category (because there is no effect between classes). Since we use the classification model first, we know the class of each image. Then we use grid search to get the per-class weights mentioned above.</p></li> </ol> <p>In our final submission, we chose the one with the highest LB and another one with more models, a not-bad LB and good CV.</p> <p><a href="https://www.kaggle.com/mrinath" target="_blank">@mrinath</a> I think our semi-supervised approach is pretty simple. We just ran our submission pipeline on the unlabeled data and saved its predictions as ground truth. </p> <p>Hehehe 😄😃😏😊😁😍</p> <p>sheep, you are so strong! 👍</p> <p>Thank you!</p> <p>Congrats to you <a href="https://www.kaggle.com/duykhanh99" target="_blank">@duykhanh99</a> and your team. It seems that we, your team and odede are all very stable on both the LB and PB👀</p> <p>Thank you!</p> <p>Yes, I think pseudo labeling is a kind of semi-supervised learning.</p> <p><a href="https://www.kaggle.com/doonut" target="_blank">@doonut</a> good!! 💕</p> <p>Thank you!</p>
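<p>The broken-mask handling from point 1 above (outer contours via cv2.findContours, then cv2.fillConvexPoly) can be sketched as follows. Filling the convex hull of each outer contour is an assumption on my part about the exact variant used, since fillConvexPoly expects a convex polygon:</p> <pre><code>import cv2
import numpy as np

def fill_broken_mask(mask):
    # Close the holes of a "broken" astro mask: take the outer contours
    # and fill their convex hulls.
    mask = (mask &gt; 0).astype(np.uint8)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    out = np.zeros_like(mask)
    for cnt in contours:
        cv2.fillConvexPoly(out, cv2.convexHull(cnt), 1)
    return out

# toy usage: a ring with a hole in the middle gets filled
m = np.zeros((64, 64), np.uint8)
cv2.circle(m, (32, 32), 20, 1, thickness=3)
print(m.sum(), fill_broken_mask(m).sum())
</code></pre>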
Sartorius - Cell Instance Segmentation
[Viettel.DGD] Train4Ever 7th place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Sartorius - Cell Instance Segmentation <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello Kagglers, I am Truong Bui Nhat from team Train4Ever. First, I would like to thank the host for providing such an interesting instance segmentation challenge, then all the other teams for making a nice race.<br> Shout out for <a href="https://www.kaggle.com/tungvs" target="_blank">@tungvs</a> <a href="https://www.kaggle.com/namgalielei" target="_blank">@namgalielei</a> <a href="https://www.kaggle.com/damtrongtuyen" target="_blank">@damtrongtuyen</a> <a href="https://www.kaggle.com/duykhanh99" target="_blank">@duykhanh99</a>.</p> <p>This our brief solution for Sartorius - Cell Instance Segmentation competition. <br> Our solution is a flow with 5 stages back to back:</p> <ol> <li>Train baseline models</li> <li>Add LiveCell Shsy5y data, clean data, retrain and finetune models and use NMS by mask</li> <li>Pseudo labeling with potential models</li> <li>Pseudo labeling round 2 and Ensemble</li> <li>Post processing model</li> </ol> <p>Stage 1, we tried a lot of models with original data:<br> - MaskRCNN Cascade ResneSt200 (with LIVE CELL pretrain)<br> - PointRend <br> - GCNet<br> - CellPose<br> - MaskRCNN Swin<br> - MaskRCNN Cascade Swin<br> - SCNet<br> - Query Instance<br> - HTC<br> - CBNet V2<br> Note that we froze batch norm layers in backbone ResneSt200 and FPN because the training batch size was only 2. We thought it helped to keep the batch statistics unchanged, thus stabilizing the training.<br> After validating on local valid set + submit to Public leaderboard we selected the below candidates for stage 2:<br> - MaskRCNN Cascade ResneSt200 (with LIVE CELL pretrain): 0.329<br> - PointRend: 0.317<br> - MaskRCNN SWin: 0.292<br> - MaskRCNN Swin: 0.291<br> - GCNet: 0.304<br> - CellPose: 0.314<br> (All are reported on public leaderboard scores)</p> <p>Stage 2, we added clean data, retrained stage 1 candidates and finetuned:<br> Add LiveCell Shsy5y data:<br> We used all the train, valid and test set of Shsy5y type from LiveCell data as additional training data for Shsy5y <br> Our cleaning method:<br> 1. Remove LIVE CELL Shsy5y images with high FN with IOU 0.5 on training set (After training a model and perform error analysis)<br> 2. Remove cort images with duplicate annotations <br> 3. Sanity check and remove cort images that we feel missing annotations<br> More fine tuning:<br> 1. Image size increased from 800 (shortest edge) to 1024<br> 2. Unfreeze all the backbone (default option of Detectron2 is freezing at the second block)<br> Our best model performance on Public leaderboard after stage 2:<br> - MaskRCNN ResNeSt200: 0.336 </p> <p>Stage 3, we generated pseudo labels on the Semi Supervised dataset and re-trained on new data<br> First, both MaskRCNN ResNeSt200 and PointRend predicted the instances, then a simple ensembling technique was used to combine their predictions. 
The ensembling technique was to match IOU by mask to create different clusters, then within each cluster, used pixel voting to determine which pixel was kept, which was filtered.<br> Then MaskRCNN ResNeSt200, PointRend and MaskRCNN Swin trained on this new pseudo data + original data + LiveCell Shys5y data.</p> <p>At this stage MaskRCNN ResNeSt200 could achieve 0.338 on LB. </p> <p>Stage 4, we generated pseudo labels on the Semi Supervised dataset with MaskRCNN ResNeSt200, PointRend and MaskRCNN Swin coming from stage 3 and used the simple ensembling technique similar to stage 3 to combine the predictions.<br> The new pseudo prediction was added to original data + LiveCell Shsy5y data and we retrained MaskRCNN ResNeSt200 model.</p> <p>At this stage MaskRCNN ResNeSt200 could achieve 0.340 on LB. </p> <p>Then we developed a new ensemble technique to ensemble 2 MaskRCNN ResNeSt200 models trained on round 1 pseudo and round 2 pseudo data.</p> <p>The new ensembling method described with the image below:</p> <p><img src="https://i.ibb.co/5ng4hkd/Ensemble-Flow.jpg" alt="ensembling method"></p> <p>Those only boosted less than 0.1% (still 0.340), we still thought an ensembling submission would be better if a shake-up happens.</p> <p>Stage 5, we trained a machine learning model with a view to filtering FPs. <br> We extract some features from the prediction of stage 4 such as: basic features (instance confidence score, mask pixel scores, box area, mask area, location, size, mean/median pixel values on the original image), morphology features (rotation_angle, solidity, convex hull area, perimeter, …), neighboring features (overlap amount, overlap neighbor count, distance to top k neighbors, neighbor count within a circular area, …). The total number of features was 130. <br> Model: CatBoost<br> Validation: We splitted the valid set (used for validating at the above 4 stages) into 2 half and used the first one to train CatBoost, the second one to validate and hyperparam tuning.<br> Hyperparam tuning method: Using hyperopt lib with objective function was the competition metrics on the second half of the valid set. </p> <p>This stage gave us a score of 0.341 on the leaderboard.</p> <p>Our final submission notebooks:<br> <a href="https://www.kaggle.com/namgalielei/maskrcnn-v21-v16-ensemble-stage2catboost/notebook" target="_blank">https://www.kaggle.com/namgalielei/maskrcnn-v21-v16-ensemble-stage2catboost/notebook</a></p> <p>This is the end of our solution, stay safe and enjoy the New Year’s Eve everyone.</p> <p>Thank you!</p> <p>Thank you!</p>
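<p>The "simple ensembling technique" from stages 3 and 4 above (match instances from two models by mask IoU to form clusters, then use pixel voting inside each cluster) could look roughly like the sketch below. The greedy matching and the voting rule are simplified assumptions for illustration, not the team's exact implementation:</p> <pre><code>import numpy as np

def mask_iou(m1, m2):
    inter = np.logical_and(m1, m2).sum()
    union = np.logical_or(m1, m2).sum()
    return inter / union if union else 0.0

def pixel_vote_ensemble(masks_a, masks_b, iou_thr=0.5):
    # Greedily match instances from two models by mask IoU; for each matched
    # pair, keep the pixels voted by at least half of the members (with two
    # voters this keeps the union). Unmatched instances are kept as-is.
    fused, used_b = [], set()
    for ma in masks_a:
        match = next((j for j, mb in enumerate(masks_b)
                      if j not in used_b and mask_iou(ma, mb) &gt;= iou_thr), None)
        if match is None:
            fused.append(ma)
        else:
            used_b.add(match)
            votes = ma.astype(np.int32) + masks_b[match].astype(np.int32)
            fused.append(votes &gt;= 1)
    fused += [mb for j, mb in enumerate(masks_b) if j not in used_b]
    return fused
</code></pre>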
Sberbank Russian Housing Market
14th place decision (12th after checking LB by kaggle)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Sberbank Russian Housing Market <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>For final scoring I chose what seemed to me to be my two most stable models. Both models are quite simple and didn't use any ensembling or 'magic multipliers', so they were far away from the top of the public LB. The best model put me in 14th place (or 12th after LB checking), and the second one would have given me a place in the Top 20.</p> <p>Here is a description of the second model (the best model is a small modification of this one).</p> <p><strong>Predicting Price per square meter (PSQM)</strong> </p> <p>I made some experiments where I tried to predict PSQM and Price_doc, training models on the same features, and the PSQM models gave a slightly better public score. So I chose PSQM. Moreover, PSQM is more informative on the Russian housing market than the total price of an apartment.</p> <p><strong>Cleaning of the data sets</strong></p> <p>Because I chose PSQM for prediction, I was very careful about correcting mistakes in Full_sq. </p> <p>There were some attempts to restore (from the transactions of the same building) the mistaken and missing values in the features build_year, material and max_floor. There were also some attempts to predict mistaken and missing values in the features num_room, life_sq and state (training XGB on the other non-missing features). But any restoring and predicting only worsened the public score, so I excluded them from the cleaning.</p> <p>I also dropped some rows with very low (&lt;=10000) and very high (&gt;= 600000) values of PSQM.</p> <p>Such cleaning, together with some trivial generated features about apartment conditions and the activity of transactions by month/day of week, gave an LB score of about 0.314.</p> <p>At some point in the competition, the organizers fixed some bad addresses in the data sets and the LB was revised. So I updated the data sets from the bad_address_fix file and was very confused because the model score worsened to ~0.3157. But I decided to use the changed data sets.</p> <p><strong>Cross Validation (CV)</strong></p> <p>Getting a proper CV - LB correlation was one of the important tasks in my solution. I tried the following variants of CV: </p> <p>1) usual k-fold CV (Didn't work properly for feature selection.)</p> <p>2) exclude all ELP (extreme-low-price) transactions + usual k-fold CV. (Perhaps it would have worked to some degree. But because of the huge penalty from ELP transactions in the test set, the local score was almost twice as good as the LB one, and this scared me.)</p> <p>3) CV with time series splits (a variation of KFold: in the kth split, it returns the first k folds as the train set and the (k+1)th fold as the test set, with no shuffling). 
In different splits, the local scores differed a lot from each other, so I rejected this variant.</p> <p>4) exclude (undersample) some ELP transactions + a modification of CV with time-based splits (the set is split k times in the following way: some percentage of transactions before a fixed time stamp for training, some percentage of transactions after that time stamp for predicting).</p> <p>The last variant (5 times, 90% for training before 2014-07-01, 90% for predicting after 2014-07-01) gave a mean score of about 0.305 - 0.310 and more, with an STD of about 0.0015-0.003. </p> <p>It was not perfect and was bad for tuning XGB parameters, but it helped me to select features more accurately.</p> <p><strong>XGBoost as base algorithm</strong></p> <p>I tried several algorithms. But XGBoost doesn't require mandatory preprocessing of the missing values in the input, and it worked with the data quite fast. So I chose it as the base algorithm.</p> <p>As I mentioned above, I didn't have an appropriate CV for tuning XGB parameters. So I spent some submissions to tune the eta parameter and used this tuned value in all later models. The other parameters max_feature / subsample / colsample_bytree were set to 4 / 0.7 / 0.7. The number of boosting rounds was tuned by the built-in CV method of XGB with early stopping on each new data set (without any undersampling).</p> <p><strong>Feature engineering and feature selection (MAIN PART)</strong></p> <p>I tried sequential feature selection on the initial features, but it took a lot of time (for my laptop) and it was not effective.</p> <p>Then I tried to build models excluding whole groups of initial features. The model without the 'population'-group features improved the LB score significantly, so I excluded 'population'-group features from the next modifications of the model. </p> <p>I also supposed that the price of an apartment may depend on the distance to the nearest metro, bus terminal, large roads, railroad stations, ring roads, water reservoirs in its area and other special objects. I began to develop this idea and created new categorical features that give houses the same mark if they are similarly remote from these special objects. I set the accuracy to 0.1 km (to 1 min) for remoteness measured in km (in minutes). The accuracy, as a macro parameter of remoteness, was tuned by local CV.</p> <p>For example, let's look at the new categorical feature 'ID_metro_metro_km_avto', which is a concatenation of 'ID_metro' converted to a string and 'metro_km_avto' rounded to one decimal and converted to a string. This variable marks the apartments equally if their houses are equally remote from the metro with an accuracy of 0.1 km.</p> <p>Here is the python code of the new feature 'ID_metro_metro_km_avto': <code>train['ID_metro_metro_km_avto'] = train['ID_metro'].astype(str) + train['metro_km_avto'].round(1).astype(str)</code></p> <p>Or, in a similar way, we create the new feature 'ID_railroad_station_walk_railroad_station_walk_min' with an accuracy of remoteness from the railroad station of 1 minute: <code>train['ID_railroad_station_walk_railroad_station_walk_min'] = train['ID_railroad_station_walk'].astype(str) + train['railroad_station_walk_min'].round(0).astype(str)</code></p>
The resulting model with the new features from this part improved my public LB score by about 0.004, and I chose it for final scoring.</p> <p><strong>Best model</strong></p> <p>I generated more new features by concatenating 'sub_area' with features from the 'share' group ('green_part_500', 'prom_part_500' and others like these). But there was not much time (about 3 hours to the end) for careful selection on CV. So I just added them to the resulting model from the previous part, sorted all features by their XGB importances (get_fscore) and selected the features with the best importances. This model improved the private LB score a little.</p> <p><strong>Magic numbers</strong> </p> <p>I started to work closely on the competition task only at the beginning of June. By that moment, there were a lot of public scripts which widely used so-called 'magic numbers'. I was tempted by these scripts because one of them (after some little changes in the code) put me in 44th position on the public LB. I thought that was a key to success and began to research the nature of these 'magic numbers'. But it turned out these numbers were mostly tuned by their authors by probing the LB (not by researching the macro data).</p> <p>I didn't manage to solve the macro data task (maybe because of low skill in researching time series data, or maybe because of time trouble), so I couldn't properly interpret the 'magic numbers' for myself. So I decided to drop the idea of using them in my models. </p> <p>But after the finish, the winners posted that they used 'magic numbers'. So I also tried to search for them. Probing the LB 5 times, I found a good 'magic multiplier' for the target value (in the training set) and improved my post-deadline score by about 0.0005. So this might have given me a place closer to the top.</p> <p><strong>I would like to thank the organizers for an interesting competition! And thanks a lot to the competitors for the valuable discussions!</strong></p>
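<p>The CV scheme from variant 4 above (several 90% subsamples of transactions before 2014-07-01 for training, validated on subsamples of transactions after that date) could be sketched in pandas as follows; the column name and the exact sampling details are assumptions for illustration:</p> <pre><code>import numpy as np
import pandas as pd

def time_split_cv(df, n_splits=5, cutoff="2014-07-01", frac=0.9, seed=0):
    # Repeated time-based split: a random 90% of pre-cutoff rows for training,
    # a random 90% of post-cutoff rows for validation.
    rng = np.random.default_rng(seed)
    ts = pd.to_datetime(df["timestamp"])
    before = df.index[ts &lt; cutoff].to_numpy()
    after = df.index[ts &gt;= cutoff].to_numpy()
    for _ in range(n_splits):
        tr = rng.choice(before, size=int(frac * len(before)), replace=False)
        va = rng.choice(after, size=int(frac * len(after)), replace=False)
        yield tr, va

# usage sketch (df is the Sberbank train dataframe with a 'timestamp' column):
# for tr_idx, va_idx in time_split_cv(df):
#     train on df.loc[tr_idx], evaluate on df.loc[va_idx]
</code></pre>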
Sberbank Russian Housing Market
15th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Sberbank Russian Housing Market <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>My final submit was an average of three models. Two of them I've made myself (weights 0.58 and 0.16) and the third is my version of one of the public kernels (weight 0.26): <a href="https://www.kaggle.com/aharless/latest-iteration-in-this-silly-game/code">https://www.kaggle.com/aharless/latest-iteration-in-this-silly-game/code</a>. You can access to all notebooks here: <a href="https://github.com/Danila89/sberbank_kaggle">https://github.com/Danila89/sberbank_kaggle</a>. I will describe ideas I found and utilized in my solution.</p> <ol> <li>I made separate models for Investment and OwnerOccupier. The Owner model was fitted on the product_type=='OwnerOccupier' train subset and was used to predict product_type=='OwnerOccupier' test subset. The Investment model was fitted on the entire train dataset and was used to predict product_type=='Investment' test subset. Now it seems that may be I should have fitted Investment model on product_type=='Investment' train subset (see <a href="https://www.kaggle.com/c/sberbank-russian-housing-market/discussion/35539">https://www.kaggle.com/c/sberbank-russian-housing-market/discussion/35539</a>), but it had given lower Public score and I gave up with this. The separate approach is used in model_1_price_per_meter.ipynb and in model_2_price.ipynb and gives 74% of the final submission.</li> <li>I fitted price per square meter (price_doc/full_sq) with my first model (model_1_price_per_meter.ipynb, weight=0.58). Price per meter is a better target because it is a common measure of the flat's quality, average price per meter is common measure of the sub_area expensiveness, if averaged within a city or a country it could be a macroeconomic factor as well, etc. Finally it is much more correlated with our features (see <a href="https://www.kaggle.com/daniel89/may-be-price-doc-full-sq-is-a-better-target">https://www.kaggle.com/daniel89/may-be-price-doc-full-sq-is-a-better-target</a>). In this dataset we had some issues with full_sq feature both in train and in test. We had a lot of zero or unlikely small full_sq. It was very important to filter that before calculating price per meter, I've struggled with this for some time.</li> <li>I found that predictions of models based on price and ones based on price per meter are different enough and are good for averaging. My second model (model_2_price.ipynb, weight=0.16) is almost the same as the first model, but fits price_doc. The third model is predicting price_doc as well.</li> <li>I have had a huge trouble with cross-validation. Bravo for those who managed to overcome this (see <a href="https://www.kaggle.com/c/sberbank-russian-housing-market/discussion/35684">https://www.kaggle.com/c/sberbank-russian-housing-market/discussion/35684</a>), I did not. Instead I decided to average across many-many estimators that were in many cases overfitted. In the first model I'm averaging 400 xgboosts, in the second – 100. The typical standard deviation for the price prediction for a particular apartment by these 400 xgboosts was roughly 3e5 which I feel is very high. 
It is essentially like the Random Forest approach, where you do not tune the trees but average across many of them. Here the xgboosts were the trees ;)</li> <li>I didn't have any success in solving the 'time-series' part of the problem. My models do not use macro data at all (except model_3, which is from public kernels). Instead I use the thing almost everybody used – the quarter scaling coefficients (I took them from here <a href="https://www.kaggle.com/aharless/exercising-the-exorcism">https://www.kaggle.com/aharless/exercising-the-exorcism</a>).</li> <li>My models use cleaned data; the cleaning was done by this script: <a href="https://www.kaggle.com/keremt/very-extensive-cleaning-by-sberbank-discussions">https://www.kaggle.com/keremt/very-extensive-cleaning-by-sberbank-discussions</a>.</li> </ol> <p>Thank you, colleagues, for sharing ideas, for the collaboration and for an interesting competition. Any comments, questions and suggestions are appreciated.</p> <p>Thank you very much. You are getting your second medal as well in a few days, congratulations to you too :)</p> <p>Hi, thank you for the nice question. The model_1_price_per_meter.ipynb is the main model I worked on during the entire competition. However, I found that its predictions are quite different from the predictions of the models using raw prices (model_2 and model_3), so it makes sense to average across them. The weights are roughly based on the models' performance and on how much I trusted each model. I didn't tune the weights but picked them intuitively. I think it is absolutely possible to get a slightly higher score by tuning the weights against the Public LB, but I did not have submissions left for this.</p> <p>Thank you. I had no chance to tune xgboost because of the absence of CV, as you said. I decided to average across many overfitted xgboosts and I did not really tune them - as if it were a RandomForest of xgboosts. You can check my code - the xgb parameters are almost default-like. The important points were max_depth=7 to make them overfitted, and subsample and colsample_bytree less than one to make the xgboosts different.</p>
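<p>For readers who want to reproduce the "RandomForest of xgboosts" idea described above, here is a minimal Python sketch. The column names, hyperparameters and number of models are illustrative assumptions, not the exact values used in the linked notebooks, and it assumes full_sq has already been cleaned of zero/implausible values.</p> <pre><code>import numpy as np
import xgboost as xgb

def fit_predict_average(X_train, full_sq_train, price_train,
                        X_test, full_sq_test, n_models=50):
    # Target: log1p of price per square meter, so averaging happens in log space.
    y = np.log1p(price_train / full_sq_train)
    preds = np.zeros(X_test.shape[0])
    for seed in range(n_models):
        model = xgb.XGBRegressor(
            max_depth=7,           # deliberately deep, so each model overfits a bit
            subsample=0.8,         # row subsampling makes the models differ
            colsample_bytree=0.8,  # column subsampling adds more diversity
            n_estimators=300,
            learning_rate=0.05,
            random_state=seed,
        )
        model.fit(X_train, y)
        preds += model.predict(X_test)
    preds /= n_models
    # Invert the log and multiply back by the flat's area to get a price_doc estimate.
    return np.expm1(preds) * full_sq_test
</code></pre>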
Sberbank Russian Housing Market
1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Sberbank Russian Housing Market <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Yes, my models were also different for each product type. </p> <p>I developed models for the OwnerOccupier product type first. When I started to work on the Investment product type, I initially copied the OwnerOccupier model, but in the process of development many features were added/removed and parameters were also tuned differently - so in the end the models diverged quite far from each other. Only some core features and main ideas (like using a log transformation on prices) stayed the same.</p> <p>Our best submission was an ensemble of several models from our individual solutions, which we'll describe in detail below. We merged different subsets of predictions separately - e.g. each product type (Investment and OwnerOccupier) was predicted by different models.</p> <p>Already at the moment of forming our team, an ensemble submission from our models (as it turns out now) had a private LB score of 0.30704, which was enough for 1st place. The public score, however, was enough only for 3rd place, so we kept trying hard to find any possible opportunities for improvement. Having two good models at hand helped a lot here. By analysing the residuals of our predictions we found and fixed weak spots in our models and also mistakes in the data - all of that helped us reach 2nd place on the public LB and allowed us to wait for the private LB reveal with hope for 1st place. Which turned out to come true.</p> <p>Next we'll describe the core concepts and main ideas from our individual solutions and approaches.</p> <h2>alijs approach</h2> <p>I used LightGBM for all my models. A very early version of it was also published as a Kaggle kernel 2 months ago: <a href="https://www.kaggle.com/alijs1/lightgbm-starter-0-33342/code">https://www.kaggle.com/alijs1/lightgbm-starter-0-33342/code</a></p> <p>Quite early in this competition I discovered several important insights:</p> <ol> <li><p>The Investment and OwnerOccupier product types were very different, and using separate models for each of them gave better results.</p></li> <li><p>My CV for the Investment type model was almost unusable, but CV for the OwnerOccupier product type correlated much better with the public score (not perfect, but quite usable for selecting features). I used 5-fold CV with random shuffling on the OwnerOccupier product type, comparing results for different seeds.</p></li> <li><p>Data cleaning was very important. I used two levels of data cleaning - some models used data with slight cleaning, others with more cleaning applied.</p></li> <li><p>I removed the full_sq feature, because my models considered it too important, yet it contained many mistakes. Instead of using full_sq as a feature directly, I added the following features: feature_X = full_sq / mean_full_sq_for_group_X, where X were different categorical features, like num_room, build_year, floor, sub_area, etc. (a code sketch of this appears at the end of this post).</p></li> <li><p>As my CV didn't include any time-based validation, I didn't know how well my solution dealt with the trend/seasonality component, so I used several probing submissions to check the mean level of the public test set. 
That way I discovered that my models' prediction mean was different from the mean of the public test set (in a similar way to the popular public kernels) and that "magic numbers" helped to correct the difference. But unlike the public kernels I was using 2 models - so I also discovered that the magic numbers for the Investment and OwnerOccupier product types were actually quite different - 1.05 and 0.9 respectively.</p></li> <li><p>As I had a usable CV only for the OwnerOccupier product type and those models were stronger, I made predictions for Investment products using my OwnerOccupier model and used those predictions as a feature for the Investment models.</p></li> </ol> <p>All those insights helped me get to 1st place in the middle stage of the competition. I stayed at the top for a week or more, tried a lot of other things, but didn't succeed in any significant further improvements and started to lose my position.</p> <p>When I fell down to 4th place, I decided to look for a team member and invited Evgeny (which turned out to be a really good decision). His models' strongest part turned out to be Investment (which was the weakest part of my solution), so merging our models gave a good boost to our score.</p> <h2>Evgeny approach</h2> <p>I also used LightGBM, as it is much faster and performs better than xgboost now.</p> <p>My approach was very different from alijs's, and that helped us when we combined our models.</p> <p>Some ideas to start with:</p> <ul> <li>I didn't predict full apartment prices directly, but prices per 1 square meter.</li> <li>I guessed that the product_type "OwnerOccupier" meant direct sales from developers of new buildings, and the "Investment" type was the usual second-hand market, where not all sales were real investments. Those are very different markets, and I made separate models for them.</li> <li>There are many bad low prices in the data. As they are related to sellers' tax-avoidance purposes and we didn't have enough information about the sellers to detect them, I decided to drop them. Fake prices were not only prices of 1-2-3 million: in some areas 2 or 3 million were real prices, in others even 10 million was much lower than market. I used my investment model to find them (I assume that most non-investment prices were real) and dropped data where prediction and target differed a lot (2 times or more). I repeated the process once more after the first iteration to clean further. In total I dropped nearly 10% of the train set as data with bad prices. If you kept them during training, you could get two types of trouble: <ol><li>during local validation, better or worse accuracy on the low-price data could mask changes in accuracy on the good data, and you could miss something useful when fine-tuning the model.</li> <li>boosting in "regression" mode is based on the mean target value within each split of the data. If you kept the low prices, they appeared in different splits randomly; for example, when a specific split got a lower-than-average share of low prices, your prediction could end up higher than the average level. If you removed the bad prices, you excluded this random factor and got better generalization and accuracy. Practice confirmed this - my investment part had an even better score on private compared to public.</li></ol></li> </ul> <p><strong>Data cleaning</strong></p> <p>I fixed some errors in the features with squares, build year and max floor, but nothing special. 
I also filled some features like build year and num_room based on other apartments with the same "address".</p> <p><strong>Main part</strong></p> <p>I saw two different tasks in this competition:</p> <ol> <li><p>to determine the macro influence on prices</p></li> <li><p>to determine the influence of specific locations, building and apartment conditions on prices</p></li> </ol> <p>The first task I solved partially - I used my investment model as a way to find the macro components of the training periods, which helped me to scale all periods to one level (it was not precise, but much better than nothing). After I dropped the bad prices and scaled the rest by period, I gained the ability to use cross-validation (8 folds, split by half-year). The CV performed well enough to tune the model - the SD was between .007 and .008. The CV score was around .09 for the non-investment model and around .13 for the investment part. The difference from the leaderboard scores was just the huge penalty from bad prices that hit all participants.</p> <p>I didn't manage to solve the forecasting part of the macro task and just used downscale coefficients (they were called "magic", but they were just macro adjustments to get the average level of the test prices). As I scaled train prices up in the early train periods (according to macro) and dropped bad prices, my average level of predictions was high and I had to use lower downscale coefficients than those used in public scripts. I tried a few different coefficient values on the leaderboard, found good average levels for the investment and non-investment parts, and after that scaled my predictions to those average levels without additional probing of the LB.</p> <p>I tried to find specific coefficients for each test month by probing the LB. For the non-investment part it was better to reduce the discount in the first two months and to increase the discount in January 2016. It got me a .0005 improvement before we merged into a team, but when we tried it in our final ensemble the impact was much lower - less than .0001. For the investment part similar changes made the score worse. I tried to build a model to get a trend similar to the coefficients I got from LB probing, and found that we did not have enough macro data. Shifted and scaled oil prices together with the USDRUB rate and cpi-ppi were very similar, but additional data would have been needed for good generalization and precision.</p> <p>For the investment part I used all data for training, but predicted and checked the CV score only on the investment data. My validation approach helped me to select the right features and create new ones. My models had fewer than 50 features.</p> <p>I had one early version that performed better on the public LB and another that was better on my CV, but a little bit worse on the public LB. We used the latter version as part of our alternative second final submission, and it performed better on the private LB (trust your CV).</p> <p>For the non-investment part I used a 2-stage approach - first I used all data for training and predicted the non-investment part for train and test. After that I used another model based on the non-investment data plus the prediction from the 1st stage.</p> <p>For the non-investment part I also used additional price scaling over the train periods. Prices of most new buildings grew much faster than the general level, so I decided to calculate additional smoothed scale factors for each non-investment address with more than 30 apartments in train. My non-investment part was weaker than the investment part. 
I think my approach was not precise enough, as we did not have much data, but I'm sure it could be a good basis for further development.</p> <h2>Basmannoe-Savelki &amp; Kuncevo - hidden troubles</h2> <p>When we compared our predictions, we found a few addresses with big differences. One of them was a group of apartments in the Basmannoe area with kremlin_km==2.90274. Our mean predictions were ~150000 and ~200000 rub per sqm. As these apartments were so close to the Kremlin, they should have had high prices. But a probed LB submission showed the true level was around 90000 per sqm, where our score improved by .002. It was a little upsetting, and we started to investigate. The geo coordinates from Chippy's script pointed to a park with no buildings at all. Then we tried to find apartments with the same full_sq (the test set had 2-digit precision) and found that the apartments at Savelki and at both Krukovo addresses (near Savelki) were absolutely the same. All apartments at Krukovo were the result of the fix of the Tverskoe issue. As Anastasia wrote that for some properties the addresses were approximate, we guessed that all 4 addresses are actually one and the same, and changed the address and other features to Savelki. When we built a separate model for this place, we got .0023 on the public LB, which was quite a lot for 22 apartments.</p> <p>After that we checked most of the non-investment addresses without previous history in the train set. Next were 10 apartments at Kuncevo (.0006), and a few others with smaller improvements. In total we got around .004 from this type of correction.</p> <h2>Main take-away from this competition</h2> <p>This was the first experience of competing as a team for both of us, and it showed that having two strong and diverse models gives a good platform for further improvements, which turned out to be the key factor for 1st place here.</p>
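<p>A minimal sketch of the full_sq ratio features from the alijs approach above (feature_X = full_sq / mean_full_sq_for_group_X). The DataFrame and column names are assumptions for illustration; in practice the group means could be computed on train only or on train and test combined.</p> <pre><code>import pandas as pd

def add_full_sq_ratio_features(df, group_cols=('num_room', 'build_year', 'floor', 'sub_area')):
    # Replace direct use of the error-prone full_sq with relative features:
    # full_sq divided by the mean full_sq of apartments in the same group.
    out = df.copy()
    for col in group_cols:
        group_mean = out.groupby(col)['full_sq'].transform('mean')
        out['full_sq_ratio_' + col] = out['full_sq'] / group_mean
    return out
</code></pre>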
Sberbank Russian Housing Market
21st place submission
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Sberbank Russian Housing Market <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Without looking, I'm pretty sure the imputed ones were worse/more biased, though I wasn't overly bothered by this as it added more different models to stack over.</p> <p>Okay, not a winning script (very keen to see the top scripts) but I hope some find it useful!</p> <p>Firstly, thank you all for a fun and challenging first competition. Despite the final position I do think luck played a large part here, though I was true to my desire not to overfit the leaderboard at any point (aside from the magic number script I submitted last week, my highest position was about 1,300th, with a score of ~0.315). As this was my first competition it was a real challenge to juggle the many issues the dataset threw up, particularly as I was building out a code base from scratch for all the models, CV and stacking (in R). Here is what I did:</p> <p>• <strong>Data cleaning/imputation:</strong> went back to the drawing board on this many times. Did a variety of things, from flagging outliers to cleaning obvious errors manually, and at times imputing all missing data for key features (mainly internal) using missForest. Certain key features were encoded in different ways and new ones created based on the location data @Chippy kindly provided. Spent some time trying to model the dodgy house prices (i.e. tax-dodge prices that were on average 70% lower than their fair values) using classification models, to then feed the probability in as a meta-feature.</p> <p>• <strong>Base models:</strong> a mix of ‘linear’ models (glmnet, mars, svm), ranger (fast random forest implementation), xgboost on all features (with and without NAs), as well as weaker models which were feature restricted (i.e. location model, internal, area, partial least squares on all café data, pca on all sub area data, kNN). In hindsight, for the stacking the models were not diverse enough, though they did fit on a range of target variables: price, log price, log price per sqm, cpi adjusted etc… Used per-full_sq models as it became clear early on that the most dominant feature was floor space. Tried clustering with limited success.</p> <p>• <strong>CV:</strong> probably the hardest part. Ultimately defined 6 CV ‘regimes’ for the data, which varied from stratifying it based on sub_area and productType, to downsampling dodgy properties (thanks <a href="/raddar">@raddar</a>), to removing all data before 2013 and anything dodgy in the target variable. </p> <p>• <strong>Stacking:</strong> Fit all base models across all regimes and stacked across all regimes using a variety of methods - probably the weakest area due to inexperience. Linear stacking (mars, glmnet, lm) seemed to perform the best, though I was unsure how it would generalise. Tree stacking seemed to overfit and fwls wasn’t successful. Did spend a fair amount of time stacking as it became clear lots of the models were very unstable, despite being correlated (i.e. 
the price prediction for a single property varied quite a lot per regime).</p> <p>• <strong>Final submission:</strong> in the end just blended the stacking models that were robust across all regimes, with a slight bias down due to the inability to capture the time component reliably. I also submitted a non-biased-down entry to try to be true to my local CV performance. The idea was to be as robust to the noisy data as possible.</p> <p>• Note on the time component and macro data: I had no luck modelling this reliably, and given the noisy nature of financial data I wasn’t convinced I could capture it reliably enough to trust. The only real final use of macro data was lagging cpi and rent prices of 2-bed apartments (and taking the geometric average of these as a feature), which seemed to capture the time component of some major regions (once adjusting for floor space) but wasn't stable on the public leaderboard. As painful as it was to just bias all submissions down slightly, I hoped this would be more robust than continuing to probe the leaderboard. </p> <p>I was undoubtedly lucky with my final submission, as I submitted my second-best model, though I had ~10 that would have scored in the top 100. In summary, I learnt an awful lot, with even more learning points to consolidate before the next competition!</p> <p>Comments welcome!</p> <p>Cheers,</p> <p>Mark</p>
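<p>The stacking step above was done in R (mars/glmnet/lm). Purely as an illustration of the general out-of-fold stacking idea it describes, here is a hedged Python sketch with a ridge meta-model; it assumes numpy arrays and sklearn-style base models, and the exact regimes, base models and stacker used in the write-up differ.</p> <pre><code>import numpy as np
from sklearn.base import clone
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def stack_linear(base_models, X, y, X_test, n_splits=5):
    # Out-of-fold predictions of each base model become the features
    # of a simple linear meta-model.
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    oof = np.zeros((len(y), len(base_models)))
    test_preds = np.zeros((X_test.shape[0], len(base_models)))
    for j, base in enumerate(base_models):
        for tr_idx, va_idx in kf.split(X):
            m = clone(base)
            m.fit(X[tr_idx], y[tr_idx])
            oof[va_idx, j] = m.predict(X[va_idx])
        full = clone(base)
        full.fit(X, y)
        test_preds[:, j] = full.predict(X_test)
    stacker = Ridge(alpha=1.0)
    stacker.fit(oof, y)
    return stacker.predict(test_preds)
</code></pre>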
Sberbank Russian Housing Market
22nd Code Sharing
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Sberbank Russian Housing Market <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>It's the division data from here: <a href="https://www.kaggle.com/jtremoureux/administrative-divisions-of-moscow">https://www.kaggle.com/jtremoureux/administrative-divisions-of-moscow</a> However, I didn't use this data set in my final submission (it improved neither CV nor LB). You can just comment out the importing code.</p> <p>Welcome to my GitHub for my main code for this competition: <a href="https://github.com/LenzDu/Kaggle-Competition-Sberbank">https://github.com/LenzDu/Kaggle-Competition-Sberbank</a></p>
Sberbank Russian Housing Market
72nd place witches' brew
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Sberbank Russian Housing Market <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>So posting solutions seems to be the thing now. But the problem is, I'm not really sure what's in our solution. It was created largely by an ad hoc iterative ensembling process that fit to the public leaderboard, and tracing it all would be a project in itself. A little more eye of newt here, does it improve our score? Some toe of frog there? How about fillet of a fenny snake? There were magic multipliers and sorcerous shift factors, and some judicious squeezing down of extreme predictions. Arithmetic means, logarithmic means, harmonic means, some of them in Python, some of them in R, some of them in Excel.</p> <p>I can say this:</p> <ul> <li>About 5% of our solution was an Extra Trees model which <a href="https://www.kaggle.com/lujing">Jing Lu</a> developed at the last minute and which I know nothing about except that it raised us 20 rank positions on the private leaderboard and could have raised us at least another 15 if we had weighted it more heavily.</li> <li>About 1% of our solution was <a href="https://www.kaggle.com/aharless/wls-and-small-xgboost-with-macro-adjustments">the model I developed</a> (but run at home with a lower learning rate), where I normalized price by CPI, used time-weighted least squares to create features, fit them using XGBoost (time-weighted and with downweighting of small even millions), and then used a simple macro model to do month-by-month adjustments to the results.</li> <li>About 2% (but hard to say exactly, because I took 5% but only when the results differed from ours by more than 5%, and then more stuff was added afterward) was from <a href="https://www.kaggle.com/mwaddoups/i-regression-workflow-various-models">a very old kernel</a> by Mark Waddoups.</li> <li>There were various other contributions from publicly available kernels whose individual public LB performance seemed unimpressive.</li> <li>A significant portion was from a neural network model that was apparently Jing's tuning of wti200's kernel, but with predictions shifted.</li> <li>A large part was the usual combinations of publicly known XGB models such as found in some of my public kernels, of which probably several variations made it in.</li> <li>And there was other stuff that my teammates (Jing Lu and <a href="https://www.kaggle.com/sidharthkumar">Sidharth Kumar</a>) did before we got together, which I can only guess about.</li> </ul> <p>"One of the old submissions (one month ago) already has a private score 0.314 (top 2 among all our submissions). It basically means we can just use that submission and do nothing for the rest of the month and still have a similar ranking."</p> <p>But it's not clear that any testing strategy would have selected the right submission a month ago. And if we had selected it at the time, I think we could have improved on it just as we improved on other selections over the path we actually followed. So I don't think the month was wasted. We made some wrong turns, partly because we were focusing too much on the public LB and partly just due to randomness, but our general sense of direction was approximately right. 
It's clear that the general trend in our private LB scores has been upward all along.</p>
Sberbank Russian Housing Market
9th solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Sberbank Russian Housing Market <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks!</p> <p>Thanks to Kaggle for hosting this great competition, and thanks to all Kagglers for their great contributions. Here are our solutions:</p> <h1>🐳鲲's approach</h1> <p>1.<strong>Feature engineering</strong> <br> &nbsp;&nbsp;&nbsp;Here is my <a href="https://www.kaggle.com/nickycan/feature-engineering-public-6th-private-9th">feature engineering kernel</a>; it contains all of my feature engineering work (missing value imputation, creating new variables, adjusting weird points...). </p> <p>2.<strong>Ensemble-1: Trend-adjusted model to simulate the magic number</strong> <br> &nbsp;&nbsp;&nbsp;As we all know, there's a price drop during 2015/06-2015/12, and many people used a magic number to adjust their predicted Y; that's a problem and can cause overfitting. </p> <p>&nbsp;&nbsp;&nbsp;I used three variables (micex_rgbi_tr, gdp_quart_growth, oil_urals*gdp_quart_growth) from the macro dataset to simulate the effect of the magic number, which should generalize better. Here is a simple description of how I used them (a code sketch follows at the end of this post). </p> <ul> <li>use MinMaxScaler to scale "micex_rgbi_tr"</li> <li>use log1p to transform it</li> <li>divide Y by (1 + transformed_micex), and this is my new Y</li> <li>I use this Y to train my new model, and do the same for the other two variables; then I use (1 + transformed_micex) to transform the predicted test Y back</li> <li>then I give them each 1/5 weight and ensemble them with my original model.</li> </ul> <p>3.<strong>Ensemble-2: Remove bad points to adjust the former model</strong> <br> &nbsp;&nbsp;&nbsp;I trained a simple xgboost model and predicted y-hat, then used abs(y-hat minus y)/y to compute a diff and removed bad points with a big diff. <br> &nbsp;&nbsp;&nbsp;I used this new training dataset to train a model in the same way as <strong>Ensemble-1</strong> and ensembled it with "Ensemble-1"'s result. </p> <p>4.<strong>"Ownership" model to adjust the former model</strong> <br> &nbsp;&nbsp;&nbsp;This is the final part, which let us jump into the top 10. I found there is a significant distribution difference between 'Investment' and 'Ownership', and the 'Ownership' data has better quality than the 'Investment' data. I also found that feature engineering gave a significant improvement on the 'Ownership' data but not on the 'Investment' data, so I trained an 'Ownership' model and ensembled it with the Ownership part of my former model.</p> <h1>Jiani's approach</h1> <p>I used xgboost and a Keras NN for this competition. I did not use the “magic feature” to tune down the price; I think I found a reasonable method to control it. So here is a general description of my method. I am also going to upload some key kernels later. </p> <ol> <li>At the beginning, I used Reynaldo’s kernel <a href="https://www.kaggle.com/reynaldo/naive-xgb">https://www.kaggle.com/reynaldo/naive-xgb</a> as my baseline model. I found that the predictions were higher than the actual prices, because there is a house price drop in 2015. I did not use the magic feature to tune down price_doc; instead, I found that gdp_quart_growth from the macro dataset can help control it. 
I used predicted price * (gdp_quart_growth + 100) / 100 to get the final predictions (gdp_quart_growth was merged onto the test data by timestamp). It improved the public LB and was better than only using the “magic feature” on the public LB.</li> <li>My second model used a Keras NN. I selected some features from the macro data which reflect the downward economic trend (found by plotting the data). I did a train and test data cleaning step before applying the model. (I will upload this to the kernel later.) After I combined the train and test data with the selected macro data on timestamp, I selected 100 features prioritized by xgb for the NN model. My NN model is a very simple 4-layer network with “relu” activations. I applied a log transformation to the prices to make them approximately normally distributed. Then I also applied the StandardScaler to the training and testing datasets and the y targets.</li> <li>I also divided the training data into two datasets, 2011-2013 and 2014-2015, and trained two xgb models, respectively. </li> <li>My final step before merging was combining all the predictions together with different weights (the weights were found by guessing and test submissions). </li> </ol> <p>The above methods are what I used before I merged with 🐳鲲. Thanks 🐳鲲 for inviting me to team up. My ranking before merging was 15th and 🐳鲲's was 14th. After we merged, we jumped to 9th place. </p> <p>Then, based on 🐳鲲’s suggestion, I removed some rows in the training dataset that were hard to predict, mostly from Investment. This helped me improve the LB again. My final best scores were 0.31246 (private LB) and 0.30891 (public LB) before merging.</p>
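<p>A minimal sketch of the Ensemble-1 trend adjustment described above, for one macro series. It assumes pandas DataFrames train/test onto which the macro column micex_rgbi_tr has already been merged by timestamp; variable names are illustrative.</p> <pre><code>import numpy as np
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()
train['micex_scaled'] = scaler.fit_transform(train[['micex_rgbi_tr']]).ravel()
test['micex_scaled'] = scaler.transform(test[['micex_rgbi_tr']]).ravel()

# log1p transform of the min-max scaled macro series
train['micex_t'] = np.log1p(train['micex_scaled'])
test['micex_t'] = np.log1p(test['micex_scaled'])

# New target: price deflated by the transformed macro factor
y_adjusted = train['price_doc'] / (1.0 + train['micex_t'])

# ... fit any model on (features, y_adjusted), then re-inflate the test predictions:
# price_pred = model.predict(X_test) * (1.0 + test['micex_t'])
# Repeat with the other macro series and blend the resulting models with the original one.
</code></pre>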
Second Annual Data Science Bowl
11th Place Quick Summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Second Annual Data Science Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>@DavidGbodiOdaibo, there wasn't any specific intuition about three regions. In fact, I would have preferred using 8 regions, as that would have given decent spatial resolution using only the short axis. However, when I went to more than three regions, overfitting overwhelmed any potential gains from the increased resolution.</p> <p>If you can read Lasagne, the actual net is below. The 62 input channels are 30 time slices for the central region, 15 for each of the end regions (randomly selected even or odd time points), plus the spacing between the end slices and the central slice. </p> <p>With enough data, I suspect you could get respectable results with extensions to this method (more regions etc.), since I was limited by overfitting. However, segmenting the individual slices is really a better approach, since then you get visualization of the volumes as well.</p> <pre><code>layers = [
    LF(InputLayer, shape=(None, 62, SIZE, SIZE), layer_name=&quot;images&quot;),
    LF(BatchNormLayer, nonlinearity=None),
    # LF(Conv2DLayer, num_filters=16, filter_size=(3,3), nonlinearity=leaky_rectify),
    LF(Conv2DLayer, num_filters=16, filter_size=(3,3), nonlinearity=None, b=None),
    LF(BatchNormLayer, nonlinearity=leaky_rectify),
    # LF(Conv2DLayer, num_filters=32, filter_size=(3,3), nonlinearity=leaky_rectify),
    LF(Conv2DLayer, num_filters=32, filter_size=(3,3), nonlinearity=None, b=None),
    LF(BatchNormLayer, nonlinearity=leaky_rectify),
    # LF(Conv2DLayer, num_filters=48, filter_size=(3,3), nonlinearity=leaky_rectify),
    LF(Conv2DLayer, num_filters=48, filter_size=(2,2), nonlinearity=None, b=None),
    LF(BatchNormLayer, nonlinearity=leaky_rectify),
    LF(MaxPool2DLayer, pool_size=(3,3), stride=(2,2)),
    # LF(Conv2DLayer, num_filters=96, filter_size=(3,3), nonlinearity=leaky_rectify),
    LF(Conv2DLayer, num_filters=96, filter_size=(2,2), nonlinearity=None, b=None),
    LF(BatchNormLayer, nonlinearity=leaky_rectify),
    LF(MaxPool2DLayer, pool_size=(3,3), stride=(2,2)),
    # LF(Conv2DLayer, num_filters=192, filter_size=(3,3), nonlinearity=leaky_rectify),
    LF(Conv2DLayer, num_filters=192, filter_size=(2,2), nonlinearity=None, b=None),
    LF(BatchNormLayer, nonlinearity=leaky_rectify),
    LF(MaxPool2DLayer, pool_size=(3,3), stride=(2,2), layer_name=&quot;jointOut&quot;),
]

for name in [&quot;systole&quot;, &quot;diastole&quot;]:
    layers += [
        LF(Conv2DLayer, incoming=&quot;jointOut&quot;, layer_name=name+&quot;In&quot;,
           num_filters=384, filter_size=(3,3), nonlinearity=leaky_rectify),
        LF(MaxPool2DLayer, pool_size=(3,3), stride=(2,2)),
        # LF(DropoutLayer, p=0.5),
        LF(DenseLayer, num_units=OUTPUT_SIZE, nonlinearity=sigmoid),
        LF(ReshapeLayer, shape=([0],1,OUTPUT_SIZE), layer_name=name+&quot;Out&quot;)
    ]

layers += [
    LF(ConcatLayer, incomings=[&quot;systoleOut&quot;, &quot;diastoleOut&quot;], axis=1, layer_name=&quot;output&quot;)
]
</code></pre> <p>@DavidGbodiOdaibo, yes, the images were scaled so that the pixels in each image were the same size.</p> <p>I've been meaning to write up something about my approach, but I haven't been able to find the time, so here's a quick summary. </p> <p>But first: congratulations to the winning teams. 
Really impressive work!</p> <p>Onward, to my approach:</p> <ol> <li><p>Crop and downsample images to a common resolution and size (84x84). I played around with various complicated cropping strategies, but ended up just using center cropping.</p></li> <li><p>Divide the images from each heart into 3 sets: one from the bottom third, one from the middle third and one from the top third of the heart.</p></li> <li><p>Train a neural net that took 3 images at a time, one randomly selected from each set, to predict both CDFs directly. These images were fed in at either the odd or the even time steps, randomly selected. This gave a lot more effective training samples than I otherwise would have had. I also used translations and flips to augment the data.</p></li> <li><p>Compute the prediction for each heart by randomly selecting many different combinations of images from the three regions, as well as different translations and flips, and combining the resulting CDFs. I got slightly better results by averaging the inverse CDFs than by averaging the CDFs directly.</p></li> </ol> <p>That's pretty much it. This had the advantage of being quick to implement and not too computationally expensive: it could easily be run overnight on a 3-year-old MacBook Pro. Unfortunately, I ran into severe overfitting problems when I tried to increase the accuracy of this by moving to more layers or higher resolutions.</p>
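<p>Point 4 mentions that averaging the inverse CDFs worked slightly better than averaging the CDFs directly. Here is one way to implement both, assuming each prediction is the 600-bin cumulative distribution used in this competition's submission format; this is an interpretation of the write-up, not the author's code.</p> <pre><code>import numpy as np

def average_cdfs(cdfs):
    # cdfs: shape (n_predictions, 600); each row is a monotone CDF over volumes 0..599 ml
    return cdfs.mean(axis=0)

def average_inverse_cdfs(cdfs, n_quantiles=1000):
    # Average in quantile space: invert each CDF, average the volume at each
    # quantile level, then turn the averaged quantile function back into a CDF.
    volumes = np.arange(cdfs.shape[1], dtype=float)
    qs = np.linspace(0.001, 0.999, n_quantiles)
    inverse = np.array([np.interp(qs, cdf, volumes) for cdf in cdfs])
    mean_inverse = inverse.mean(axis=0)
    return np.interp(volumes, mean_inverse, qs, left=0.0, right=1.0)
</code></pre>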
Second Annual Data Science Bowl
34th place solution, end-to-end learning with a single neural net.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Second Annual Data Science Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>@rcarson</p> <p>I don't really feel right posting my code since it's pretty much just Marko's Keras tutorial with the changes that I outlined in my post. It should be easy enough to just make the same tweaks. </p> <p>@Alex Risman</p> <p>For the padding, let's say you have an input size of 7x7; without padding your output would be 5x5. So when you try to stack a whole bunch of convolution layers together, you'll keep shrinking your image size before you really want to, and you could miss out on some convolution layers.</p> <p>Combining the outputs was just a simple average. </p> <p>First of all, congrats to the winners and all of the competitors. Lots of really great performances! </p> <p>Also, big big thanks to Marko Jocic, Bing Xu, and the Booz Allen Hamilton/NVIDIA team. I joined this competition with only a month left and relied on those starter scripts and tips really heavily. </p> <p><a href="http://florianmuellerklein.github.io/DSB/">http://florianmuellerklein.github.io/DSB/</a></p>
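<p>To make the padding point above concrete, here is a tiny sketch of the standard output-size arithmetic: without padding a 7x7 input shrinks to 5x5 after a single 3x3 convolution, while 'same'-style padding keeps it at 7x7 so more convolution layers can be stacked.</p> <pre><code>def conv_output_size(input_size, kernel_size=3, padding=0, stride=1):
    # Standard convolution output-size formula.
    return (input_size + 2 * padding - kernel_size) // stride + 1

print(conv_output_size(7, 3, padding=0))  # 5 -> the image shrinks ('valid' convolution)
print(conv_output_size(7, 3, padding=1))  # 7 -> size preserved ('same' padding)
</code></pre>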
Second Annual Data Science Bowl
38th place blog summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Second Annual Data Science Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I used hypercolumns, which I thought was interesting, so I wrote a small blog post on it: <a href="http://saketkunwar.blogspot.com/2016/03/ndsb2.html">http://saketkunwar.blogspot.com/2016/03/ndsb2.html</a></p>
Second Annual Data Science Bowl
3rd place quick summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Second Annual Data Science Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Nothing wrong with a little PR :)</p> <p>Will take a look.. Hard to keep up.</p> <p>@Ben Hamner,<br> I used 5 folds for training so as not to overfit the calibration, and I averaged 5 translations in the predictions. Neither is strictly necessary.<br></p> <p>Training for one full set (no folds) was around 5 hours. </p> <p>Prediction was around 1 hour for 1140 patients (+/- 3 secs per patient).</p> <p>Prediction took so long since every patient has 30 frames and +/- 10 slices, which comes down to roughly 342000 images in total that need to be segmented.</p> <p>@woshialex, First of all congratulations, of course. Yes, I did notice (like you) that simple ensembling did not add much. The model that was best at picking up the outliers did best on its own.</p> <p>But I thought in hindsight that the easy patients could be more accurate with more models. With the pixel-based segmentation I had a good measure of the uncertainty of the prediction, so I knew which patients were easy. Also, last week I thought that other models benefitted more from the 200 extra patients. Anyway.. it's all good now.</p> <p>Wei Dong, Yes, I must publish the source code to win the prize I guess.</p> <p>But I have two other goals. The first is to do something with my hand-drawn labels + images. Perhaps make it a public dataset. I do not know how to go about this and of course they are not 100% correct. But it's a bigger and more challenging dataset than Sunnybrook.</p> <p>The second goal is to make an even more concise example and, if allowed, add it to the MxNet examples.</p> <p>Hello, here is a quick summary of my solution. The thing is that I used only <strong>one</strong> model. This was both cool and stupid at the same time..</p> <p>The basic gist:</p> <ol> <li>Preprocessing</li> </ol> <p>Scale images to real sizes using the pixel areas provided in the DICOM files. Crop the 180x180 center. Use CLAHE on the images to get good local contrast.</p> <ol start="2"> <li>Hand labeling</li> </ol> <p>Label frame 1 and frame 12 for all train patients. I already had a lean-and-mean labeling tool that allowed me to label very fast. The big problem was that I did not know how to label. I tried to label as consistently as possible and then later on adjusted systematic errors with a calibration step.</p> <ol start="3"> <li>Pixelwise segmentation with U-net</li> </ol> <p>Use a U-net for pixel segmentation (LV yes/no): <a href="http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/">http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/</a> I consider this architecture the state of the art in pixel segmentation. I implemented the U-net in MXNET, which was a breeze to work with. Big recommendation. Everything in the U-net paper was useful, except that I was lazy with the weight initialization and relied on Batch Normalization.</p> <ol start="4"> <li>Integration to a volume</li> </ol> <p>After the pixel segmentation I counted the LV pixels and integrated over the slices. This is easier said than done, since many slices were missing, slice location was not 100% dependable, ordering varied, etc. A lot of work went into slice management. 
Pandas was invaluable for this.</p> <ol start="5"> <li>Calibration</li> </ol> <p>After I had the predictions I did a calibration step to correct for systematic labeling errors. I did this by regressing on the residuals with a gradient booster, using some features like age, sex, etc.</p> <ol start="6"> <li>Submission</li> </ol> <p>Use the stdev of the errors (computed with a sliding window over heart size) and plug that into a CDF to generate predictions.</p> <ol start="7"> <li>Conclusion</li> </ol> <p>Performance was only limited by the following issues:<br> - I did not know how to label. I'm not a doctor.<br> - Some labels were provided wrongly (429!!)<br> - Corrupt scans/slices.<br></p> <p>The MAE was roughly 9 ml.</p> <ol start="8"> <li>Thanks</li> </ol> <p>Kaggle and Booz Allen Hamilton for this cool challenge.<br> MxNet<br> The authors of the U-net paper.. <br> Sander Dieleman for pointing it out on reddit.<br></p> <ol start="9"> <li>Shoutout @Leustagos</li> </ol> <p>Leustagos advised me to join a team to get more different models so that I could maybe push for the win. Because of a big ego and blind confidence in my single model, I continued on my own. You will not believe how many times I banged myself on the head last week when I suddenly dropped to #16. In the end I'm glad it all turned out well, but in hindsight you were so right!</p> <p>@liveflow, I also used a binary mask image as the target.</p> <p>However, without good measures this can be very numerically unstable. For instance, when most of the pixels in the target are zero, the net wants to predict all '0' very quickly and does not recover. Mostly I saw my activations explode to float.nan.</p> <p>A few important measures were:</p> <ul> <li>Batch normalization at every layer keeps the activations in check</li> <li>Careful weight initialization. However, this was very hard, so I relied on BN.</li> <li>Crop the images so that the '1' and '0' pixels are more balanced</li> <li>A logistic (cross-entropy) loss instead of an RMSE loss seemed to help too</li> <li>In the end, when my net was optimal, I did not need BN anymore when using a very small batch size</li> </ul>
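<p>To make steps 4 and 6 above concrete, here is a hedged sketch of integrating per-slice LV areas into a volume and turning a point prediction plus an error estimate into the 600-bin cumulative distribution the competition expected. Units and the calibration details are assumptions; the actual pipeline (slice management, residual calibration) is more involved.</p> <pre><code>import numpy as np
from scipy.stats import norm

def volume_from_slices(areas_mm2, locations_mm):
    # Integrate LV cross-sectional areas along the long axis (trapezoidal rule).
    order = np.argsort(locations_mm)
    areas = np.asarray(areas_mm2, dtype=float)[order]
    locs = np.asarray(locations_mm, dtype=float)[order]
    volume_mm3 = np.trapz(areas, locs)
    return volume_mm3 / 1000.0   # mm^3 to ml

def cdf_submission_row(pred_ml, stdev_ml):
    # 600 values: the probability that the true volume is at most v ml, for v = 0..599,
    # using a Gaussian centred on the point prediction.
    v = np.arange(600)
    return norm.cdf(v, loc=pred_ml, scale=stdev_ml)
</code></pre>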
Second Annual Data Science Bowl
75th place with all R and classic techniques
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Second Annual Data Science Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Compared to some of the other contestants, mine was a relatively simple approach - it still took me many days to develop. I decided to stay with R and classic vision techniques for ROI detection, segmentation and LV detection. See <a href="http://ottopdatascience.blogspot.nl/p/blog-page.html">http://ottopdatascience.blogspot.nl/p/blog-page.html</a>.</p>
Second Annual Data Science Bowl
Blog post on how to get 60th place
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Second Annual Data Science Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I described my approach over on <a href="http://jessebuesking.com/articles/machine-learning/neural-networks/2016/03/17/ndsb2-competition">my blog.</a> I was able to get some decent predicted contours like these:</p> <p><img src="http://jessebuesking.com/static/img/ndsb2/16final-smaller.png" alt="predicted contours" title></p> <p>Congrats to everyone who participated, especially the winners! I hope we'll have made a positive impact in cardiology!</p>
Second Annual Data Science Bowl
Blog post on how to get second place apparently
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Second Annual Data Science Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>[quote=Beyond Two Layers;111497]How did the idea? :D It was trial and error? [/quote] Quote from Yann LeCun: <em>Larger networks tend to work better. Make your network bigger and bigger until the accuracy stops increasing. Then regularize the hell out of it. Then make it bigger still and pre-train it (...).</em> <a href="http://fastml.com/yann-lecuns-answers-from-the-reddit-ama/">http://fastml.com/yann-lecuns-answers-from-the-reddit-ama/</a></p> <p>Hey everyone,</p> <p>First off, congratulations to the winners! You did an awesome job, and we still haven't figured out how to get our scores that low! We are looking forward to hearing how you did it. :-)</p> <p>Here's a blog post describing our approach in some detail: <a href="http://irakorshunova.github.io/2016/03/15/heart.html">http://irakorshunova.github.io/2016/03/15/heart.html</a> and <a href="http://317070.github.io/heart/">http://317070.github.io/heart/</a></p> <p>Code and documentation to reproduce our solution will follow soon!</p> <p>Edit: the code and documentation of our solution: <a href="https://github.com/317070/kaggle-heart">https://github.com/317070/kaggle-heart</a></p> <p>I think the big issue is that it can be pretty wrong for some patients. The result you see on the leaderboard is only an average error, but that is because most predictions are very good and some are very bad outliers.</p> <p>This is typically NOT what you'd want for a medical application.</p> <p>The dataset would need to be cleaner. I reckon, however, that when the radiologists using this get clear instructions on what data is needed, it can become suitable. Especially since the algorithms seem to be reasonably good at telling when they don't know for sure.</p> <p>So, there is this other forum topic where it says &quot;accurate to within ~10%&quot;; that's only true on average. And in medicine, averages are no good.</p> <p>[quote=Ben Hamner;111655]What was the total computational time required to train your final model? What about making predictions from it on the test set?[/quote] We didn't separate the two, but it took us 6 days to retrain everything twice. So about 3 days per submission. However, we needed 14 GPUs to stay within that time frame.</p> <p>On a Titan X, one model would typically train in 4h for the smallest models, up to 36h for the slowest, also depending on the speed of reading from disk (which was the bottleneck on most of our machines). Generating predictions would take about 2h per model, since we predict every patient about 200 times before averaging.</p> <p>However, a single model without test-time augmentation could probably predict the entire set of 440 patients in about 1 minute with a loss of ~0.0107.</p> <p>Most of the computation was there to make the solution more robust, since it had to run on an unknown dataset, with possibly unknown data problems. In the end, it didn't look entirely necessary though.</p>
Second Annual Data Science Bowl
Code for 19th solutions
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Second Annual Data Science Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to the winners!</p> <p><a href="https://github.com/udibr/DSB2/blob/master/160306-Readme.ipynb">My code</a> reached 19th place, but I think it has useful ideas:</p> <ul> <li>converted the time dimension to the DC component and the first 2 cos and sin frequencies (5 channels)</li> <li>each horizontal cropped slice is fed into a CNN which predicts the volume contribution of each slice to the entire volume of the heart</li> <li>when predicting, the results from the same study are added up</li> <li>when training, a special arrangement is used in which all slices from the same study appear in the same batch, and the loss function sums all slices from the same study before computing the loss</li> <li>the CNN predicts both the volume and the error of the prediction, and the loss is the negative log-likelihood of a normal distribution</li> </ul>
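<p>The first bullet ("converted time to DC and first 2 cos and sin frequencies") can be read as projecting the 30 frames of each slice onto low-frequency temporal basis functions. A hedged NumPy sketch of that reading is below; the actual implementation in the linked code may differ (e.g. in scaling, or by using an FFT).</p> <pre><code>import numpy as np

def time_to_fourier_channels(frames, n_freq=2):
    # frames: array of shape (30, H, W) -- one cardiac cycle of a single slice.
    # Returns (1 + 2*n_freq, H, W): the DC component plus cos/sin projections
    # of the first n_freq temporal frequencies (up to a constant scale).
    t = np.arange(frames.shape[0])
    channels = [frames.mean(axis=0)]                     # DC component
    for k in range(1, n_freq + 1):
        phase = 2.0 * np.pi * k * t / frames.shape[0]
        cos_w = np.cos(phase)[:, None, None]
        sin_w = np.sin(phase)[:, None, None]
        channels.append((frames * cos_w).mean(axis=0))   # cos projection
        channels.append((frames * sin_w).mean(axis=0))   # sin projection
    return np.stack(channels)
</code></pre>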
Second Annual Data Science Bowl
Some outputs of my model
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Second Annual Data Science Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I used Spatially Sparse Networks again, though mostly because I was most familiar with the architecture and didn't have much time to implement anything else. In fact, I didn't submit earlier simply because I had never even evaluated patients 501-700 until three days ago.</p> <p>I imagine there are going to be a lot of innovative solutions to this competition. I wanted to post a few outputs of my neural network as I thought they were interesting. The outputs are volume estimates at 1mm intervals along the long axis of the left ventricle. The left ventricle ROI was determined by the cross-section of the 4Ch and 2Ch views.</p> <p>First some good results:</p> <p><img src="http://i.imgur.com/Efvy7R5.gif" alt="701" title> <img src="http://i.imgur.com/msoBmuH.gif" alt="1081" title> <img src="http://i.imgur.com/yE1QYI6.gif" alt="1031" title> <img src="http://i.imgur.com/MehFshO.gif" alt="812" title></p> <p>Here are a few failure cases. Bad left ventricle ROI:</p> <p><img src="http://i.imgur.com/VTJzaJ3.gif" alt="921" title></p> <p>Last SAX slice not within 10mm of the end of the heart:</p> <p><img src="http://i.imgur.com/79HuRFy.gif" alt="931" title></p> <p>I hand labeled 200 of the hearts and then used that data to bootstrap to the entire set of hearts 1-700. I will post a blog about the method at some other point. I think this method has a lot of room for improvement.</p>
Second Annual Data Science Bowl
Summary of our method
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Second Annual Data Science Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>[quote=Julian de Wit;111533]</p> <p>Did you try something in that direction ?</p> <p>[/quote]</p> <p>No, we didn't try any recurrent networks, for two reasons. There wasn't enough data to use one in the volume calculations. Also, we knew the volumes had been calculated from the areas in a predefined way, so it made sense to just reproduce that calculation instead of giving a model the chance to come up with the wrong conclusion. If we got the segmentation perfect and also knew the formula the volumes had been calculated with, the error would be zero, so that's the target we were aiming towards.</p> <p>For another option, like DRAW or a similar recurrent attention model, again there wasn't enough data and it also did not seem necessary. The fully convolutional feed-forward network was doing a good job with segmentation already, and we thought most of our error was coming from other places.</p> <p>We used a Gaussian CDF to come up with the probability values, and used a linear fit of stdev = a*predicted_volume + b, fit separately for systole and diastole for each model.</p> <p>Hello everyone,</p> <p>We wanted to share a quick summary of our method. We had a great time on this competition and learned a lot by working with this data, which is interesting by itself, and we're very glad it is also potentially useful for a real-life medical problem. We will briefly explain our approach here and post full documentation in a couple of days.</p> <p>Our method uses an average of 10 different fully convolutional neural networks (different architectures, input image sizes, or pre-processing transformations on the image), which output segmentation results for individual MRI images. We trained on the Sunnybrook data set, which has one MRI image as input and the contour of the LV as output. We manually added some additional images to the training image set (in total ~200 images). Adding these improves the performance of the network significantly, and we believe the result can still be significantly improved if more training images are added. We found ensembling only improves the result slightly (from something like 0.0096 for the best single network to 0.0093 on the train set).</p> <p>For the preprocessing, we tried a combination of different things:</p> <pre><code>1. Use the time variance of the images to determine a preliminary center of the LV and a bounding box, and crop from the center.
2. We rotated the images so all the cases are aligned to the same direction.
3. For the input image augmentation, we did random rotation, shift, and contrast normalization.
</code></pre> <p>Models were trained on two GPUs, an NVIDIA GTX 970 and a 980Ti. The entire model takes about 4 days to train and evaluate if both GPUs are used. We used Python, Theano, Lasagne, and cuDNN for the neural network implementation.</p> <p>The CNNs can detect the contour amazingly well for high-quality images. Once we have the contours, the volume is calculated basically as V_preliminary = \sum (area_i*thickness), and the maximum and minimum volumes can be determined. We hypothesized that much of the error comes from the end slices, where a human can decide whether or not to include them. 
With this in mind, we did a final fitting to correct some error, with a function V_pred = V - \beta * sqrt(V) and it does much better than a simple linear fitting.</p> <p>To deal with some extreme cases that our CNN model cannot predict good volumes for, we developed </p> <pre><code>1. a sex-age model ( score ~ 0.036 ) (basically the same as https://www.kaggle.com/c/second-annual-data-science-bowl/forums/t/18375/0-036023-score-without-looking-at-the-images) 2. a model based only on a single SAX slice. Score ~ 0.015 3. A model based on the 4-chamber view. We hand labeled many of these images to train this model. Score ~ 0.017 4. We took the average of these three models (which scores ~ 0.013) as the default model, if our SAX-based CNN model fails, we took the result from this model. </code></pre> <p>We also tried a Fourier-based segmentation method which gives a score about 0.016. Since it is quite complicated and does significantly worse than the CNN, after the early stages of the competition we dropped this model entirely, to simplify our work and code.</p> <p>Some observations:</p> <ol> <li><p>A lot of our effort was spent cleaning up data and dealing with edge cases. As kunsthart found, much of the error came from just a few cases, and in one case fixing a single prediction in the validation set dropped our score from ~ 0.0102 to 0.0098.</p></li> <li><p>The CNN architectures we used were relatively small. We found that adding capacity did not improve results, though we did not experiment much with different types of architectures or activation functions.</p></li> <li><p>Batch normalization helped enormously, as did using a modification of the Sorenson-Dice Index as the segmentation objective function, rather than binary cross-entropy.</p></li> <li><p>We did no test-time augmentation due to time constraints, this might have helped.</p></li> </ol> <p>We&#8217;ll write up a full documentation of our model in a couple of days. Thanks to Kaggle, Booz Allen Hamilton, and the administrators for running such an interesting and engaging competition, and to our fellow competitors :)</p> <p>The additional images were manually segmented from within the training set. These were images used to train the fully convolutional segmentation network, not to calculate volumes, so there was no age/sex info used.</p>
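<p>Observation 3 above credits a modified Sorenson-Dice index as the segmentation objective, outperforming binary cross-entropy. The exact modification isn't given in the write-up; as a reference point, a generic "soft" Dice loss looks like the sketch below (framework-agnostic NumPy, whereas the team used Theano/Lasagne).</p> <pre><code>import numpy as np

def soft_dice_loss(pred, target, smooth=1.0):
    # pred:   predicted LV probabilities in [0, 1], any shape
    # target: binary ground-truth mask of the same shape
    intersection = np.sum(pred * target)
    dice = (2.0 * intersection + smooth) / (np.sum(pred) + np.sum(target) + smooth)
    return 1.0 - dice
</code></pre>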
Second Annual Data Science Bowl
pure C++ solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Second Annual Data Science Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I think I'm doing more of software integration than research for accuracy. But here's a binary release that can make predictions with minimal hardware/software dependencies. It is the compressed version of our final model submission and reproduces the score of 0.011645.</p> <p><a href="https://github.com/aaalgo/adsb2">https://github.com/aaalgo/adsb2</a></p> <p>This represents my ideal of how software should be packaged.</p>
See Click Predict Fix
My favorite model
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: See Click Predict Fix <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>[quote=Miroslaw Horbal;35458]</p> <p>That's pretty cool. I wonder how your results would change if you used median values instead of mean values. Even in logspace the distributions are skewed towards smaller values, so median may be a better 'average' metric in this case.&nbsp;</p> <p>[/quote]</p> <p>I just checked that, it's 0.319</p> <p>[quote=Fabio Vessoni;35474]</p> <p>Hi David! That's amazing. I tried a lot of things, this is much simpler than everything I thought..</p> <p>I rewrote your code in SQLServer. Just 2 queries..</p> <p>But I am getting 0.36 against the private leaderboard (not 0.31). I will try again tomorrow, using the median instead of the mean, as Miroslaw suggested, and post the results here.</p> <p>[/quote]</p> <p>Fabio, I just ran the code with the log-transform/inverse transform removed and got a 0.369. The mean in the attached code is taken on the log-transformed labels, then those predictions are sent back to raw space before being written. The reason for that is that we are minimizing RMSLE on the original variables, which is RMSE on the transformed variables. For minimizing RMSE, the mean value is a reasonable guess.</p> <p>This wasn't one of my selected entries in this competition, but it is a good example of how sometimes very simple models can punch far above their weight. The model just groups the training data by city and source (reduced to 3 levels: remote_api_created, city_initiated and everything else), takes a mean (in logspace) and applies those values as predictions, which are then sent back to raw space. Using the last 4 weeks of the data, this gets 0.31499 against the private leaderboard, which would rank in the high 70's, easily inside the top 25%. A refactored, turnkey version of it is attached, but the gist of it is here:</p> <p><code> mean_vals = train.groupby(['city', 'src']).mean()<br> test = test.merge(mean_vals,<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; how = 'left', <br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; left_on = ['city', 'src'],<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; right_index = True,<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; sort = False,<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; copy = False)<br></code></p> <p>This just uses python/pandas, with no real algorithm other than grouping and aggregation.</p>
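<p>To make the log-space round trip explicit, here is a hedged, self-contained version of the same idea; the target column names follow the competition data (num_views, num_votes, num_comments) and the rest is illustrative rather than the exact attached script:</p>
<pre><code>import numpy as np
import pandas as pd

TARGETS = ["num_views", "num_votes", "num_comments"]

def fit_group_means(train):
    # averaging log1p-transformed labels: minimizing RMSE in this space
    # is the same as minimizing RMSLE on the raw counts
    logged = train.copy()
    logged[TARGETS] = np.log1p(logged[TARGETS])
    return logged.groupby(["city", "src"])[TARGETS].mean()

def predict(test, mean_vals):
    preds = test.merge(mean_vals, how="left",
                       left_on=["city", "src"], right_index=True)
    # back to raw space before writing the submission
    preds[TARGETS] = np.expm1(preds[TARGETS])
    return preds
</code></pre>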
See Click Predict Fix
Source Code for Bryan Gregory & Miroslaw Horbal
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: See Click Predict Fix <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello,</p> <p>Hope everyone had a great holidays.&nbsp;</p> <p>Here are github repositories for our team's code, released under the BSD-3 OSI license. &nbsp;Everything is coded using the open source Python/PANDAS/SKLearn stack of tools.</p> <ul> <li><span style="line-height: 1.4">Ensemble code used for combining our predictions. &nbsp;This includes code for our individual base models (this is all you need to download to re-create our solution)</span></li> <ul> <ul> <li><a href="https://github.com/theusual/kaggle-seeclickfix-ensemble"><span style="line-height: 1.4">https://github.com/theusual/kaggle-seeclickfix-ensemble</span></a></li> </ul> </ul> <li>Bryan's individual model code:</li> <ul> <ul> <li><a href="https://github.com/theusual/kaggle-seeclickfix-model">https://github.com/theusual/kaggle-seeclickfix-model</a></li> </ul> </ul> <li>Miroslaw's individual model code:</li> <ul> <ul> <li><a href="https://github.com/beegieb/kaggle_see_click_fix">https://github.com/beegieb/kaggle_see_click_fix</a></li> </ul> </ul> </ul> <p>&nbsp;</p> <p>Also, here is a more in-depth description of our code:</p> <p><a href="http://bryangregory.com/Kaggle/DocumentationforSeeClickFix.pdf">http://bryangregory.com/Kaggle/DocumentationforSeeClickFix.pdf</a></p> <p>And a description of our background and approach to the contest:</p> <p><a href="http://bryangregory.com/Kaggle/Kaggle-SeeClickFix-HowIDidIt.pdf">http://bryangregory.com/Kaggle/Kaggle-SeeClickFix-HowIDidIt.pdf</a></p> <p>&nbsp;</p> <p>Please let us know if you have any questions or feedback!</p> <p>-Bryan &amp; Miroslaw</p>
See Click Predict Fix
Source code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: See Click Predict Fix <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><span style="line-height: 1.4">This is my source code under&nbsp;</span><span style="line-height: 1.4">BSD-3 OSI license,</span></p> <p><span style="line-height: 1.4">https://github.com/BlindApe/SeeClickPredictFix</span></p> <p>&nbsp;</p> <p>&nbsp;</p> <p>&nbsp;</p>
See Click Predict Fix - Hackathon
Congrats to the winners!vecorization
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: See Click Predict Fix - Hackathon <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>[quote=Leustagos;31749]</p> <p>Since the competition ended, could somebody publish an example of tdf-idf features creation for this dataset?</p> <p>I would really like to learn some NLP. :)</p> <p>[/quote]<br><br>The easiest and quickest way is to use sklearn's&nbsp;<a href="http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html">TFIDF vectorizer</a><br><br><a href="http://pastebin.com/tu8cy0pw">Sample code&nbsp;</a>with numpy and pandas</p> <p>Congrats James for getting the top score, and&nbsp;Tunguska for the win at the event! I'm looking forward to hearing your methods.&nbsp;<br><br>Some notes from my side:<br><br>1. Directly optimizing RMSLE was important in getting a competitive score on the leaderboard.&nbsp;</p> <p>2. It was very easy to overfit the training data. I hadn't noticed that I was overfitting until the last hour of the competition, but early stopping seemed to be useful.&nbsp;<br><br>I'm curious what kind of features people used, personally I used:<br>- Individual TFIDF vectorization for summary and description text<br>- 1 / (1 + days from first 311 issue)<br>- One hot encoded information for tags, and source<br>- Binary indicator for each of the four regions from latitude and longitude<br><br>I used a linear model for the entire competition. But I suspect deep learning could be very powerful (although slow)<br><br>Looking forward to reading your insights.&nbsp;<br><br>EDIT:&nbsp;<br><br>Wow... that title got mangled - must have accidentally pressed the middle mouse button before creating the thread. Is there any way to edit the title?</p> <p>[quote=Black Magic;31793]</p> <p>Normal cross-validation won't work for this competition.</p> <p><br>I ordered the observations based on time and used the first half to train and second to train third. my CV scores matched that of leaderboard</p> <p>[/quote]<br><br>Similarly, I used the 20% of most recent entries for cross validation and the score was reasonably close to the leaderboard, but more importantly, improvement on CV was correlated to improvement on LB.&nbsp;</p> <p>[quote=icetea;31766]</p> <p>[quote=Miroslaw Horbal;31730]</p> <p>2. It was very easy to overfit the training data. I hadn't noticed that I was overfitting until the last hour of the competition, but early stopping seemed to be useful.&nbsp;</p> <p>[/quote]</p> <p>Miroslaw, what was a signal of overfitting for you?</p> <p>[/quote]<br><br>Looking at cross validation scores per iteration of training.&nbsp;</p>
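<p>For readers asking about the TF-IDF features: below is a minimal, hedged sketch of the kind of pipeline described above. The column names follow the competition data, <code>train</code> and <code>test</code> are assumed to be the loaded DataFrames, and the parameters and the Ridge model are illustrative rather than my exact setup.</p>
<pre><code>import numpy as np
from scipy.sparse import hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

# separate TF-IDF vocabularies for the two text fields
vec_summary = TfidfVectorizer(min_df=3)
vec_description = TfidfVectorizer(min_df=3)

X_train = hstack([
    vec_summary.fit_transform(train["summary"].fillna("")),
    vec_description.fit_transform(train["description"].fillna("")),
]).tocsr()

# fitting on log1p(y) with a squared-error loss directly optimizes RMSLE
model = Ridge(alpha=1.0)
model.fit(X_train, np.log1p(train["num_views"]))

X_test = hstack([
    vec_summary.transform(test["summary"].fillna("")),
    vec_description.transform(test["description"].fillna("")),
]).tocsr()
pred = np.expm1(model.predict(X_test))
</code></pre>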
Semi-Supervised Feature Learning
Contest methods
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Semi-Supervised Feature Learning <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Surprisingly, one supervised method we tried on a whim to establish a supervised baseline ended up as the top performing model. &nbsp;Here's a brief description of our method:</p> <div>Though we explored various unsupervised and semi-supervised options, our best submission consisted of purely supervised features: the posterior probabilities output from Breiman's Random Forest algorithm. &nbsp;Input features to this algorithm came from the union of two sets: the top k features with the most non-zeros, and the top k features with the largest difference between class means. &nbsp;The sum of all features for each data point was included as well. &nbsp;Only the labeled dataset was used to select these features and train the model.</div> <div>A total of 5 submissions were included as final features, each comprising a different number of the top features from the two feature selection methods (ranging from ~600 to ~1400 total features) and&nbsp;slightly&nbsp;different random forest parameters.</div> <div>This method got first on the public and private sets, with a private AUC of 0.9771.</div> <div>Here's several of the alternatives we explored:</div> <div> <ul> <li><span style="direction:ltr">Weighted k-means / mini-batch k-means</span> </li><li><span style="direction:ltr">Wrapper methods around supervised methods to incorporate unsupervised data</span> </li><li><span style="direction:ltr">Wrapper methods &#43; multiple views around supervised methods</span> </li><li><span style="direction:ltr">SVD</span> </li></ul> </div> <div>Here's some of the other methods that we did not have the time and/or computational resources to explore, but we wanted to and are curious to see if other contestants gave them a shot:</div> <div> <ul> <li><span style="direction:ltr">Sparse autoencoders</span> </li><li><span style="direction:ltr">Deep belief networks</span> </li><li><span style="direction:ltr">Restricted boltzmann machines</span> </li><li><span style="direction:ltr">Latent dirichlet allocation</span> </li><li><span style="direction:ltr">Self-organizing maps</span> </li><li><span style="direction:ltr">Graphical models</span> </li></ul> </div>
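<p>A minimal sketch of the pipeline described above: feature selection as the union of the top-k columns by non-zero count and by class-mean difference, plus a row-sum feature, feeding a random forest whose posterior probabilities become the submitted feature. It assumes dense numpy arrays, and k and the forest parameters are placeholders.</p>
<pre><code>import numpy as np
from sklearn.ensemble import RandomForestClassifier

def select_columns(X, y, k):
    nnz = (X != 0).sum(axis=0)                                       # top-k by non-zero count
    diff = np.abs(X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0))   # top-k by class-mean gap
    return np.union1d(np.argsort(nnz)[-k:], np.argsort(diff)[-k:])

def rf_posterior_feature(X_labeled, y_labeled, X_all, k=500):
    cols = select_columns(X_labeled, y_labeled, k)

    def row_sum(X):
        return X.sum(axis=1, keepdims=True)   # sum of all features per data point

    rf = RandomForestClassifier(n_estimators=500, n_jobs=-1)
    rf.fit(np.hstack([X_labeled[:, cols], row_sum(X_labeled)]), y_labeled)
    return rf.predict_proba(np.hstack([X_all[:, cols], row_sum(X_all)]))[:, 1]
</code></pre>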
SenNet + HOA - Hacking the Human Vasculature in 3D
110th Place, Did Not Work: Splitting Thin/Thick Vessels Approach
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SenNet + HOA - Hacking the Human Vasculature in 3D <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>110th Place, Did Not Work: Splitting Thin/Thick Vessels Approach</h1> <p>Thank you for hosting the competition.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2930242%2F6d623087b77192ed018bb4216e81f873%2Fvessel.png?generation=1707318307890041&amp;alt=media"></p> <h2>Overview</h2> <ul> <li>Employed both 2D Unet and 3D Unet approaches.</li> <li>Expectation: The 2D model was expected to capture larger vessels, while the 3D model should have been better at capturing smaller vessels.</li> <li>2D Unet 512 x 512 (xy,yz,xz)<ul> <li>backbone: SE_resnext50</li></ul></li> <li>3D Unet 128 x 128 x 128<ul> <li>backbone: Resnet18</li></ul></li> </ul> <h2>Training</h2> <ul> <li>Pre-training with all data</li> <li>Fine-tuning with the dense dataset (kidney1&amp;3)<ul> <li>augmentation with the monai API for multiple masks</li></ul></li> </ul> <pre><code>import numpy as np
from monai.transforms import (RandRotated, RandFlipd, RandGridDistortiond,
                              RandZoomd, RandAdjustContrastd)

# NOTE: the dictionary keys are assumed to be the image plus the three masks
# produced by the thin/thick split below.
keys_all = ["image", "thick_mask", "thin_mask", "skeleton"]

aug_list = [
    RandRotated(keys=keys_all, range_x=np.pi/180 * 90, range_y=np.pi/180 * 90,
                range_z=np.pi/180 * 90, prob=1),
    RandFlipd(keys=keys_all, prob=1),
    RandGridDistortiond(keys=keys_all, prob=1, distort_limit=(-0.03, 0.03)),
    RandZoomd(keys=keys_all, min_zoom=1, max_zoom=6/5, prob=1),
    RandAdjustContrastd(keys=["image"], prob=1, gamma=(0.8, 2.5)),
]
</code></pre> <h2>Data preprocessing</h2> <ul> <li>Thin/thick vessel split approach<ul> <li>Hypothesis: The premise was that a difference in contrast, dependent on vessel thickness, would result in unstable thresholding. This challenge was similar to issues discussed in papers on retinal vessel detection.</li> <li>Reference: <a href="https://www.frontiersin.org/articles/10.3389/fbioe.2021.697915/full" target="_blank">https://www.frontiersin.org/articles/10.3389/fbioe.2021.697915/full</a><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2930242%2Fadb0df2308c1f3db2936a04bfd945f3e%2Ffbioe-09-697915-g002.jpg?generation=1707318581847217&amp;alt=media"></li> <li>But overfitted to CV…</li> <li>Source code: 3d splitting code</li></ul></li> </ul> <pre><code>import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize_3d

# mask_data: binary 3D vessel mask (numpy array)
dist_transform_varied_thickness = distance_transform_edt(mask_data)
skeleton_varied_thickness = skeletonize_3d(mask_data)

z_varied_thickness, y_varied_thickness, x_varied_thickness = np.where(skeleton_varied_thickness)
num_points = len(x_varied_thickness)
thicknesses_varied_thickness = np.array(
    [dist_transform_varied_thickness[z, y, x] * 2
     for z, y, x in zip(z_varied_thickness, y_varied_thickness, x_varied_thickness)])
thicknesses_varied_thickness.shape, thicknesses_varied_thickness[:10]

from scipy.spatial import cKDTree

skeleton_coords_3d = np.column_stack([z_varied_thickness, y_varied_thickness, x_varied_thickness])
skeleton_thicknesses_3d = thicknesses_varied_thickness
skeleton_tree_3d = cKDTree(skeleton_coords_3d)

mask_coords_3d = np.column_stack(np.where(mask_data))
distances_3d, indices_3d = skeleton_tree_3d.query(mask_coords_3d)

colored_mask_3d = np.zeros_like(mask_data, dtype=float)
for idx, coord in enumerate(mask_coords_3d):
    colored_mask_3d[coord[0], coord[1], coord[2]] = skeleton_thicknesses_3d[indices_3d[idx]]
colored_mask_3d.shape

thickess = colored_mask_3d
skeleton = skeleton_varied_thickness

thickess_mask = np.zeros_like(thickess, dtype=np.uint8)
thickess_mask[thickess &gt; 0] = 1
thickess_mask[thickess &gt; 5] = 2

thick_mask = np.zeros_like(thickess, dtype=np.uint8)
thick_mask[thickess_mask == 2] = 1
thick_mask = thick_mask | skeleton

thin_mask = np.zeros_like(thickess, dtype=np.uint8)
thin_mask[thickess_mask == 1] = 1
thin_mask = thin_mask | skeleton
</code></pre> <h2>Inference</h2> <ul> <li>Ensemble (2D and 3D)</li> <li>Threshold adjusted with kidney2</li> </ul> <h1>Post processing</h1> <ul> <li>Merge the estimations, each binarized with its own threshold</li> </ul> <pre><code>import torch

# NOTE: the channel indices, default thresholds, scale factor k and the 255
# multiplier are reconstructed placeholders, not the original literal values.
def post_processing(y_pred: torch.Tensor, thr=[0.5, 0.5, 0.5], k=1.0):
    y_pred0 = y_pred[0, :] * k
    y_pred1 = y_pred[1, :]
    y_pred2 = y_pred[2, :]
    y_pred0 = (y_pred0 &gt; int(thr[0] * 255))
    y_pred1 = (y_pred1 &gt; int(thr[1] * 255))
    y_pred2 = (y_pred2 &gt; int(thr[2] * 255))
    y_pred = y_pred0 | y_pred1 | y_pred2
    return y_pred
</code></pre>
SenNet + HOA - Hacking the Human Vasculature in 3D
12st Place Solution for the SenNet + HOA - Hacking the Human Vasculature in 3D
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SenNet + HOA - Hacking the Human Vasculature in 3D <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Context</h1> <p>This solution was implemented as part of a blood vessel segmentation competition organized by the Common Fund’s Cellular Senescence Network (SenNet) Programm in cooperation with the Human Organ Atlas (HOA). <br> Competition overview page: <a href="https://www.kaggle.com/competitions/blood-vessel-segmentation" target="_blank">SenNet + HOA - Hacking the Human Vasculature in 3D</a><br> Competition dataset is <a href="https://www.kaggle.com/competitions/blood-vessel-segmentation/data" target="_blank">here</a><br> Many thanks to the organizers for the opportunity!</p> <h1>Overview</h1> <p>Framework — <strong>TensorFlow</strong><br> Data pipeline — 2d, roi, resize (<strong>1024x704</strong>), <strong>tfrecord</strong><br> Model — almost classic <strong>U-net</strong> (details below)<br> The solution is presented in two notebooks:</p> <ul> <li><a href="https://www.kaggle.com/code/pib73nl/sennet-hoa-bvs-12th-place-solution-train" target="_blank">train</a></li> <li><a href="https://www.kaggle.com/code/pib73nl/sennet-hoa-bvs-12th-place-solution-infer" target="_blank">inference</a></li> </ul> <h1>Disclamer</h1> <p>This solution was developed in December 2023, before Santa's New Year's gift, which ultimately helped more than seven hundred participants to jump above 0.8. I left for the holidays in 238th place with a score of 0.567. When I next opened the leaderboard a week later, I had dropped over 150 positions! And that was just the beginning! ;) I worked on this approach for another week. I must say that I did this without much enthusiasm, since by improving the metric I found myself lower and lower in the ranking.<br> Finaly, with a result of 0.636, which was achieved by increasing the image size and minor architecture changes, I began to look for other approaches (see below in chapter <em>„Fruitless attempts“</em>).</p> <h1>Data preparation</h1> <p>All data (except for the kidney_3_dense labels) were used as training data. The images have large fields that contain no useful information. To reduce these fields, the images were preprocessed to extract roi using statistical methods. </p> <pre><code> (): row_mask = image.std(axis=)&gt; clmn_mask = image.std(axis=)&gt; row_mask = cleaning_mask(row_mask) clmn_mask = cleaning_mask(clmn_mask) image = image[row_mask,:][:, clmn_mask] label = label[row_mask,:][:, clmn_mask] (label, np.ndarray) row_pad = (row_mask.argmax(), row_mask[::-].argmax()) clmn_pad = (clmn_mask.argmax(), clmn_mask[::-].argmax()) image, label, (row_pad, clmn_pad) (): mask[] = mask[-] = frames = np.nonzero(mask[:-]!=mask[:])[] delta = frames[:]-frames[:-] max_solid_block_begin = np.argmax(delta) garbage = np.delete(frames, [max_solid_block_begin, max_solid_block_begin+]) a, b (garbage[::], garbage[::]): mask[a+:b+] = mask </code></pre> <p>Next, all images were reduced to a single size of 1024x704. The experiments started with a size of 384x256, and as the size increased, the result expectedly improved. 1024x704 is the maximum size that did not result in an OOM error. 
An example of the processed image is below.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F937321%2Fdc967a133b4763004a3d347af9c0af4e%2Fprepr_img.png?generation=1707740226468394&amp;alt=media" alt="Example of a processed image"><br> Every 25 images (4%) were used for validation, since the density of the labels varies greatly along the z-axis.<br> The resulting images and tags were packed into tfrecord files to organize a multi-threaded pipeline (total files - 92, 162 MB each). The maximum possible batch size for the 1024x704 shape turned out to be 32. Augmentation was not used - I just couldn’t get around to it!</p> <h1>Model</h1> <p>The more or less classical <strong>U-net</strong> architecture was used as a model.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F937321%2Fc84d61b6e342a86e18e47abafb575cb5%2Funet%20arcitect.png?generation=1707741082913852&amp;alt=media" alt="U-net architect"><br> Losses were estimated using <strong>binary crosentropy</strong>. The <strong>Adam</strong> optimizer was used for optimization. The <strong>learning rate</strong> was changed according to the <strong>cosine decay</strong> schedule with warmup.<br> Since there is a significant class imbalance, weights were used. The idea was to set the weights at the instance level, since the class ratios vary greatly as we move from the center of the kidney to the edges (along the z-axis). But to begin with, I hardcode the weights, and it worked tolerably well. I didn't return to this issue later, so there is room for improvement.<br> The model was created from scratch and trained for 60 epochs. For prediction, epochs with a minimum value of validation losses were taken.<br> The prediction result looked something like this:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F937321%2F7325fd2a8223112484047b8a02408aed%2Fpredict_res.png?generation=1707740910547839&amp;alt=media" alt="Prediction result"><br> There are quite a lot of FP here… However, it makes sense to work on the sample weights 🤔</p> <h1>Fruitless attempts</h1> <p>Obviously, given the large number of small details, any resizing harms the result. I tried to solve this problem by dividing the image into fragments (intersecting tiles of 256x256 size). I used the same model architecture. But the labels turned out to be exclusively in the places where the tiles overlapped, and having assembled a mask from the tiles, I got a blank sheet! I haven't had time to figure this out.<br> Second. I tried to solve the problem of label resizing by changing the architecture - I added another “kinda u-net” to the end of decoder - 2 convolution layers and two reconvolution ones. 
Didn't do well here either, but would have been in 67th place on the private leaderboard 😉<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F937321%2F5d62e9402997d2db29eff231c65447af%2Fscore_of_ext_model.png?generation=1707741415352143&amp;alt=media"></p> <h1>Some observations</h1> <p>Yes, yes… There was a big quake… For some reason, most of the solutions failed in suspiciously similar ways ;) This is clearly noticeable in the interval of about 100-600 places.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F937321%2Fbf0eec25d430cbc29190c3b83d5e5dfb%2FLB_quake.png?generation=1707741692231072&amp;alt=media" alt="LB shake plot"><br> My main solutions, similar in network architecture and image size to the winning one, gave stable results on a public and private dataset. On a private dataset - even a little better!<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F937321%2F3673a9e41dfdde1120b98b7f01a75017%2Fscore_of_win_model.png?generation=1707741845733027&amp;alt=media" alt="Score of the winning model"><br> The difference between the public and private data sets was 0.012 points. Such stable results in the first thousand can be counted on the fingers of one hand. In general, the variance is already normal, all that remains is to work on the bias 😁<br> Thanks to everyone who worked on the problem! It was interesting with you! Good luck! ✋</p>
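<p>To illustrate the statistical ROI extraction from the data-preparation step above, here is a hedged sketch of the idea: keep only the rows and columns whose intensity variation suggests tissue. The threshold value and the hole-filling rule are simplified placeholders, not my exact code.</p>
<pre><code>import numpy as np

def fill_to_solid_block(mask):
    """Turn a ragged 1-D boolean mask into one solid block of True values."""
    idx = np.flatnonzero(mask)
    if idx.size == 0:
        return mask
    solid = np.zeros_like(mask)
    solid[idx.min():idx.max() + 1] = True
    return solid

def extract_roi(image, label, threshold=40.0):
    """Crop away rows/columns whose standard deviation is below `threshold`."""
    row_mask = fill_to_solid_block(image.std(axis=1) &gt; threshold)
    col_mask = fill_to_solid_block(image.std(axis=0) &gt; threshold)
    return image[row_mask][:, col_mask], label[row_mask][:, col_mask]
</code></pre>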
SenNet + HOA - Hacking the Human Vasculature in 3D
132th Solution: 2D UNet with p1-p99 normalization
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SenNet + HOA - Hacking the Human Vasculature in 3D <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>A simple tu-seresnext50_32x4d UNet with volumes normalized on percentiles 1 and 99. The inference was a votation accumulation from x3 axis x4 rot90 scan of the minimum square possible padding with reflections. TH = 6 (of 12, .5)<br> <a href="https://www.kaggle.com/code/sacuscreed/smp-2d-inference-back-to-basics?scriptVersionId=161523508" target="_blank">https://www.kaggle.com/code/sacuscreed/smp-2d-inference-back-to-basics?scriptVersionId=161523508</a> <br> The training was made on kidney_1_dense and kidney_3_dense 512x512 centred on labels centroid crops padded with reflections too and basic augmentations. Batches of 8 (4 of each kidney) during 50 epochs without validation:<br> <a href="https://www.kaggle.com/code/sacuscreed/143th-solution" target="_blank">https://www.kaggle.com/code/sacuscreed/143th-solution</a></p>
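<p>A minimal sketch of the p1-p99 volume normalization mentioned above; the clip to [0, 1] and the epsilon are additions for illustration only.</p>
<pre><code>import numpy as np

def normalize_p1_p99(volume):
    """Scale a whole kidney volume by its 1st and 99th intensity percentiles."""
    p1, p99 = np.percentile(volume, (1, 99))
    volume = (volume.astype(np.float32) - p1) / (p99 - p1 + 1e-6)
    return np.clip(volume, 0.0, 1.0)
</code></pre>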
SenNet + HOA - Hacking the Human Vasculature in 3D
13th place solution: 4-panel solo model
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SenNet + HOA - Hacking the Human Vasculature in 3D <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><ol> <li>Yes, both cv and lb scores without the 4 panel (256px patches separate) were worse. That is why I pursuit this 4-panel image model further.</li> <li>Yes that is correct! Another nice thing to add to that: most sliding windows are seen by the model 4 times, once in each quarter of the 4-panel image. If that makes sense.</li> <li>I think this is a setting that I should have tested further, but due to time and other priorities I did not. Setting the merge option on mean or max did not change my cv or lb score too much. My cv score was slightly worse with max and the lb slightly improved with max. Why I eventually went for the max setting was the idea that it would better predict blood vessels cut off on the edge of the patch. If the model would be unable to predict the blood vessels that were cut off by the edge and in an overlapping patch it would be able to predict the segmentation of these blood vessels, taking the mean of these predictions would lower the probability. With the risk that these blood vessels would be excluded after thresholding. But taking with the max you would keep the highest probabilities.</li> <li>Yes for the interference I put the overlap on 0.1. I did not see and difference by putting it on 0.2. However this was when I still had the setting to merge the patches by mean. Unfortunately, I ran out of time and submissions to test it with max hahah:) It took around 7 hours to score on the lb.</li> <li>I did not try using the slices as channels, mostly because of the comments saying that a normal 2D model outperformed such 2.5D model. Therefore, I cannot say if such a model would have outperformed this 4-panel model.</li> </ol> <p>Hope this answers your questions!</p> <p>First I would like to thank the host for this wonderful competition, I really enjoyed participating in it! I would also like to thank <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> for his valuable and interesting comments, I learned a lot from this! As my solution was inspired by the <a href="https://www.kaggle.com/competitions/google-research-identify-contrails-reduce-global-warming/discussion/430618" target="_blank">winning solution</a> of the contrails competition 6 months ago, I want to thank <a href="https://www.kaggle.com/junkoda" target="_blank">@junkoda</a> as well! </p> <p><strong>4-panel image</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5960015%2F1cc75a5e44c828cbb2b05af1a2d91558%2FSenNetHOA23_4Panel_example.png?generation=1707287151880105&amp;alt=media"></p> <p>Inspired by <a href="https://www.kaggle.com/junkoda" target="_blank">@junkoda</a> winning solution, the model was trained on 4-panel images consisting of 256px sized patches of consecutive slices (creating 512px by 512px images). Therefore, I was able to keep 2.5D dimensionality in a 2D image. 
The idea was that the model would learn this relationship between slices and therefore would predict a more continuous segmentation.<br> The 256px patches were made with the <a href="https://github.com/Mr-TalhaIlyas/EMPatches" target="_blank">EMPatches</a> library. Which made stitching the separate 256px patches from each 4-panel image a lot easier.<br> The images were normalized based on the percentile of the whole kidney. All values lower than 0.5 were clamped to 0.5, to normalize the background better:</p> <pre><code>lo, hi = .percentile(kidney_volume.numpy(), (, )) def preprocess_image(, lo, hi): = .to(torch.float32) = ( - lo) / (hi - lo) = torch.clamp(, =) </code></pre> <p><strong>Augmentations</strong><br> I used simple augmentations for training: </p> <pre><code>train_transform = A.Compose([ A.RandomRotate90(=1), A.HorizontalFlip(=0.5), A.VerticalFlip(=0.5), A.RandomBrightness(=1), A.OneOf( [ A.Blur(=3, =1), A.MotionBlur(=3, =1), ], =0.9, ), ]) </code></pre> <p>What did not work for me were the augmentations based on scaling of the image.</p> <p><strong>Submission</strong><br> For the submission part: each patch in the 4-panel image was rotated on each own. Where after, the mean was taken of the rotated patches. The mean was taken as well of all the patches in the separate 4-panel images. These patches were merged with the ‘max’ setting. This was performed for the xy, xz, yz rotations of the whole kidney volume.</p> <p><strong>Model</strong><br> The model that was trained on these 4-panel images was a Unet maxvit_tiny_tf_512 using segmentation models pytorch (SMP). The model was trained on 3 whole kidney volume rotations with 0.4 overlap in the patches (~490.000 different images). The model was trained for 9 epochs with a 1e-4 lr and then another 6 epochs with CosineAnnealingLR to 1e-6.</p> <ul> <li>Link to Kaggle submission notebook: <a href="https://www.kaggle.com/code/menno1111/sennet-hoa23-2-5d-4-panel-submission/" target="_blank">SenNet-HOA23 | 2.5D 4-panel | submission</a></li> <li>Link to the training and validation notebooks: <a href="https://github.com/Menno-Meijer/SenNet_VasculatureSegmentation_Competition" target="_blank">GitHub</a></li> </ul>
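<p>For clarity, a minimal sketch of the 4-panel construction described above: the same 256px window taken from four consecutive slices is tiled into one 512x512 image. The 2x2 layout order is an assumption, and the real pipeline used EMPatches for extracting and stitching the windows.</p>
<pre><code>import numpy as np

def make_four_panel(volume, z, y, x, patch=256):
    """Stack the same patch window from slices z..z+3 into a 2x2 panel image.

    volume: (D, H, W) array; (y, x) is the top-left corner of the window.
    """
    tiles = [volume[z + i, y:y + patch, x:x + patch] for i in range(4)]
    top = np.concatenate(tiles[:2], axis=1)       # slices z and z+1 side by side
    bottom = np.concatenate(tiles[2:], axis=1)    # slices z+2 and z+3 side by side
    return np.concatenate([top, bottom], axis=0)  # 512 x 512 four-panel image
</code></pre>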
SenNet + HOA - Hacking the Human Vasculature in 3D
14th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SenNet + HOA - Hacking the Human Vasculature in 3D <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Overview</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3823496%2F6a7d79a9543e55064e881b6961b14061%2Foverview.jpg?generation=1707324869303897&amp;alt=media" alt="overview"></p> <ul> <li>2.5D segmentation model that inputs N consecutive slices stacked in ch direction and outputs corresponding Nch masks. </li> <li>Input images are cropped to the kidney area only and then resized. </li> </ul> <h1>Pipeline Detail  </h1> <h2>1. Preprocess</h2> <h3>1-1. Normalization</h3> <ul> <li>A histogram of luminance values is calculated for the entire kidney and normalized based on minimum and maximum values. </li> <li>Normalization based on maximum and minimum values per image unit could cause variations in the appearance of images, resulting in unnatural switching of inference results. To counteract this, normalization based on the luminance distribution of the entire kidney was employed. </li> <li>The code is as follows. </li> </ul> <pre><code> (): img_paths = (glob(os.path.join(image_dir, ))) pixels = np.zeros((,), dtype=np.int64) img_path tqdm(img_paths): img = cv2.imread(img_path, cv2.IMREAD_UNCHANGED) _pixels = np.bincount(img.flatten(), minlength=) pixels += _pixels = hist = [] bins = [] i (, +, ): hist.append(pixels[i:i+].()) bins.append(i) hist = np.array(hist) hist_rate = hist/hist.() idxes = np.where(hist_rate&gt;)[] min_idx = idxes[] max_idx = idxes[-] bins[min_idx]-, bins[max_idx]+ min_val, max_val = get_min_max_val(inference_img_dir) img = cv2.imread(path, cv2.IMREAD_UNCHANGED) img = img.astype() img = np.clip(img, min_value, max_value) img = (img-min_value)/max_value </code></pre> <h3>1-2. Crop</h3> <ul> <li>Obtain a rectangle of the kidney region using a segmentation model that infers a mask of the entire kidney. <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3823496%2F68c25e3b97c72b5b599b5607e8bcff8f%2Fcrop.jpg?generation=1707324937523266&amp;alt=media" alt="crop"></li> <li>The kidney segmentation model used a single model of Unet (backbone: efficientnet_b4) and only kidney_1_dense was used as training data. </li> <li>Cropping only the kidney region eliminates wasted areas in the image and greatly improves the accuracy of vessel segmentation. </li> <li>During training, the height and width of the rectangle were stochastically increased or decreased by ±5% as part of augmentation.</li> </ul> <h3>1-3. Resize</h3> <ul> <li>Because of the strict masking requirements of this competition metric, it was important to resize the image to a larger image size. </li> <li>In my solution, I trained the model by resizing the image to as large as GPU memory would allow, in the range of 1536~1920. </li> <li>If the mask is resized by OpenCV's resize function and then resized back to the original size again, the mask pixels are shifted to the lower right, resulting in a significant loss of accuracy. 
Therefore, care should be taken in resizing.<ul> <li>In my solution, I used an affine transformation that simultaneously translates by 0.5 pixel and scales the image to prevent pixel misalignment.  </li> <li>Incidentally, this idea is strongly influenced by the contrail competition solution. <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3823496%2Fe326568835894e0923ac98172873977f%2Fresize.jpg?generation=1707324969473901&amp;alt=media"></li></ul></li> </ul> <h2>2. Vessel Segmentation</h2> <h3>Model</h3> <ul> <li>Model was Unet (using smp implementation), resnest14d, resnest50d, maxvit_tiny were used for backbone, and ensemble with equal weights was used as final sub. </li> <li>Since I wanted to use depth direction information as well, we employed a 2.5D model that takes an input image consisting of n (5 or 7) consecutive slices stacked in the ch direction and outputs the corresponding n ch masks. </li> </ul> <h3>Data</h3> <ul> <li>The data was kidney_1_dense as training data and kidney_3_dense as validation data. Some models used kidney_2 and Pseudo Labeled data for external data as training data. </li> </ul> <h3>Augmentation</h3> <ul> <li>Use augmentation on rotation, flipping, and brightness (using the albumentations implementation).</li> <li>The shape-changing type augmentaion (e.g., Distortion) was tried but was not used because it worsens the accuracy of both cv/lb.</li> </ul> <h3>Inference</h3> <ul> <li>Inference in each view in XY, XZ, and ZY directions. </li> <li>The accuracy of both cv/lb was increased by inputting a larger size than the image size used for training during inference. </li> <li>The threshold was determined based on CV and used 0.25. </li> </ul> <h3>Summary</h3> <p>The final scores are as follows. </p> <table> <thead> <tr> <th>Model</th> <th>N(ch)</th> <th>train_data</th> <th>validation_data</th> <th>input_size(train)</th> <th>input_size(inference)</th> <th>CV</th> <th>Public</th> <th>Private</th> </tr> </thead> <tbody> <tr> <td>resnest14d</td> <td>7</td> <td>kidney_1_dense</td> <td>kidney_3_dense</td> <td>1920x1920</td> <td>2304x2304</td> <td>0.909</td> <td>0.835</td> <td>0.659</td> </tr> <tr> <td>resnest50d</td> <td>5</td> <td>kideny_1_dense</td> <td>kidney_3_dense</td> <td>1536x1536</td> <td>1920x1920</td> <td>0.903</td> <td>0.818</td> <td>0.599</td> </tr> <tr> <td>maxvit_tiny</td> <td>7</td> <td>kidney_1_dense, kidney_2(pseudo label), extra_data(pseudo label)</td> <td>kidney_3_dense</td> <td>1536x1536</td> <td>2048x2048</td> <td>0.901</td> <td>0.810</td> <td>0.623</td> </tr> <tr> <td>Ensemble</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>0.913</td> <td>0.824</td> <td>0.645</td> </tr> </tbody> </table> <p>Thanks for reading solution.</p> <blockquote> <p>Did you make the kidney region labels yourself?</p> </blockquote> <p>No, I used the dataset published by <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> .<br> <a href="https://www.kaggle.com/datasets/hengck23/blood-vessel-segmentation-kidney-mask" target="_blank">https://www.kaggle.com/datasets/hengck23/blood-vessel-segmentation-kidney-mask</a></p>
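<p>One way to implement the half-pixel-corrected resize described above (not necessarily the exact code used): build an affine transform that keeps pixel centers aligned instead of calling cv2.resize directly.</p>
<pre><code>import cv2
import numpy as np

def resize_centered(mask, scale):
    """Resize with an affine map x_new = scale * (x_old + 0.5) - 0.5.

    Resizing a hard mask down and back up with a plain resize can shift it
    toward the lower right; keeping pixel centers aligned avoids that drift.
    """
    h, w = mask.shape[:2]
    new_w, new_h = round(w * scale), round(h * scale)
    shift = 0.5 * scale - 0.5
    M = np.float32([[scale, 0, shift],
                    [0, scale, shift]])
    return cv2.warpAffine(mask, M, (new_w, new_h), flags=cv2.INTER_LINEAR)
</code></pre>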
SenNet + HOA - Hacking the Human Vasculature in 3D
1st Place Solution (code updated)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SenNet + HOA - Hacking the Human Vasculature in 3D <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks!<br> For experimenting, we used kidney_3_dense for validation and other data for training. And we used IOU for local validation. For submission, we used all the data for training. The correlation between local cv and lb is not good. And because of the lack of data, our strategy was to improve lb score while ensuring the local cv improve or remain the same. Since we used all the data for training, we selected the last three checkpoints of one experiment for submissions, the lb score for different checkpoints varies a lot even with similar cv (the selection was based on lb). We also tried swa and ema, but didn't work well.<br> Here's some results for reference. The models for cv and lb were trained with same setting only that the model for submission was trained with all data.</p> <table> <thead> <tr> <th></th> <th>model</th> <th>epochs</th> <th>channels</th> <th>custom loss</th> <th>3d rotation</th> <th>IOU</th> <th>public score</th> <th>private score</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>convnext tiny</td> <td>20</td> <td>1</td> <td></td> <td></td> <td>0.8116</td> <td>0.835</td> <td>0.732</td> </tr> <tr> <td>1</td> <td>convnext tiny</td> <td>20</td> <td>3</td> <td></td> <td></td> <td>0.8066</td> <td>0.852</td> <td>0.534</td> </tr> <tr> <td>2</td> <td>convnext tiny</td> <td>20</td> <td>3</td> <td>✓</td> <td></td> <td>0.8150</td> <td>0.889</td> <td>0.682</td> </tr> <tr> <td>3</td> <td>convnext tiny</td> <td>30</td> <td>3</td> <td>✓</td> <td>✓</td> <td>0.8153</td> <td>0.867</td> <td>0.835</td> </tr> <tr> <td>4</td> <td>convnext small</td> <td>30</td> <td>3</td> <td>✓</td> <td>✓</td> <td>0.8045</td> <td>0.848</td> <td>0.741</td> </tr> </tbody> </table> <p>Thanks! </p> <p>Thank you!</p> <ol> <li>Random 3d rotation brought little improvement for both cv and public score. but for private score, the improvement is huge (0.682-&gt;0.835).</li> <li>The custom loss gave us ~2% boost for local cv, no improvement for public or private lb. But we noticed that the custom loss brought more stability and consistency for public and private scores.</li> <li>Large inference size is critical for the final score. &gt;10% boost from 1536 to 3072 for both public and private scores.</li> </ol> <p>1) We haven't done any post submission yet. Here's some existing results of different thresholds for ensemble. We will do more probes later. I think one potiential reason for the large difference might be the inconsistent spacing of the channel dimension for 2.5d models, while the inconsistency of other dimensions is alleviated by random scaling augmentation(3d resize might help?). And the 3d rotation might alleviate this problem by resampling? I realized this problem near the end the competition but didn't have time to dig into it.<br> 2) Yes. We used single channel models in early experiments. Checking the submission history, the best single channel model scored 0.732/0.835 for private/public. But the single channel models are quite unstable and unreproducible, the scores ranging from 0.6 to 0.73 for private, 0.83~0.87 for public. 
For 2.5d models with custom loss, the private scores are about 0.67~0.69 for private and 0.86~0.88 for public.<br> 3) We were using convnext v1 without grn.</p> <table> <thead> <tr> <th></th> <th>infer size</th> <th>threshold</th> <th>public score</th> <th>private score</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>3072</td> <td>0.40</td> <td>0.897615</td> <td>0.751151</td> </tr> <tr> <td>2</td> <td>3072</td> <td>0.35</td> <td>0.895733</td> <td>0.767362</td> </tr> </tbody> </table> <p>For experimenting, we used kidney_3_dense for validation and other data for training. For submission, we used all the data for training.</p> <p>Thanks!</p> <ol> <li>We tried other normalization methods, but they did't work well for us.</li> <li>The GroupNorm is for gradient accumulation and the GELU is just an empirical choice.</li> <li>The augmentation scheme is found throught iterative refinement.</li> </ol> <p>We started with focal + dice loss, so no results for simple BCE or focal loss. And we discovered boundary loss in early experiments which brought minimal improvement(&lt;1%) for cv and public lb. A combination of focal + dice + boundary loss with a single channel 2d model gave a score of 0.732/0.835 for private/public lb. As for the custom loss, it didn't bring any improvement for public/private score, but the 2.5d models with custom loss seem more stable for both public/private score.</p> <p>First of all, we would like to thank Kaggle and the organizers for hosting such a great competition. And also thanks to <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> for the amazing posts, <a href="https://www.kaggle.com/junkoda" target="_blank">@junkoda</a> for the metric implementation and all other participants for sharing their experiments.</p> <h1>Overview</h1> <p>Our final submission is an ensemble of two 2.5d convnext tiny unet with 3 channels, and the only differences between these two models are augmentation and number of epochs. Actually the best scored submission is not the selected ensemble but one single model of the ensemble which is 0.835 on private lb.</p> <h1>Data Preparation</h1> <p>We used all training data <strong>including</strong> kidney_1_voi.</p> <ul> <li>Multiview slice (x, y, z)</li> <li>Normalization: No normalization, just <code>image = image / 65535.0</code></li> <li>Whole slice instead of tiles and all slices resized or cropped to 1536x1536. </li> <li>Augmentations:</li> </ul> <pre><code>A.Compose([ A.HorizontalFlip(=0.5), A.VerticalFlip(=0.5), A.Transpose(=0.5), A.Affine(scale={:(0.7, 1.3), :(0.7, 1.3)}, translate_percent={:(0, 0.1), :(0, 0.1)}, rotate=(-30, 30), shear=(-20, 20), =0.5), A.RandomBrightnessContrast(=0.4, =0.4, =0.5), A.OneOf([ A.Blur(=3, =0.2), A.MedianBlur(=3, =0.2), ], =1.0), A.OneOf([ A.ElasticTransform(=1, =50, =10, =1, =0.5), A.GridDistortion(=5, =0.1, =1, =0.5) ], =0.4), A.OneOf([ A.Resize(1536, 1536, cv2.INTER_LINEAR, =1), A.Compose([ RandomResize(1536, 1536, =0.5, =0.5, =1), A.PadIfNeeded(1536, 1536, =, =cv2.BORDER_REPLICATE, =1.0), A.RandomCrop(1536, 1536, =1.0) ], =1.0), ], =1.0), A.GaussNoise(=0.05, =0.2), ]) </code></pre> <ul> <li>Random 3D rotation to get slices that is not necessarily parallel to axes. 
The best scored submission used random 3d rotation for augmentation and trained for more epochs.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4240322%2F07e6746d787af196eaceb6acf5d336fe%2F3drot.png?generation=1707409376653348&amp;alt=media"></li> </ul> <h1>Modeling &amp; Training</h1> <ul> <li>We used unet from SMP with convnext tiny backbone, replaced BatchNorm and ReLU to GroupNorm and GELU and added an extra convolution stem. The input size for all models is 3x1536x1536.</li> </ul> <pre><code>self = nn( nn(in_channels, out_channels, , , ), (out_channels), ) </code></pre> <ul> <li>For loss function, we used 1.0 focal loss, 1.0 dice loss, 0.01 <a href="https://arxiv.org/abs/1812.07032" target="_blank">boundary loss</a> and 1.0 custom loss. The custom loss is inspired by <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>'s <a href="https://www.kaggle.com/competitions/blood-vessel-segmentation/discussion/456118#2583472" target="_blank">post</a> and <a href="https://www.kaggle.com/junkoda" target="_blank">@junkoda</a>'s <a href="https://www.kaggle.com/code/junkoda/fast-surface-dice-computation" target="_blank">metric implementation</a>.</li> </ul> <pre><code></code></pre> <ul> <li>For optimization, we used AdamW and CosineAnnealingLR from 1e-4 to 0 with warmup. All models were trained for 20 epochs with a batch size of 8 and 4 gradient accumulation steps, except for the model with 3d slice rotation augmentation which was trained for 30 epochs.</li> </ul> <h1>Inference</h1> <ul> <li>Inference on 3 axes with 8xTTA.</li> <li>We tried different resize methods for inference. For the best scored submission, all slices are simply resized to 3072x3072; for the selected submission, we used a dynamic scale factor that <code>(h*scale)*(w*scale)=3200*3200</code>.</li> <li>The threshold used for submission is 0.4, and the optimal threshold based on cv and lb is about 0.4~0.5.</li> <li><code>torch.compile()</code> gave about 2x acceleration so that we were able to inference with high resolution and TTAs.</li> </ul> <h1>What didn't work</h1> <ul> <li>3d models.</li> <li>External data and pseudo labels.</li> <li>Transformers.</li> <li>Stacking more slices (&gt;3) for 2.5d model.</li> </ul> <h1>Results</h1> <table> <thead> <tr> <th></th> <th><strong>Model</strong></th> <th><strong>Slice Rotation</strong></th> <th><strong>Inference size</strong></th> <th><strong>Public Score</strong></th> <th><strong>Private Score</strong></th> </tr> </thead> <tbody> <tr> <td>1</td> <td>convnext_tiny</td> <td></td> <td>3072</td> <td>0.889</td> <td>0.682</td> </tr> <tr> <td>2</td> <td>convnext_tiny</td> <td>✓</td> <td>3072</td> <td>0.888</td> <td>0.830</td> </tr> <tr> <td>3</td> <td>convnext_tiny</td> <td>✓</td> <td>3072</td> <td>0.867</td> <td><strong>0.835</strong></td> </tr> <tr> <td>4</td> <td>ensemble(1+2)</td> <td>-</td> <td>3200</td> <td><strong>0.898</strong></td> <td>0.744(selected)</td> </tr> <tr> <td>5</td> <td>ensemble(1+2)</td> <td>-</td> <td>3200(dynamic)</td> <td>0.895</td> <td>0.774(selected)</td> </tr> </tbody> </table> <h1>Links</h1> <ul> <li><a href="https://github.com/jing1tian/blood-vessel-segmentation" target="_blank">training code</a></li> <li>inference code<ul> <li><a href="https://www.kaggle.com/code/clevert/sennet-1st-place-solution" target="_blank">final submission ensemble 0.774106</a></li> <li><a href="https://www.kaggle.com/code/clevert/sennet-unet-convnext-3d-rotation" target="_blank">3d rotate single model 
0.835346</a></li></ul></li> </ul>
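<p>As a hedged sketch of the BatchNorm-to-GroupNorm and ReLU-to-GELU replacement mentioned above (the group count and the encoder name in the comment are illustrative, not our exact configuration):</p>
<pre><code>import torch.nn as nn

def swap_norm_act(module, groups=32):
    """Recursively replace BatchNorm2d with GroupNorm and ReLU with GELU."""
    for name, child in module.named_children():
        if isinstance(child, nn.BatchNorm2d):
            g = groups if child.num_features % groups == 0 else 1  # groups must divide channels
            setattr(module, name, nn.GroupNorm(g, child.num_features))
        elif isinstance(child, nn.ReLU):
            setattr(module, name, nn.GELU())
        else:
            swap_norm_act(child, groups)
    return module

# e.g. model = swap_norm_act(smp.Unet("tu-convnext_tiny", in_channels=3, classes=1))
</code></pre>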
SenNet + HOA - Hacking the Human Vasculature in 3D
2nd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SenNet + HOA - Hacking the Human Vasculature in 3D <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you very much. I will study "connected-components-3d cc3d.dust" as you suggested. It's beautiful.</p> <p>Final submissions were auto-selected. It's too difficult to choice from 0, 0, 0.043, 0.002…</p> <p>I checked my submissions.</p> <p>Version 12: clip + search large size chunk (&gt;20% of candidates) -&gt; private score: 0.756793, public score: 0.</p> <ul> <li>My program was able to find a large chunk of blood vessels in the private data, but could not find a large chunk in the public data.</li> <li>If the large chunk is not found, my program uses prediction. -&gt; My program could not find the blood vessels from the public data.</li> </ul> <p>Version 32: no clip + delete size &gt;= 100 chunks -&gt; private score: 0.759959, public score: 0.043141.</p> <p>Version 33: no clip + delete size ≥ 1000 chunks → private score: 0.742442, public score: 0.041478.</p> <ul> <li>Version 33 has a lower score, so the 100-1000 size chunks are blood vessels. -&gt; Without clipping, the large chunk are split into small unconnected chunks.</li> <li>However, the score does not change much. -&gt; My program found very few blood vessels in the private data.</li> </ul> <p>What happened? I think any of the following is happened.</p> <ol> <li>My program luckily overfit the public data.</li> <li>My program overfit the training data + the private data and the training data are very similar + the public data are too different.<br> And fitting to the public data will reduce accuracy on the private data?</li> <li>Or, surface-dice-metrics is the source of this confusion…</li> </ol> <p>Please post what do you think, great Kagglers.</p> <p>Thanks! Please give me time for checking.</p> <p>Thank you very much. I had updated my post.</p> <blockquote> <p>Public Test:<br> Continuous 3D part of a whole human kidney…<br> Private Test:<br> Continuous 3D part of a whole human kidney…</p> </blockquote> <p>What's happened?</p> <p></p> <p>What's happened? My name is written at 2nd place? I remember my public score was 0.43 and my place was 1052…</p> <h1>Overview</h1> <p>My solution consists of U-Net3D (128x128x32), threshold adjustment, and <strong>post-processing</strong> to remove unconnected vessels as they are false positives.</p> <ul> <li>data augmentation using random rotation (and position), same as 1st place solution.</li> <li>U-Net3D. I assumed that 3D would be more accurate because it provides more information. I think this assumption might be wrong, since the 1st place solution uses 2.5D, .</li> <li>Binary-focal loss. Since there are a few positive data.</li> <li>Adjusting threshold. Since the volume ratio of blood vessels are not so different betwwen persons, the threshold is set according to the ratio.</li> <li><strong>Post-processing</strong>. 
Since blood vessels are supposed to be connected, extract small chunks with depth-first-search and remove them.</li> </ul> <p><a href="https://github.com/tail-island/blood-vessel-segmentation" target="_blank">https://github.com/tail-island/blood-vessel-segmentation</a><br> <a href="https://www.kaggle.com/code/ojimaryoji/sennet-hoa-2nd-place-solution?scriptVersionId=159388443" target="_blank">https://www.kaggle.com/code/ojimaryoji/sennet-hoa-2nd-place-solution?scriptVersionId=159388443</a></p> <h1>Data</h1> <p>To make it easier cutting out the data, I created a 3D Numpy array and adjusted the scale. I created <em>all</em> and <em>dense</em> data because I was planning to do curriculum learning in the order of <em>all</em> to <em>dense</em>. However, since it took a long time to learn in my PC, I only trained on sparse data this time. Also, I did not normalize or clipping the data because I thought there should not be a big difference since the data is visible to the human eye.</p> <p><a href="https://github.com/tail-island/blood-vessel-segmentation/blob/main/src/create_volumetric_images.py" target="_blank">https://github.com/tail-island/blood-vessel-segmentation/blob/main/src/create_volumetric_images.py</a></p> <h1>Train</h1> <p>I generate data from random positions and rotations in each <em>n</em> epochs. To reduce data generation time, I used multiple processes.</p> <p><a href="https://github.com/tail-island/blood-vessel-segmentation/blob/main/src/dataset.py" target="_blank">https://github.com/tail-island/blood-vessel-segmentation/blob/main/src/dataset.py</a></p> <p>The neural network is U-Net3D.</p> <p><a href="https://github.com/tail-island/blood-vessel-segmentation/blob/main/src/model.py" target="_blank">https://github.com/tail-island/blood-vessel-segmentation/blob/main/src/model.py</a></p> <p>I used binary-focal loss. Optimizer is AdamW and learning rate is scheduled by cosine-decay.</p> <p><a href="https://github.com/tail-island/blood-vessel-segmentation/blob/main/src/train_0.py" target="_blank">https://github.com/tail-island/blood-vessel-segmentation/blob/main/src/train_0.py</a></p> <h1>Submit</h1> <p>Prediction is made by tiling. get_candidate() finds candidates with a given ratio and <strong>get_blood_vessels()</strong> removes small unconnected chunks.</p> <p><a href="https://github.com/tail-island/blood-vessel-segmentation/blob/main/src/submit.py" target="_blank">https://github.com/tail-island/blood-vessel-segmentation/blob/main/src/submit.py</a></p> <p>Searching the big blood vessel chunk (and clip) version, private score is 0.756793 and public score is <strong>0.000000</strong>…</p> <p><a href="https://github.com/tail-island/blood-vessel-segmentation/blob/main/src/submit_.py" target="_blank">https://github.com/tail-island/blood-vessel-segmentation/blob/main/src/submit_.py</a></p> <p>Other scores…</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5895718%2Ff7be4975a186ddc226fd463f32632d0a%2Fscores.png?generation=1707479299719710&amp;alt=media"></p> <h1>What's happend?</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5895718%2Fc3db76cc66c0f14a28e86393c735b925%2Fpublic-private.png?generation=1707470652290411&amp;alt=media"></p> <p>What's happened?</p>
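<p>The write-up removes unconnected chunks with a depth-first search; the same post-processing idea can be sketched with scipy's connected-component labelling (the size threshold is a placeholder, the submission comparison above uses 100 and 1000):</p>
<pre><code>import numpy as np
from scipy import ndimage

def drop_small_chunks(pred_mask, min_size=100):
    """Keep only connected components with at least `min_size` voxels.

    Blood vessels should form large connected structures, so tiny isolated
    chunks are treated as false positives.
    """
    structure = ndimage.generate_binary_structure(3, 3)   # 26-connectivity
    labels, _ = ndimage.label(pred_mask, structure=structure)
    sizes = np.bincount(labels.ravel())
    keep = sizes &gt;= min_size
    keep[0] = False                                       # background label
    return keep[labels]
</code></pre>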
SenNet + HOA - Hacking the Human Vasculature in 3D
41st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SenNet + HOA - Hacking the Human Vasculature in 3D <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Firstly, we would like to express our gratitude to Kaggle and the organizers for hosting this exceptional competition. Through participating in this contest, we have gained a deeper understanding of the challenges and methodologies involved in medical image recognition.</p> <h2>Introduction</h2> <p>We submitted separate solutions within our team.</p> <ul> <li>I submitted an ensemble model of se_resnext101_32x4d and Vision Transformer (mit_b2), which achieved a score of <strong>0.834</strong> on the public leaderboard. The private leaderboard score was <strong>0.586</strong>.</li> <li><a href="https://www.kaggle.com/ryosukesaito" target="_blank">@ryosukesaito</a> submitted an ensemble model of EfficientNet and SE-ResNeXt which achieved a score of <strong>0.857</strong> on the public leaderboard. The private leaderboard score was <strong>0.519</strong>.</li> <li>The high public leaderboard score achieved by <a href="https://www.kaggle.com/ryosukesaito" target="_blank">@ryosukesaito</a>’s submission might have been a contributing factor to our ability to submit my somewhat ambitious notebook, possibly leading to our winning a silver medal.</li> </ul> <h2>My (@jooott) Solution</h2> <h3>Overview</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2640938%2F580d1b944e38bb1924f6ec8480ac2b65%2F5.PNG?generation=1708327636242492&amp;alt=media"></p> <h3>Key points</h3> <p>I struggled significantly with stabilizing the training process.</p> <ul> <li>To address this, I used Accumulate Grad Batches to effectively increase the batch size to 128, which stabilized the training.</li> <li>A major factor in the significant improvement in score was the application of stronger data augmentation. 
The data augmentation strategy was inspired by <a href="https://www.kaggle.com/competitions/vesuvius-challenge-ink-detection/discussion/417496" target="_blank">the 1st place solution of the Vesuvius Challenge - Ink Detection</a>.</li> <li>I also think that scaling up the training images from 512px to 1024px contributed to the increase in score.</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2640938%2Fe3e49e0014af22209252039314919b6d%2F10.PNG?generation=1708327710646310&amp;alt=media"></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2640938%2F01cf2045817c40092ded7e721c4ff52e%2F6.PNG?generation=1708327729992246&amp;alt=media"></p> <pre><code># NOTE: the keyword argument names were missing in the original post; they have been
# restored from the albumentations API, keeping the posted values as-is.
train_transform = A.Compose(
    [
        A.RandomScale(
            scale_limit=(1.0, 1.20),
            interpolation=cv2.INTER_CUBIC,
            p=0.1,
        ),
        A.RandomResizedCrop(
            image_size, image_size, scale=(0.8, 1.0), p=1
        ),
        A.RandomBrightnessContrast(p=0.75),
        A.ShiftScaleRotate(p=0.75),
        A.OneOf([
            A.GaussNoise(var_limit=[10, 50]),
            A.GaussianBlur(),
            A.MotionBlur(),
        ], p=0.4),
        A.CoarseDropout(
            max_holes=1,
            max_width=int(image_size * 0.1),
            max_height=int(image_size * 0.1),
            mask_fill_value=0,
            p=0.5),
        A.CLAHE(p=0.2),
        A.GridDistortion(num_steps=5, distort_limit=0.3, p=0.05),
        ToTensorV2(transpose_mask=True),
    ]
)
</code></pre> <h2>Muku's (@ryosukesaito) solution</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2640938%2Fa10c8d4428a2dd305f8f94b3bddbb6c7%2FUntitled%20(4).png?generation=1708327848313831&amp;alt=media"></p> <h3>key points</h3> <ul> <li><p>In my architecture, Detection/Segmentation of the kidney region is performed before predicting the blood vessel area.</p> <ul> <li>Detection contributed to inference speedup (especially in the yz/zx direction), since it is possible to skip vessel segmentation in frames where no kidney is detected, and to reduce image size by cropping.</li> <li>Segmentation masks were used to reduce FP outside the kidney.</li> <li>For both annotations, I used LangSAM (<a href="https://github.com/luca-medeiros/lang-segment-anything" target="_blank">luca-medeiros/lang-segment-anything: SAM with text prompt</a>). This allowed me to prepare annotation data with a few manual adjustments.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2640938%2F3f044b330cb47140058deac70a7db57c%2FUntitled%20(5).png?generation=1708327892920221&amp;alt=media"></li> <li>I use YOLOv8n for Detection and EfficientNet-B0 for Segmentation.</li></ul></li> <li><p>Various pre/post processing improved LB/PB scores slightly, but steadily.</p> <ul> <li><p>In the yz/zx axis images, blood vessels at the edge may be cut off. Since inference accuracy was poor in this area, I improved it by pseudo-closing the vessels with mirror-padding before inference.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2640938%2Fb30520c1fb82b034bf913908cf65487d%2FUntitled%20(6).png?generation=1708327958129459&amp;alt=media"></p></li> <li><p>After binarization of the results, defects may occur in the vascular prediction region as shown below.
For this reason, morphological closing and fillPoly processing were added as post-processing steps (a small sketch of this idea is shown after this list).<br> These contributed to a slight score improvement in CV/LB/PB.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2640938%2F84de68a4219ff31a9f3623c4e83fc8f2%2FUntitled%20(7).png?generation=1708328014528398&amp;alt=media"></p></li></ul></li> <li><p>In my experiments, ideas that contribute to generalization ability (strong augmentation, pseudo labeling, etc…) could not be adopted as final submissions, because they resulted in a decrease in CV/LB…<br> In hindsight, however, I should not have paid so much attention to the unstable CV/LB, as the sample size was not large enough in this competition.</p></li> </ul>
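<p>A minimal sketch of the closing + fillPoly post-processing mentioned above, assuming <code>mask</code> is a binary (0/1) uint8 slice. The kernel size and the use of external contours are illustrative assumptions, not the team's exact settings.</p> <pre><code>import cv2
import numpy as np


def fill_vessel_defects(mask: np.ndarray, kernel_size: int = 5) -&gt; np.ndarray:
    # 1) Morphological closing to bridge small gaps in the predicted vessels.
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    closed = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    # 2) Fill the remaining defects inside each predicted region with fillPoly.
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    filled = closed.copy()
    cv2.fillPoly(filled, contours, 1)
    return filled
</code></pre>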
SenNet + HOA - Hacking the Human Vasculature in 3D
474th Ranked Solution for the SenNet + HOA - Hacking the Human Vasculature in 3D Challenge
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SenNet + HOA - Hacking the Human Vasculature in 3D <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Thank You</h1> <p>I first of all would like to thanks all the solution writers, competition participants, competition organizers and community members for hosting another interesting challenge useful for the mankind and important to build the analytical skills necessary for a quantitative competition.</p> <h1>Overview</h1> <p>The primary goal as prescribed by the researchers of this competition is to segment blood vessels. The objective is to create a model trained on 3D Hierarchical Phase-Contrast Tomography (HiP-CT) data from human kidneys to help complete a picture of vasculature throughout a body. Currently, human expert annotators manually trace the vascular structures — which is a slow process. Even with expert annotators, each new dataset takes 6+ months to complete. Machine learning approaches using this manual data do not generalize well to new datasets because of the variability of both human anatomy and to changes in the image quality as HiP-CT technology continues to improve and change.</p> <h2>The Data</h2> <p>The dataset is split into three main sub-categories :- train, test and sample submission. We break down the training data to get more accurate results as images and labels, and obtain the exact pixel breakdown, and similarly divide the testing data into images and labels too. The sample submission file is meant to provide a basic illustration ahead of the main submission file.</p> <h2>Submission Format</h2> <p>For each id in the test set, you must predict rle, a run-length encoded instance segmentation mask, where id represents {dataset}_{slice} for an image with path test/{dataset}/images/{slice}.tif. Represent the RLE for an empty mask as 1 0.</p> <p>The submission should contain a header and have the following format:<br> '<br> id,rle<br> kidney_5_0,1 1 100 10<br> kidney_5_1,1 1 100 10<br> kidney_6_0,1 0<br> kidney_6_1,1 0<br> '</p> <h2>The Model - Weighted Ensemble</h2> <p>The main model used to generate accurate predictions is a UNet built on a ResNet50 backbone for faster computation and greater accuracy. The first part is to create a library importing code base where we define all the necessary library imports. </p> <p>We move to the next important part which is defining the main model to create a solid baseline for further action. We define several parameters the first being the model paths and then moving to defining the model name, image size, input size, tile size and most importantly the threshold value needed for further calculations. 
</p> <p>'<br> model_path_i = 9 <br> model_path_i9 = 11</p> <p>class CFG:<br> model_name = 'Unet'<br> backbone = 'se_resnext50_32x4d'<br> in_chans = 1<br> image_size = 1024<br> input_size = 1024<br> tile_size = image_size<br> stride = tile_size // 4<br> drop_egde_pixel = 0<br> target_size = 1<br> chopping_percentile = 2e-4</p> <pre><code> = = = = [, , ] = </code></pre> <p>'</p> <p>We further add code to define various model paths in addition to our model to define our data distribution pipeline :-</p> <p>'<br> model_path = [<br> "/kaggle/input/2-5d-cutting-model-baseline-training/se_resnext50_32x4d_19_loss0.12_score0.79_val_loss0.25_val_score0.79.pt",<br> "/kaggle/input/training-6-512/se_resnext50_32x4d_19_loss0.09_score0.83_val_loss0.28_val_score0.83.pt",<br> "/kaggle/input/training-6-512/se_resnext50_32x4d_19_loss0.05_score0.90_val_loss0.25_val_score0.86.pt",<br> "/kaggle/input/training-6-512/se_resnext50_32x4d_19_loss0.05_score0.89_val_loss0.24_val_score0.86_midd.pt",<br> "/kaggle/input/training-6-512/se_resnext50_32x4d_24_loss0.05_score0.90_val_loss0.23_val_score0.88_midd.pt",<br> "/kaggle/input/training-6-512/se_resnext50_32x4d_24_loss0.04_score0.91_val_loss0.23_val_score0.88_midd.pt", # 25 025 rot 512 center<br> "/kaggle/input/blood-vessel-model-1024/se_resnext50_32x4d_24_loss0.10_score0.90_val_loss0.16_val_score0.85_midd_1024.pt",<br> "/kaggle/input/blood-vessel-model-1024/se_resnext50_32x4d_24_loss0.10_score0.90_val_loss0.12_val_score0.88_midd_1024.pt", # lr = 8e-5<br> "/kaggle/input/blood-vessel-model-1024/se_resnext50_32x4d_24_loss0.91_score0.09_val_loss0.91_val_score0.09_midd_1024.pt", # 60e-5 <br> "/kaggle/input/sn-hoa-8e-5-27-rot0-5/se_resnext50_32x4d_26_loss0.10_score0.90_val_loss0.12_val_score0.88_midd_1024.pt", # 8e-5-27-rot0-5<br> "/kaggle/input/sn-hoa-8e-5-27-rot0-5/se_resnext50_32x4d_30_loss0.10_score0.90_val_loss0.13_val_score0.88_midd_1024.pt",<br> "/kaggle/input/sennet-kidney-1-and-3/model_real_23.pt" # 31 8e 05<br> ]<br> '</p> <p>The next part is where we define the code to make changes to our original model, i.e., make it a custom model for use in present and future.</p> <p>The code for the same is below :-</p> <p>'<br> class CustomModel(nn.Module):<br> def <strong>init</strong>(self, CFG, weight=None):<br> super().<strong>init</strong>()</p> <pre><code> . .model smp.( encoder_name.backbone, encoder_weightsweight, in_channels.in_chans, classes.target_size, activation, ) .batch .batch def forward_(, image): output .model(image) output[:, ] def forward(, x: tc.): x x.to(tc.float32) x norm_with_clip(x.reshape(, x.shape[:])).reshape(x.shape) .input_size .image_size: x nn.functional.interpolate(x, size(.input_size, .input_size), mode'bilinear', align_corners) shape x.shape x [tc.rot90(x, ki, dims(, )) i range()] x tc.cat(x, dim) with autocast(): with tc.no_grad(): x [.forward_(x[i .batch:(i ) .batch]) i range(x.shape[] x tc.cat(x, dim) x x.sigmoid() x x.reshape(, shape[], shape[:]) x [tc.rot90(x[i], ki, dims(, )) i range()] x tc.stack(x, dim).mean() .input_size .image_size: x nn.functional.interpolate(x[], size(.image_size, .image_size), mode'bilinear', align_corners)[] x </code></pre> <p>def build_model(weight=None):<br> load_dotenv()</p> <pre><code> = CustomModel(CFG, weight) .cuda() </code></pre> <p>'<br> The key features in the code defined here are the forward functions which define the type of interpolation, linearity and also the reshaping and rotation criterions. 
Furthermore, we define the rotation vectors for our image and labelling, from 1024 to 1024*1024 to define various parameters in consideration. </p> <p>We further add code to increase our functionalities in our code as follows :-</p> <p>'<br> def rle_encode(mask):<br> pixel = mask.flatten()<br> pixel = np.concatenate([[0], pixel, [0]])<br> run = np.where(pixel[1:] != pixel[:-1])[0] + 1<br> run[1::2] -= run[::2]<br> rle = ' '.join(str(r) for r in run)<br> if rle == '':<br> rle = '1 0'<br> return rle</p> <p>def min_max_normalization(x: tc.Tensor) -&gt; tc.Tensor:<br> """input.shape=(batch,f1,…)"""<br> shape = x.shape<br> if x.ndim &gt; 2:<br> x = x.reshape(x.shape[0], -1)</p> <pre><code>min_ = x.(=-, keepdim=True)[] max_ = x.(=-, keepdim=True)[] min_.mean() == and max_.mean() == : x.() x = (x - min_) / (max_ - min_ + ) x.() </code></pre> <p>def norm_with_clip(x: tc.Tensor, smooth=3e-5):<br> dim = list(range(1, x.ndim))<br> mean = x.mean(dim=dim, keepdim=True)<br> std = x.std(dim=dim, keepdim=True)<br> x = (x - mean) / (std + smooth)<br> x[x &gt; 5] = (x[x &gt; 5] - 5) * 1e-3 + 5<br> x[x &lt; -3] = (x[x &lt; -3] + 3) * 1e-3 - 3<br> return x</p> <p>def add_edge(x: tc.Tensor, edge: int):<br> mean_ = int(x.to(tc.float32).mean())<br> x = tc.cat([x, tc.ones([x.shape[0], edge, x.shape[2]], dtype=x.dtype, device=x.device) * mean_], dim=1)<br> x = tc.cat([x, tc.ones([x.shape[0], x.shape[1], edge], dtype=x.dtype, device=x.device) * mean_], dim=2)<br> x = tc.cat([tc.ones([x.shape[0], edge, x.shape[2]], dtype=x.dtype, device=x.device) * mean_, x], dim=1)<br> x = tc.cat([tc.ones([x.shape[0], x.shape[1], edge], dtype=x.dtype, device=x.device) * mean_, x], dim=2)<br> return x<br> '</p> <p>Functionalities such as the RLE code, min-max normalization, normalize w/ clip and add-edge define the additional features that this code provides for our ease in operating with the image pixel data. 
</p> <p>Further, we define a data pipeline to add data into our system with a pipeline feature to ensure no data leakage and high efficiency in data processing.</p> <p>We finally define the output format before defining the submission code to consolidate our work, with the code for output as follows :-</p> <p>'<br> def get_output(debug=False):<br> outputs = []</p> <pre><code>if debug: paths = [] else: paths = () outputs = [[], []] for path in paths: x = (path, ) labels = tc.(x, dtype=tc.uint8) mark = (x, path).() for axis in [, , ]: debug_count = if axis == : x_ = x labels_ = labels elif axis == : x_ = x.(, , ) labels_ = labels.(, , ) elif axis == : x_ = x.(, , ) labels_ = labels.(, , ) if x.shape[] == and axis != : break dataset = (x_, path) dataloader = (dataset, batch_size=, shuffle=False, num_workers=) shape = dataset.x.shape[-:] x1_list = np.(, shape[] + CFG.tile_size - CFG.tile_size + , CFG.stride) y1_list = np.(, shape[] + CFG.tile_size - CFG.tile_size + , CFG.stride) for img, index in (dataloader): img = img.() img = (img[], CFG.tile_size // )[None] mask_pred = tc.(img[:, ], dtype=tc.float32, device=img.device) mask_count = tc.(img[:, ], dtype=tc.float32, device=img.device) indexs = [] chip = [] for y1 in y1_list: for x1 in x1_list: x2 = x1 + CFG.tile_size y2 = y1 + CFG.tile_size indexs.([x1 + CFG.drop_egde_pixel, x2 - CFG.drop_egde_pixel, y1 + CFG.drop_egde_pixel, y2 - CFG.drop_egde_pixel]) chip.(img[..., x1:x2, y1:y2]) y_preds = model.(tc.(chip)).(device=) if axis == CFG.axis_second_model: y_preds = ( * y_preds + * model9.(tc.(chip)).(device=)) if CFG.drop_egde_pixel: y_preds = y_preds[..., CFG.drop_egde_pixel:-CFG.drop_egde_pixel, CFG.drop_egde_pixel:-CFG.drop_egde_pixel] for i, (x1, x2, y1, y2) in (indexs): mask_pred[..., x1:x2, y1:y2] += y_preds[i] mask_count[..., x1:x2, y1:y2] += mask_pred /= mask_count mask_pred = mask_pred[..., CFG.tile_size // :-CFG.tile_size // , CFG.tile_size // :-CFG.tile_size // ] labels_[index] += (mask_pred[] * * CFG.axis_w[axis]).(tc.uint8).() if debug: debug_count += plt.() plt.(img[, CFG.in_chans // ].().().()) plt.() plt.(mask_pred[].().().()) plt.() if debug_count &gt; : break outputs[].(labels) outputs[].(mark) return outputs </code></pre> <p>'</p> <p>We finally define the submission data code as follows :-</p> <p>'<br> for index in range(len(ids)):<br> id = ids[index]<br> i = 0</p> <pre><code> x output: index &gt;= (x): index -= (x) += : break mask_pred = (output &gt; TH)() mask_pred2 = (mask_pred, , image_size=) mask_pred = mask_pred2() not is_submit: plt() plt(mask_pred) plt() debug_count += debug_count &gt; : break rle = (mask_pred) submission_df( pd(data={ : id, : rle, }, index=) ) </code></pre> <p>submission_df = pd.concat(submission_df)<br> submission_df.to_csv('submission.csv', index=False)<br> '</p> <h1>Future Scope</h1> <p>The future of this model is to add more features into the ResNet50 baseline model to enhance the efficiency of the system, and also certain vector rotations can lead to loss in data accuracy after a certain number of operations, so we need to define better methods for optimizing the number of rotations. Also, the model used here is too simple and needs to be made more efficient to enable faster computation and reduced operations.</p>
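<p>As a concrete illustration of the rotation handling described in the forward functions above, here is a minimal, self-contained sketch of 90-degree rotation TTA: predict on the four rotations of a tile, rotate the outputs back, and average. This is not the notebook's exact code; <code>model</code> is assumed to return a single-channel logit map of shape (B, 1, H, W).</p> <pre><code>import torch


def predict_with_rot90_tta(model, x: torch.Tensor) -&gt; torch.Tensor:
    preds = []
    for k in range(4):
        xr = torch.rot90(x, k=k, dims=(-2, -1))              # rotate the input tile
        with torch.no_grad():
            yr = model(xr).sigmoid()                          # predict on the rotated tile
        preds.append(torch.rot90(yr, k=-k, dims=(-2, -1)))    # rotate the prediction back
    return torch.stack(preds, dim=0).mean(dim=0)              # average the four views
</code></pre>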
SenNet + HOA - Hacking the Human Vasculature in 3D
4th place solution. Boundary DoU Loss is all you need!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SenNet + HOA - Hacking the Human Vasculature in 3D <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks!<br> Not exactly. <br> For the 2d model, during the training, I performed 512x512 non-empty mask random cropping of images. During the epoch, I did 2 such crops for each image from all views. In such a setup, one epoch was approximately 1000 batches with batch size 32 and took 20 minutes on 2 RTX 4090.</p> <p>For the 3d model, the cropping was balanced 50/50 of empty/non-empty 192x192x192 crops. One epoch here was 2000 samples with batch size 4. In this case an epoch took 6 minutes. </p> <p>thank you!</p> <p>Great question! <br> I discovered this loss early in the competition and switched after a few weeks. By that time I did all my experiments with the efficientnet_b3.<br> So on my CV I got +2%, on the public LB the boost was ~1.5%, while on the private leaderboard I’ve got +5%</p> <p>Thanks a lot!<br> And thank you for your questions. </p> <ul> <li>Regarding the inference. It’s just an empirical finding. It boosted the score on both CV and LB by ~0.003.</li> <li>Iterative process. I started with the severe augmentation scheme, but investigating the learning curves I noticed heavy underfitting. So I dropped everything and started to add augmentations one by one. </li> <li>Same here. It’s not about tuning the number of epochs, it’s more about tuning the number of batches till plateau. Usually, I start with 100k-250k batches. In this approach “the epoch” is just how often you want to check the validation score. </li> </ul> <p>I guess both. But for this specific task and the surface Dice metric it fit perfectly, so the boost is so huge. </p> <p>Luck + some tricks in the training and inference process. </p> <p>I was not able to select the top model based on only two data points (local CV and public LB), but at some point the shakeup didn’t impact me much. </p> <p>Yeah, so with the usage of pseudos I was able to get around 2% on CV, 1% on the public LB, and 4% on private LB. </p> <p>This normalization is based not on a single image but on a stack of images. 
<br> First, you calculate the stats for the stack (kidney_1, kidney_2, etc), each stack will have its own stats.</p> <p><code>xmin = np.percentile(volume, low)</code><br> <code>xmax = np.max([np.percentile(volume, high), 1])</code></p> <p>and afterward apply:<br> <code> image = (image - xmin) / (xmax - xmin)</code><br> <code>image = np.clip(image, 0, 1) </code></p> <p>First of all, I would like to start my solution description with a few important words:</p> <p><em>I would like to thank the Armed Forces of Ukraine, the Security Service of Ukraine, Defence Intelligence of Ukraine, and the State Emergency Service of Ukraine for providing safety and security to participate in this great competition, complete this work, and help science, technology, and business not to stop but to move forward.</em></p> <h1>Context</h1> <ul> <li>Business context: <a href="https://www.kaggle.com/competitions/blood-vessel-segmentation" target="_blank">https://www.kaggle.com/competitions/blood-vessel-segmentation</a> </li> <li>Data context: <a href="https://www.kaggle.com/competitions/blood-vessel-segmentation/data" target="_blank">https://www.kaggle.com/competitions/blood-vessel-segmentation/data</a></li> </ul> <h1>Overview of the approach:</h1> <p>My final model is a mixture of 2d and 3d models with d4 tta. For the 2d model, the multiview tta was applied. All models were trained in a 2-fold setup with kidney_2 and kidney_3_dense selected as validation sets. The ensembling was performed with equal weights for both 2d and 3d models.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F207760%2F0e15deeafd39981c337888fdaf27e2c9%2FScreenshot%202024-02-07%20at%2002.40.31.png?generation=1707266453616337&amp;alt=media"></p> <h1>Details of the submission</h1> <h2>Data preparation and training data and validation scheme</h2> <p>All final (3d and 2d) models were trained on kidney_1_dense, kidney_2, kidney_3_dense, kidney_3_sparse and pseudo labels <a href="http://human-organ-atlas.esrf.eu" target="_blank">50um_LADAF-2020-31_kidney_pag-0.01_0.02_jp2_</a>. Initially, I used slice-wise normalization to normalize images but later switched to stack-wise normalization based on percentiles.</p> <p>The 2D model was trained in a multiview setup: all images were stacked in a tensor and sliced in different axes afterward. During the training, the set of augmentations and sampling strategy was crucial. The weighted sampling was based on sparsity percentage: dense samples had a weight of 1, while sparse samples had a weight equal to their sparsity. For pseudo labels, I chose the same weight as for kidney_2, e.g.: </p> <pre><code>kidney_1_dense: , kidney_2: , kidney_3_dense: , kidney_3_sparse: , 50um_LADAF--31_kidney_pag-_jp2_: . </code></pre> <p>The augmentation scheme was the next one, with a chance of 0.5 CutMix augmentation being applied. The cropping was performed from the same organ and the same projection axis. Afterward, on top of CutMix, the next augmentation pipeline was applied:</p> <pre><code>A.Compose( [ A.PadIfNeeded(*crop_size), A.CropNonEmptyMaskIfExists(*crop_size, p=), A.ShiftScaleRotate(scale_limit=), A.HorizontalFlip(p=), A.VerticalFlip(p=), A.RandomRotate90(p=), A.OneOf([ A.RandomBrightnessContrast(), A.RandomBrightness(), A.RandomGamma(), ],p=,), ],p=,) </code></pre> <p>The crop size was set to 512. I’ve also tried higher resolution, but it performs +- the same result. 
</p> <p>I did some experiments with 2.5d approaches (3 and 5 channels), but they produced the same result or worse. </p> <p>The 3d model augmentation scheme contained only d4 augmentations and random crops. The cropping was performed with a 0.5 probability of an empty mask. This was motivated by false positives that appeared outside the kidney volume. This could be improved by incorporating two-class 3D segmentation, but I didn’t have much time and resources to perform such an experiment. Thus, I decided to create a post-processing that would handle this. <br> The crop size for the 3d model was 192x192x192.</p> <p>Both models were trained in a 2-fold setup where as validation, I used kidney_2 (fold_1) and kidney_3_dense (fold_0). Removal of kidney_1 from the training set caused performance degradation in both CV and LB, so I dropped the fold_2 and didn't perform training in that setup.</p> <h2>Model setup</h2> <p>The best results I was able to get were with the efficientnet family models with UnetPlusPlus decoder and SCSE attention from the segmentation_models_pytorch library. I’ve tried the resnet50 model, like it was mentioned in the discussion section, different transformers and seresnext models, but could not overcome the performance of efficientnet-b5 (which performed the best on both CV and LB). On my local validation, I was able to get a similar score with the efficientnet_b7 and mit_b5 encoders, but on the LB the score was significantly lower.<br> The training was performed for 30 epochs with a Cosine LR scheduler going from 3e-4 to 1e-6. I saved the top 3 checkpoints and used the best-last checkpoint for the submission.<br> The model efficientnet_b5_UnetPlusPlus trained in such a setup was able to score 0.878 on the public LB and 0.714 on the private LB at a 0.05 threshold. </p> <p>Here yellow is TP, green is FP, and red is FN.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F207760%2F0484711261976f944aca4341305752f2%2FindividualImage.png?generation=1707265123800657&amp;alt=media"><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F207760%2F157fa4895ef81f6e1fc0dbb8e685c17d%2FindividualImage-2.png?generation=1707265135912886&amp;alt=media"></p> <p>The 3d model was heavily inspired by the nnUnet model architecture and was pretty much the same. Instead of the native nnUnet model, I used DynUnet from the monai library with almost default configuration and trained it in almost the same setup as for nnUnet. As the optimizer, I used SGD with initial LR 0.01 and the Cosine Annealing LR scheme instead of LinearLR and trained for 500 epochs with 2000 samples per epoch. <br> This model scored 0.869 (0.868 and 0.866 -- 0 and 1 folds respectively) on the public LB and 0.694 on the private LB (0.758 and 0.663 -- 0 and 1 folds respectively). </p> <p>Both models were trained using the BoundaryDOULoss (<a href="https://arxiv.org/pdf/2308.00220.pdf" target="_blank">https://arxiv.org/pdf/2308.00220.pdf</a>), which performed the best. I’ve tried to modify it to perform better on sparse data but failed. </p> <h2>Pseudo labeling</h2> <p>Based on the preprint, I downloaded the additional data from the <a href="http://human-organ-atlas.esrf.eu" target="_blank">http://human-organ-atlas.esrf.eu</a> site (2 datasets). It appeared that one of the datasets overlaps with kidney_3, so I dropped it to prevent leakage. I used the other one to generate pseudo labels. 
For pseudo labeling, I used an ensemble of 2d models (efficientnet-b5 and efficientnet-b6 with UnetPlusPlus) trained with the same setup but without CutMix. The correct setup of CutMix as well as the 3d model I was able to discover close to the competition deadline, so I didn’t retrain the original ensemble and stick to the first version of pseudos. </p> <h2>Inference setup and Post-processing</h2> <p>The inference for both models was performed using sliding_window_inference from monai library. Additionally, for 2d model I performed multi-view tta, which helped to detect small vessels and improve overall performance. </p> <p>For the 2d model, the crop size was 800 pix, while for the 3d – 256 pix with 0.25 overlap and Gaussian merging. All models used d4_transform from ttach library. I’ve forked the ttach repository and implemented the logic for 3d images, but the inference time increased significantly, and there was no major boost in performance, so I’ve sticked with 2d d4_transform for both 2d and 3d models :)</p> <p>As I mentioned before, the 3d model had decent performance on the non-empty cubes, while empty ones were confusing the model. To handle this issue, decided to experiment with post-processing. The idea was the next one: let's try to find ROI where the vessels were presented. Since the 2d model didn’t have such a problem I’ve decided to find a bounding polygon for vessels for each 2d slice. Having a mask of ROI, I multiplied it with 3d model predictions and got a boost from 0.869 to 0.881 public LB and 0.701 private LB for a single 3d model.</p> <p>Ensembling the 2d model and 3d model predictions with weights 1 and 1, I was able to improve the score from 0.881 to 0.884 on the public LB and 0.712 on the private LB.</p> <p>Another post-processing approach that I’ve tried is to use Canny filters from cv2 to segment the kidney. This segmentation algorithm was not perfect, but applying such post-processing boosted my score from 0.884 to 0.892 on the public LB while failing on the private LB, scoring just 0.313.</p> <h2>What didn’t work</h2> <ul> <li>nnUnet out of the box. At the beginning of the challenge and after the pre-print reading, I tried to reproduce the result with nnUnet. The local score was promising, but the LB was 0. My intuition behind this issue related to the data normalization and spacing (scale), but I didn’t try to fix it and decided to build my own solution.</li> <li>BCE and Focal Loss.</li> <li>Transformers in both 2d and 3d model</li> <li>Zoom and brightness augmentation for 3d images</li> <li>Pseudo on top of sparse datasets. I’ve tried to fulfill the sparsity of the dataset by pseudo labeling and aggregation, but it didn’t improve the score.</li> <li>Additional projections. I’ve performed experiments with 2d models and additional slices generated from the 3d stack, but LB performance dropped by 20% while CV was about the same. </li> <li>Auxiliary outputs such as distance transform or center of mass. </li> <li><strong>and the most important: validation</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F207760%2Fcd379db59f3575778ebc4eca63ef0189%2FScreenshot%202024-02-07%20at%2002.29.01.png?generation=1707265761683297&amp;alt=media"></li> </ul> <p>P.S. If you were able to read all of this, the top score on the private LB was a simple mix of 2d and 2.5d models with 1 and 3 channels :) </p> <p>P.P.S. 
Thank you for reading!</p> <h2>Links</h2> <ul> <li><a href="https://www.kaggle.com/datasets/igorkrashenyi/50um-ladaf-2020-31-kidney-pag-0-01-0-02-jp2" target="_blank">Pseudo labels </a></li> <li><a href="https://github.com/burnmyletters/blood-vessel-segmentation-public" target="_blank">Source code</a></li> <li>Inference code <a href="https://www.kaggle.com/code/igorkrashenyi/4th-place-solution/notebook" target="_blank">https://www.kaggle.com/code/igorkrashenyi/4th-place-solution/notebook</a> + <a href="https://www.kaggle.com/code/igorkrashenyi/fork-of-multiview-2-5-sennet-hoa-inference-v3" target="_blank">https://www.kaggle.com/code/igorkrashenyi/fork-of-multiview-2-5-sennet-hoa-inference-v3</a></li> </ul> <p>added the source code to the post </p> <p><a href="https://www.kaggle.com/competitions/blood-vessel-segmentation/discussion/475052#2640504" target="_blank">https://www.kaggle.com/competitions/blood-vessel-segmentation/discussion/475052#2640504</a></p>
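<p>For readers who want to reproduce the ROI post-processing described above (building a bounding polygon around the 2d model's vessel predictions on each slice and using it to mask the 3d model's output), here is a minimal sketch. The convex hull is an assumption, since the write-up only says "bounding polygon", and the threshold value is illustrative.</p> <pre><code>import cv2
import numpy as np


def roi_mask_from_2d(preds_2d: np.ndarray, threshold: float = 0.05) -&gt; np.ndarray:
    # preds_2d: (Z, H, W) probabilities from the 2d model.
    roi = np.zeros_like(preds_2d, dtype=np.uint8)
    for z in range(preds_2d.shape[0]):
        points = cv2.findNonZero((preds_2d[z] &gt; threshold).astype(np.uint8))
        if points is None:
            continue
        hull = cv2.convexHull(points)          # bounding polygon of the 2d vessels
        cv2.fillConvexPoly(roi[z], hull, 1)    # rasterize it as a binary ROI
    return roi


# preds_3d_masked = preds_3d * roi_mask_from_2d(preds_2d)
</code></pre>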
SenNet + HOA - Hacking the Human Vasculature in 3D
5th place solution - 3D interpolation is all you need (updated with code)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SenNet + HOA - Hacking the Human Vasculature in 3D <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I want to express my deepest gratitude to the organizers of this competition. The reason I mostly compete in CV competitions is because I love it. Especially in medical competitions such as this one. So I'm really happy we managed to work with such a great technology (resolution is insane). </p> <h3>Validation</h3> <p>Initially I thought it's gonna be a challenge validation-wise, since we simply don't have enough data for reliable validation. To make the validation without leakage and similar to test, I decided to train my pipelines in 2 ways: </p> <ul> <li>Take kidney_1 as base for trainings, kidney_3 for validation</li> <li>Take kidney_3 as base for trainings, kidney_1 for validation</li> </ul> <p>Since test is densely annotated, I wanted to compute metrics only on densely annotated kidneys, which eliminated kidney_2 from the discussion. </p> <h3>Data</h3> <p>I always believe that data is the key. So I tried my hardest to utilize the additional datasets provided by organizers at <a href="https://human-organ-atlas.esrf.eu" target="_blank">Human Organ Atlas</a>. </p> <p>In the end, I decided to use to following with the help of pseudo-labeling:</p> <ul> <li>LADAF-2020-31 kidney</li> <li>LADAF-2020-27 spleen</li> </ul> <p>In other words, after examining the pseudo annotations on spleen, I realized that they are quite good and should serve as a good regularization method. </p> <p>Additionally, I tried to use heart + brain + lung. However, my models make semi-accurate predictions for lung, but horrible for heart + brain. So in the end I decided to stick with kidney + spleen. </p> <h3>Pseudo annotations</h3> <p>I would say there are 2 important points</p> <p>The first one is don't pseudo annotate everything right away. In order to create full pseudo annotations, I run a 4-step process:</p> <ul> <li>Train on kidney_1. Pseudo-annotate kidney_2</li> <li>Train on kidney_1 + kidney_2. Pseudo-annotate the 2020-31 kidney.</li> <li>Train on kidney_1 + kidney_2 + 2020-31-kidney. Pseudo-annotate 2020-27 spleen.</li> <li>Train on kidney_1 + kidney_2 + 2020-31-kidney + 2020-27 spleen. </li> </ul> <p>The second point is that don't use hard labels. In other words, don't apply thresholding to the predictions. Simply use soft labels (predictions are sigmoided to be in the range of [0,1]) for training. </p> <h3>Loss</h3> <p>My baseline go-to loss in semantic segmentation is <code>CE + Dice + Focal</code>. This worked quite well in this competition. However, since we have a surface metric, I wanted to weight the boundaries of masks more heavily. </p> <ul> <li>What didn't work: losses I found in open-source repositories (like Hausdorff Distance loss).</li> <li>What worked really well in terms of Surface Dice, FP and FN on validation: CE with x2 weights for boundaries. </li> </ul> <p>So in the end I decided to use <code>CE_boundaries + Dice + Focal</code> for most of my models, and <code>CE_boundaries + Twersky + Focal</code> for a single model.</p> <p>Twersky was focusing more on FN rather than FP, but more on that in the next section. 
</p> <pre><code> (torch.nn.modules.loss._Loss): (): ().__init__() self.bound = EdgeEmphasisLoss(alpha=bound_alpha) self.dice = smp.losses.DiceLoss(mode=) self.focal = smp.losses.FocalLoss(mode=) self.bound_weight = bound_weight self.dice_weight = dice_weight self.focal_weight = focal_weight (): ( self.bound_weight * self.bound(preds, gt, boundaries) + self.dice_weight * self.dice(preds, gt) + self.focal_weight * self.focal(preds, gt) ) (nn.Module): (): (EdgeEmphasisLoss, self).__init__() self.alpha = alpha (): bce_loss = F.binary_cross_entropy_with_logits(inputs, targets, reduction=) weighted_loss = bce_loss * ( + self.alpha * boundaries) weighted_loss.mean() </code></pre> <h3>Preprocessing</h3> <p>After analyzing initial models and its errors, I realized that my biggest issue is FN, not FP. In other words, my models simply don't see some masks, mostly the small ones. </p> <p>So I decided to increase the resolution of my trainings with crops from 512x512 to 1024x1024. However, after a couple of hours of training it hit me: that doesn't make much sense. By going from 512x512 to 1024x1024 I don't really increase resolution (each pixels holds the same real-world size), just the context, and 512x512 seemed like a big-enough context already. </p> <p>Instead, I decided to do the following: </p> <pre><code> (nn.Module): (): ().__init__() self.upscale_factor = upscale_factor self.model = Unet( encoder_weights=encoder_weights, encoder_name=encoder_name, decoder_use_batchnorm=decoder_use_batchnorm, in_channels=in_channels, classes=classes, ) (): x = torch.nn.functional.interpolate( x, (x.shape[-] * self.upscale_factor, x.shape[-] * self.upscale_factor), mode= ) x = self.model(x) x = torch.nn.functional.interpolate( x, (x.shape[-] // self.upscale_factor, x.shape[-] // self.upscale_factor), mode= ) x </code></pre> <p>This approached worked really well and I could clearly see improvements both on CV, and LB.</p> <h3>Models</h3> <p>I used only U-Net models from SMP with different backbones. Tried a lot of things, but for final ensembles decided to settle on the following:</p> <ul> <li>effnet_v2_s</li> <li>effnet_v2_m</li> <li>maxvit_base</li> <li>dpn68 </li> </ul> <p>Maxvit was trained on 512x512 crops, effent and dpn - on 512x512 with x2 interpolation. Crops were used from xy, xz, and yz axes. During inference, I use the same crops resolution with overlaps of crops_size / 2 (so that's 256). In other words, sliding window approach.</p> <p>Augmentation were medium-level in terms of intensity. </p> <pre><code> A.Compose( [ A.ShiftScaleRotate( p=, shift_limit_x=(-, ), shift_limit_y=(-, ), scale_limit=(-, ), rotate_limit=(-, ), border_mode=cv2.BORDER_CONSTANT, ), A.RandomBrightnessContrast( brightness_limit=(-, ), contrast_limit=(-, ), p=, ), A.HorizontalFlip(), A.VerticalFlip(), A.OneOf( [ A.GridDistortion(border_mode=cv2.BORDER_CONSTANT, distort_limit=), A.ElasticTransform(border_mode=cv2.BORDER_CONSTANT), ], p=, ), AT.ToTensorV2(), ], ) </code></pre> <h3>Post processing</h3> <p>I tried to use cc3d to remove small objects, it made weak models better, but no difference for ensemble.</p> <h3>Private resolution</h3> <p>Now, this part is really tricky. My huge thanks to the organizers for announcing the test resolutions. It sincerely warms my heart to see organizers interact with participants that much here on the forum. Really, thank you. </p> <p>One approach is not to do anything. You train your model on 50um/voxel, inference on 63um/voxel. 
Considering I use conv-based backbones (except for maxvit) that have some level of scale-invariance + have scale augs in validation, this might work.</p> <p>The second approach is to do rescaling. I believe the correct approach for rescaling is the following: </p> <pre><code> test_kidney == : private_res = public_res = scale = private_res / public_res d_original, h_original, w_original = test_kidney_image.shape test_kidney_image = torch.tensor(test_kidney_image).view(, , d_original, h_original, w_original) test_kidney_image = test_kidney_image.to(dtype=torch.float32) test_kidney_image = torch.nn.functional.interpolate(test_kidney_image, ( (d_original*scale), (h_original*scale), (w_original*scale), ), mode=).squeeze().numpy() </code></pre> <p>…</p> <pre><code>d_preds, h_preds, w_preds = preds_ensemble.shape preds_ensemble = preds_ensemble.view(, , d_preds, h_preds, w_preds) preds_ensemble = preds_ensemble.to(dtype=torch.float32) preds_ensemble = torch.nn.functional.interpolate(preds_ensemble, ( d_original, h_original, w_original, ), mode=).squeeze() </code></pre> <p>So we do 3D resize instead of 2D one: re-scale image from 63um (private) to 50um (public + CV), compute predictions, and re-scale them back to 63um. Simply going for 2D would work as well, but theoretically you end up with different spatial and temporal resolutions in that case. </p> <p>This trick helped. To give a single point (I don't have much else): the same ensemble scores 0.634 on private without interpolation, and 0.670 - with interpolation. </p> <p>To be honest, I didn't think it would make that much difference. I tried the following experiment locally: </p> <ul> <li>Download kidney in 25um resolution. Compute predictions in 25um, interpolate them to 50um, compute metrics. This approach brought my 0.92 surface dice to 0.895. Which is quite good, considering we're talking about x2 interpolation in all 3 directions (that's 8 times less volume) and the fact that it's harder to detect small objects in smaller resolution.</li> <li>Download kidney in 25um resolution. Interpolate image to 50um, compute predictions, compute metrics. This approach essentially provided the same metrics as in the case of simply using 50um from organizers. </li> </ul> <p>So even though I didn't really think interpolation is that important, it also didn't hurt (I was afraid of interpolation artifacts), so I used it for both final subs. </p> <h3>Final subs</h3> <p>Both subs have an ensemble of 3 models, each inferenced on all 3 axes without TTA (TTA took too much time, and didn't really help on CV). </p> <ul> <li>First sub. CV: 0.84 (kidney_1), Public: 0.768. Private: 0.566<br> <code>Maxvit_ce_dice_focal</code> + <code>effnet_v2_s_ce_dice_focal</code> + <code>effnet_v2_m_ce_dice_focal</code> trained on kidney_3, validated on kidney_1. This approach didn't work that well on CV, and also on Public and Private. </li> <li>Second sub. CV: 0.923 (kidney_3), Public: 0.855. Private: 0.691<br> <code>Maxvit_ce_dice_focal</code> + <code>effnet_v2_s_ce_bounds_dice_focal</code> + <code>dpn_68_ce_bounds_twersky_focal</code>.</li> </ul> <p>Code:</p> <ul> <li>Inference notebook <a href="https://www.kaggle.com/code/ivanpan/final-submission/notebook" target="_blank">link</a></li> <li>Training code <a href="https://github.com/ivanpanshin/segment-vasculature-5th-place" target="_blank">link</a></li> </ul> <p>Thanks! </p> <p>Nope. I think it could have helped in the ensemble, but in my experience U-Net with skip-connections is pretty much always enough. 
Which is quite crazy considering it's pretty much 10 years old. </p> <p>Hm. I can. I don't have a lot of data points, but I will share the best ones I can. </p> <ol> <li><p>I have a single data point here that I still have access to. <code>CE + Dice + Focal</code>: SD: 91.5, FP: 120K, FN: 160K. <code>CE_boundaries + Dice + Focal</code>: 92.3, FP: 120K, FN: 140K. These metrics are for kidney_3 used as validation. </p></li> <li><p>This is tricky. If I utilize pseudo label on external data based on trainings from kidney_1, then I don't see a lot of improvement either on CV, or LB. The killer feature is std. Even though you can get the same metrics on CV, and LB, by training with pseudo-annotations LB scores become muuuch more consistent. For example, my original trainings (without pseudo) score somewhere between 0.83-0.88 on Public. However, after I add pseudo to training, I don't think my LB was even below 0.87 for these models. </p></li> </ol> <p>Additionally, if I start training from kidney_3 (not kidney_1), the difference on CV is very evident (still doesn't work very well on Public or Private though).</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2755695%2F678efcccf7ae440d9f77bbace7323466%2FScreenshot%202024-02-08%20at%2013.39.04.png?generation=1707392353733609&amp;alt=media"></p> <p>Green - kidney_3, yellow - kidney_3 + kidney_2, blue - kidney_3 + kidney_2 + kidney_external </p> <p>Sure thing. </p> <pre><code>bound_alpha=, bound_weight=, dice_weight=, focal_weight= </code></pre> <p>Let me elaborate. The reason for bound_alpha=1 is to give boundaries twice as much importance as to other areas of the mask. I tried to set it higher (so even more importance to boundaries), but it was worse on CV and LB.</p> <p>The reason for other weights is the absolute value of losses during training. If you train a model with CE, and then Dice (separately) you will notice that on average Dice loss values are 2 orders of magnitude higher (that is, 100 times bigger). So I wanted to combine losses in such a way that their values are on the same scale. Additionally, CE and Focal have the same scale (expectedly, since Focal is just CE with bells and whistles), but I simply didn't want to give too much importance to Focal.</p>
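<p>One detail the write-up does not show is how the <code>boundaries</code> map used by the EdgeEmphasisLoss could be built from a ground-truth mask. A common way is a morphological gradient (dilation minus erosion); the sketch below assumes a 3x3 kernel and a binary 2D mask, which is an assumption rather than the author's actual preprocessing.</p> <pre><code>import cv2
import numpy as np


def mask_boundaries(mask: np.ndarray, kernel_size: int = 3) -&gt; np.ndarray:
    # Morphological gradient: dilation minus erosion keeps only the mask edges.
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    grad = cv2.morphologyEx(mask.astype(np.uint8), cv2.MORPH_GRADIENT, kernel)
    # With alpha = 1 in the loss, these boundary pixels get twice the CE weight.
    return (grad &gt; 0).astype(np.float32)
</code></pre>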
SenNet + HOA - Hacking the Human Vasculature in 3D
6th place solution: Luck is All You Need
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SenNet + HOA - Hacking the Human Vasculature in 3D <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi, Kagglers!</p> <p>After finding yourself on the Leaderboard after Grand Shakeup and restoring your mental health, we can dive deep into 6th place solution, but before this, a few very important words:<br> <em>I would like to thank the Armed Forces of Ukraine, Security Service of Ukraine, Defence Intelligence of Ukraine, and the State Emergency Service of Ukraine for providing safety and security to participate in this great competition, complete this work, and help science, technology, and business not to stop but to move forward.</em></p> <h1>Validation. Not really</h1> <p>I was using for validation 2 organs (in different folds): kidney_3_dense and kidney_2. After releasing a <a href="https://www.kaggle.com/code/junkoda/fast-surface-dice-computation" target="_blank">fast version of 3D Surface Dice</a>, I was able to compute validation scores while training, and I received the next insights:</p> <ul> <li>Tracking the score on kidney_2 was useless for me. The validation score decreased from epochs 2-3 on kidney_2</li> <li>Scores on kidney_3_dense were meaningful for checking “radical” features, like additional data and new losses. But then optimal score fluctuated between 0.9-0.925 dice without any reasonable correlation with Public or Private score</li> <li>The optimal threshold on kidney_3_dense was optimal for Private, Public, and kidney_3_dense scores - 0.1 and lower <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1690820%2Fa4050c6e4e6d27662cfddc62201e2f71%2Fkidney_3_dense_val.png?generation=1707321782497972&amp;alt=media"></li> <li>Resize to constant um/voxel (I have picked 50 um/voxel) for prediction increased optimal threshold both on CV and Public but decreased optimal score dramatically. But on Private, it became one of the most robust approaches<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1690820%2F6f0a8a87cf758aa4920353ace23c11cf%2Fkidney_3_dense_resize_val.png?generation=1707323076488076&amp;alt=media"><br> In summary, Validation did not work (at least mine). 
It is not strange because of solo data point in CV, Public, and Private </li> </ul> <h1>Data</h1> <p>I was using all train data except kidney_1_voi sample<br> In order to enlarge training data I have used 50um_LADAF_2020_31_kidney_pag from <a href="https://human-organ-atlas.esrf.eu/search?organ=kidney" target="_blank">Human Organ Atlas</a> <br> For data normalization, I was using the approach proposed by <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> - <a href="https://www.kaggle.com/competitions/blood-vessel-segmentation/discussion/456118#2552053" target="_blank">percentile normalization</a></p> <h1>Training setup</h1> <p>I mostly stick to the 2.5D approach with 5 slices.<br> I started from one view model and iterated along the last axis, but then I switched to a multiview and used slices by all three axes in the training<br> I have used 512 square crops with Non Empty probability of 0.5, pretty much standard augmentations and CutMix within one organ and one view with 0.5 probability and 1.0 alpha :</p> <pre><code>: : [ A.PadIfNeeded( min_height=crop_size, min_width=crop_size, always_apply=, ), A.OneOrOther( first=A.CropNonEmptyMaskIfExists(crop_size, crop_size), second=A.RandomCrop(crop_size, crop_size), p= ), ], : A.Compose( [ A.ShiftScaleRotate( scale_limit=, ), A.RandomRotate90(p=), A.HorizontalFlip(p=), A.VerticalFlip(p=), A.Transpose(p=), A.OneOf( [ A.RandomBrightnessContrast(), A.RandomBrightness(), A.RandomGamma(), ], p=, ), ToTensorV2(transpose_mask=), ] ), : , : {: , : }, </code></pre> <p>I was using Adam optimizer and reduced learning rate with CosineAnnealingLR starting from 1e-3 and ending with 1e-6<br> Regarding loss function choice, I started with classical BCE+Dice loss and then tried to implement the loss function, which will directly optimize the metric, but unfortunately, it did not work well. Luckily, I have come across BoundaryLoss, which worked firstly comparable to BCE+Dice loss and then better. Interesting fact that the best (not selected) model was trained on BoundaryLoss + 0.5 Focal Symmetric Loss and scored 0.756 on Private, 0.867 on Public, and 0.916 on kidney_3_dense, which is a pretty much balanced score (of course, comparing to all other models score distribution 🙂)<br> I was training for 30 epochs. repeating the original train set 3 times and the pseudo train set 2 times<br> I have used a batch size of 14 samples and trained with DDP on 2 GPUs, so the final batch size was 28</p> <p>After I saw <a href="https://www.kaggle.com/competitions/blood-vessel-segmentation/discussion/461213#2594751" target="_blank">the post</a> about promising results from 3D models, I started exploring 3D approaches, and they worked pretty well. In order to make it train without NaNs I have changed the optimization strategy and switched to SGD, with momentum=0.99, weight_decay=3e-5, nesterov=True and also changed starting learning rate to 1e-6 - taken from <a href="https://github.com/Project-MONAI/tutorials/tree/main/modules/dynunet_pipeline" target="_blank">monai example</a><br> As the overall image resolution of the image was increased dramatically, I had to reduce the batch to 3 on one GPU, so the aggregated batch size was 6. I was training in total for ~120K iterations<br> As for augmentations - they were pretty much the same as in the 2.5D setup, except from Zoom. 
</p> <pre><code>: : mt.Compose( [ mt.OneOf([ mt.RandRotate90d(keys=(, ), prob=, spatial_axes=(-,-)), mt.RandRotate90d(keys=(, ), prob=, spatial_axes=(-,-)), mt.RandRotate90d(keys=(, ), prob=, spatial_axes=(-,-)) ]), mt.RandFlipd(keys=(, ), prob=, spatial_axis=-), mt.RandFlipd(keys=(, ), prob=, spatial_axis=-), mt.RandFlipd(keys=(, ), prob=, spatial_axis=-), mt.RandScaleIntensityd(keys=(), prob=, factors=) ] ), </code></pre> <p>Zoom worked better on CV but worse on Public and also on Private (Why? - who knows …)</p> <h1>Neural Networks</h1> <p>I was mostly using <a href="https://arxiv.org/abs/1905.11946" target="_blank">EfficientNet family</a> as an Encoder (from noisy student weights), started from B3, then switched to B5, and unfortunately, B7 did not work well for me both on CV and Public </p> <p>Interestingly se_resnext50_32x4d performed not well on Public LB (0.852) and CV (0.909) but really well on Private (0.702)</p> <p>As for Decoder I was mostly using Unet++. I have tried <a href="https://arxiv.org/abs/2004.08790" target="_blank">Unet3+</a> but it showed considerably worse results</p> <p>As for 3D Nets, I was using <a href="https://monai-dev.readthedocs.io/en/fixes-sphinx/networks.html#dynunet" target="_blank">DynUNet</a> and adopted model architecture according to <a href="https://github.com/Project-MONAI/tutorials/blob/main/modules/dynunet_pipeline/create_network.py#L19" target="_blank">next script</a>. I have tried to use pretrained Unet from <a href="https://monai.io/model-zoo.html" target="_blank">MONAI Model Zoo</a> but it performed badly on all sets </p> <h1>Inference and Post Processing</h1> <p>I was using 512 sliding window with 0.5 overlap, flip TTA, and last checkpoint from 2 folds.<br> After switching to multi view model, I have also added multi view TTA</p> <p>The next step was the creation of a kidney mask. I have tried several approaches </p> <ol> <li>Using segmentation net, trained on <a href="https://www.kaggle.com/datasets/squidinator/sennet-hoa-kidney-13-dense-full-kidney-masks" target="_blank">this dataset</a> + slight post-processing for removing binary holes and small connected regions </li> <li>Using an algorithmic approach based on intensity thresholding, erosion, and dilation</li> </ol> <p>The first one had a pretty high FP rate but nearly zero FN rate, while the second one had a pretty high FN rate. Both of them performed nearly ideal on kidney_3, so did not really influence the fold 0 scores, but an algorithmic approach cut out kidney regions for kidney_2 and dramatically reduced the fold 1 score. BUT at the same time, the second approach improved Public score (0.874-&gt;0.882). 
I understood that it was 90% overfit to Public LB, but I have decided to take the risk</p> <h1>Final Model</h1> <p>For final submission, I have selected the following ones:</p> <ul> <li>Pure 2.5D -&gt; algorithmic post-processing<ul> <li>Public: 0.886</li> <li>Private: 0.681</li> <li>Kidney 3 dense score: 0.917 </li></ul></li> <li>2.5D (weight 3.0) blended with 3D (weight 1.0) <ul> <li>Public: 0.871</li> <li>Private: 0.676</li> <li>Kidney 3 dense score: ~0.918<br> For both models, I used a 0.05 threshold </li></ul></li> </ul> <h1>The most popular rubric of this competition: Not Selected Best Submission</h1> <p>Here, I want to point out several of the most exciting approaches for me </p> <ul> <li>Pure 2.5D but add Symmetric Focal loss with 0.5 coefficient <ul> <li>Public: 0.867</li> <li>Private: 0.756</li> <li>Kidney 3 dense score: 0.916</li></ul></li> <li>Resize 2d slices to 50 um/voxel for prediction and then resize back <ul> <li>Public: 0.799</li> <li>Private: 0.753</li> <li>Kidney 3 dense score: 0.907</li></ul></li> <li>Resize the whole volume with scipy.zoom to 50 um/voxel for prediction and then resize back <ul> <li>Public: 0.7 resize back </li> <li>Public: 0.726</li> <li>Private: 0.745</li> <li>Kidney 3 dense score: Have not checked </li></ul></li> <li>Solo 3D model <ul> <li>Public: 0.849</li> <li>Private: 0.723</li> <li>Kidney 3 dense score: 0.915<br> For me, it was logical to pick the first or second, but as for all other better submissions, it looks to me like pure randomness.</li></ul></li> </ul> <h1>Conclusions</h1> <p>Computing metrics on one data sample leads to severe shakeups 🙂</p> <h1>Closing words</h1> <p>I hope you have not fallen asleep while reading. Finally, I want to thank the entire Kaggle community and congratulate all participants and winners. Special thanks to Indiana University Bloomington, University College London, Yashvardhan Jain (@yashvrdnjain), Claire Walsh (@clairewalsh), the Kaggle Team, and other organizers.</p> <ol> <li>I have used 2 options: resize the whole volume with scipy.zoom and resize each slice with albumentations. The second worked better. The intuition is that you get rid of the additional domain shift caused by the resolution difference </li> <li>I have used pseudo labels </li> </ol> <p>Congratulations on your 3rd place!</p>
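<p>A minimal sketch of the segmentation CutMix used above (within one organ and one view, probability 0.5, alpha 1.0): with probability <code>p</code>, paste a random rectangle from a second crop into the current crop, applying the same cut to the image and the mask. This is an illustrative implementation, not the author's code.</p> <pre><code>import numpy as np


def cutmix_pair(img1, msk1, img2, msk2, alpha: float = 1.0, p: float = 0.5):
    if np.random.rand() &gt; p:
        return img1, msk1
    lam = np.random.beta(alpha, alpha)   # alpha=1.0 gives a uniform mixing ratio
    h, w = img1.shape[-2:]
    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    y = np.random.randint(0, h - cut_h + 1)
    x = np.random.randint(0, w - cut_w + 1)
    img1, msk1 = img1.copy(), msk1.copy()
    img1[..., y:y + cut_h, x:x + cut_w] = img2[..., y:y + cut_h, x:x + cut_w]
    msk1[..., y:y + cut_h, x:x + cut_w] = msk2[..., y:y + cut_h, x:x + cut_w]
    return img1, msk1
</code></pre>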
SenNet + HOA - Hacking the Human Vasculature in 3D
7th solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SenNet + HOA - Hacking the Human Vasculature in 3D <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First and foremost, I'd like to extend my gratitude to Kaggle and the competition organizers for creating such a compelling event. Despite joining the contest relatively late, I was able to quickly get up to speed thanks to insightful discussions by <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> , the informative videos by <a href="https://www.kaggle.com/yoyobar" target="_blank">@yoyobar</a> , and the vibrant exchanges within the community.</p> <h2>Overview</h2> <p>I employed a hybrid approach for the model, utilizing 2.5D images as input. The architecture combines a 2D Unet framework with 3D convolutional layers. From my research and the community's insights, it seemed that a full 3D model might offer superior results compared to 2D models. However, due to the high computational costs associated with 3D models, I choose a blend of 3D convolutions within a 2D Unet structure, striking a balance between efficiency and performance.</p> <h2>Data Preparation</h2> <h3>Generating 3D Rotational Slices</h3> <p>I augmented the dataset with 3D rotation. The process begins by assembling the images into a 3D volume, followed by rotating two axes and extracting slices along the remaining axis. The rotation angles used are as follows:</p> <pre><code>rotation_angles = [ [, ], [, -], [-, ], [-, -], [, ], [, -], [-, ], [-, -], [, ], [, -], [-, ], [-, -] ] </code></pre> <p>Post-rotation, some slices exhibited increased areas of black background. To maintain data quality, I retained only those slices where the target segmentation was present and the black background constituted less than 50% of the slice area. </p> <h4>Rotate data sample</h4> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2369671%2F2d8dc94653829769d2cc750903ab67de%2Fk1_z_pseudo_rotset7_1257.jpg?generation=1707579928485450&amp;alt=media"></p> <h3>Pseudo Labeling</h3> <p>The pseudo labeling process involved:</p> <ol> <li>Generating additional slices for kidney1 and kidney3 using the aforementioned technique.</li> <li>Training a model with the augmented dataset.</li> <li>Applying the model to pseudo label kidney2, followed by generating extra slices for it in a similar manner.</li> </ol> <h2>Model</h2> <h3>Architecture</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2369671%2F2152324997361e303fb5fdfed3811b7d%2FScreenshot%202024-02-10%20at%2023.52.55.png?generation=1707579666652474&amp;alt=media"></p> <h3>Training and Inference Details</h3> <h4>Training</h4> <p>The training setup was as follows:</p> <ol> <li>The model takes a 3-channel 2.5D image as input and outputs a 3-channel prediction.</li> <li>Normalize the input base on the std of each kidney</li> <li>I used a combination of loss functions: BCEWithLogitsLoss, DiceLoss, and FocalLoss. 
The loss for each of the three channels was calculated separately, with the middle channel assigned a higher weight of 0.9.</li> <li>Optimizer: AdamW</li> <li>Scheduler: CosineAnnealingLR</li> <li>Images were cropped to a size of 1024x1024 for processing.</li> </ol> <h4>Inference</h4> <p>For inference:</p> <ol> <li>A single model was used for predictions.</li> <li>The model operated at the original image resolution.</li> <li>Similar to training, the input comprised 3-channel 2.5D images, with the output being a 3-channel prediction, primarily focusing on the middle channel.</li> <li>Normalize the input based on the std of each kidney.</li> <li>Predictions were made along the x, y, and z axes, and the results were averaged.</li> <li>Test Time Augmentation (TTA) included horizontal flipping.</li> <li>Post-processing involved applying a threshold of 0.2, followed by a 3D closing operation (see the sketch below).</li> </ol>
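<p>A minimal sketch of the inference post-processing listed above (average the per-axis predictions, apply the 0.2 threshold, then run a 3D closing); the array names, shapes, and the 3x3x3 structuring element are illustrative assumptions rather than the author's exact code:</p> <pre><code>import numpy as np
from scipy.ndimage import binary_closing

def postprocess(prob_xy, prob_xz, prob_yz, threshold=0.2):
    # Average the probability volumes predicted along the three axes
    # (assumed to be already transposed back to a common orientation).
    prob = (prob_xy + prob_xz + prob_yz) / 3.0
    mask = prob &gt; threshold
    # 3D morphological closing to fill small gaps in the vessel mask.
    mask = binary_closing(mask, structure=np.ones((3, 3, 3), dtype=bool))
    return mask.astype(np.uint8)
</code></pre>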
SenNet + HOA - Hacking the Human Vasculature in 3D
93rd Place Solution for the SenNet + HOA - Hacking the Human Vasculature in 3D
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SenNet + HOA - Hacking the Human Vasculature in 3D <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I started late, at the beginning of January. This is only my second Kaggle competition and I am happy with the enormous learning and the outcome of placing in the top 10% with bronze. Thank you to the fellow teams for sharing your knowledge and to the organizers for a well-run competition.</p> <h1>Context</h1> <p>Business context: <a href="https://www.kaggle.com/competitions/blood-vessel-segmentation/overview" target="_blank">https://www.kaggle.com/competitions/blood-vessel-segmentation/overview</a><br> Data context: <a href="https://www.kaggle.com/competitions/blood-vessel-segmentation/data" target="_blank">https://www.kaggle.com/competitions/blood-vessel-segmentation/data</a></p> <h1>Overview of the approach</h1> <p>My final model was an ensemble of two 2D UNet models trained at 1024x1024 and 512x512 sizes, with equal weighting in inference. I used slices from the X, Y and Z projections for both training and inference, a simplification inspired by the 2.5D Unet paper. </p> <p>For training, I used the segmentation_models_pytorch smp.Unet architecture with a resnext50_32x4d backbone and started with imagenet weights. Trained with kidney_1_dense and kidney_3_dense and validated with kidney_2. For preprocessing, I used histogram equalization followed by minmax normalization. The Adam optimizer, CosineAnnealingLR scheduler and smp.losses.DiceLoss were used for training, together with the following augmentations.</p> <pre><code>aug_prob = train_aug_list = [ A.Rotate(limit=, p=aug_prob), A.RandomScale(scale_limit=(,),interpolation=cv2.INTER_CUBIC,p=aug_prob), A.PadIfNeeded(min_height=img_size[], min_width=img_size[], p=), A.RandomCrop(img_size[], img_size[], p=), A.RandomBrightnessContrast(p=aug_prob), A.GaussianBlur(p=aug_prob), A.MotionBlur(p=aug_prob), A.GridDistortion(num_steps=, distort_limit=, p=aug_prob), ToTensorV2(transpose_mask=), ] </code></pre> <p>For inference, I padded the image to 3072x3072 and used a 3x3 grid of 1024x1024 tiles to run the model on. 4 rotations were used on each tile and averaged for Test Time Augmentation, aka TTA. A sigmoid activation layer was used in the Unet. I also ran inference on the X, Y and Z projections and created three prediction volumes. On each projection's prediction volume, I applied a sigmoid threshold of 0.0001 to get binary mask volumes, then transposed and added the binary masks from the three projections, and used majority voting to get the final predictions.</p> <h1>Details of the submission</h1> <h2>What was special about the submission</h2> <p>-I did no resizing in training or inference, to reduce errors and artifacts from downsizing and aspect ratio changes. In training, I used a random crop. In inference, I used padding and tiling.<br> -A fixed sigmoid threshold independent of the dataset, as opposed to top-Nth-percentile thresholding, produced a more stable result on the private LB. 
I was bumped up by 616 in ranking.<br> -The ensemble of 1024+512 gave a higher score than each model applied separately.<br> -4x rotation Test Time Augmentation in inference boosted the score by 0.007 on the public LB.<br> -During local validation and spot checking, my models were achieving very high 2D dice scores on the middle slices in the volumes. Most of the FP and FN errors were on the edge slices in the tiny vessels (1 or 2 pixel errors).<br> -While I thought histogram equalization was a secret sauce (since it improved the public LB score over mean/std/clip normalization), it turns out my mean/std/clip normalization model produced a much better private LB score.<br> -There were many high scoring inference notebooks publicly shared in this competition. While I studied them to understand what other teams were doing and did in fact get many great ideas, I chose not to use large sections of code directly, since there were many questionable choices and I didn't understand why those notebooks were producing the high public LB scores. This approach kept my solution unique and generalized enough to move up in the private LB.</p> <h2>What was tried and didn't work</h2> <p>-Striding on 3 or 5 consecutive slices to create a multi-channel image for training and inference did not work well for me, both score wise and CPU/GPU/memory resource wise.<br> -Tried median blur for preprocessing, which did not work well.<br> -Went from efficientnetb0 to resnext50_32x4d. Maybe somewhere in between would have been better.<br> -It would have been better to stick with mean/std/clip norm as opposed to histogram equalization for preprocessing.<br> -Initially tried inference with the Z projection only; after a 0.04 improvement in LB score from XYZ projections with majority voting, I decided to use the latter going forward.<br> -There were suggestions in public high-scoring notebooks to reduce the augmentation probability to 0.05. I tried low augmentation and, although that produced higher validation dice scores during training, it almost always produced a lower score on the public LB. So, I decided to increase the augmentation probability to 0.10. Now that I have read the solution write-ups from the top scoring teams, I realize that even heavier augmentation would have been better.<br> -I had tried training with 90% of the slices of all three kidneys (dense 1, 2 and 3) and validating with 10% of the slices. Since I read on the discussion board that there is label shift in kidney 2, I switched to using only kidney 1 and kidney 3 for training. The results got slightly better. Every incremental improvement counted.<br> -Since I realized most of the segmentation errors were in the first 100 slices or so with the tiny vessels, I tried to train a separate model with the first 200 slices of all three kidneys. The model didn't work at all on either the early slices or the middle slices, probably because there was not enough data to train.<br> -Tried morphological opening for post processing the predicted mask. I expected that it would reduce some false positives without affecting true positives that much. But nope. It made both false positives and false negatives significantly worse. Then, tried removing all one-pixel blobs for post processing. It was a disaster for the score. Also tried majority voting across three slices to retain positives; it was a disaster too. After that, I decided that no post processing is best. 
Any improvement had to come from better model, not predicted mask cleanup.<br> -Tried 6x and 8x TTA with flips and rotations, but score was worse than 4x TTA.</p> <h2>What I didn’t try which I would consider next time</h2> <ol> <li><p>A loss function that considers 2D boundary loss since the “surface dice score” used for test evaluation uses only the 3D surface boundaries. 5th place solution used a clever loss function of CE_boundaries + Dice + Focal. 4th place solution used a BoundaryDOULoss (<a href="https://arxiv.org/pdf/2308.00220.pdf" target="_blank">https://arxiv.org/pdf/2308.00220.pdf</a>)</p></li> <li><p>Take into account that the native scan resolution of private test data is different (63um/voxel) from train (50 and 50.16um) and public test data (50.28um/voxel). This can be accomplished with 2D or 3D resize during inference or equivalent scaling augmentations in training. Top 5 solution write ups talk about this.</p></li> <li><p>Somehow incorporate the sparse annotations in training. I completely ignored the kidney_3_sparse and kidney_2 (also sparse and had shifted labels) in training which limited my training data to kidney_1_dense and the small number of slices from kidney_3_dense. Pseudo-annotating the sparse data is an approach I saw that many of the top scoring solutions used.</p></li> <li><p>Experiment more with different model architectures and ensembles? I was bogged down with the basics and getting the foundations working. Also, with the Kaggle 30 hours per week GPU limit, didn’t have time and resources for this.</p></li> <li><p>Increase the augmentation probabilities in intensity and scaling. Many of the top solutions have done this.</p></li> <li><p>2.5D approach with 5 consecutive slices concatenated as a 5-channel image. 6th place solution has successfully used this. I was running into CPU/GPU/Mem resource issues, but those could have been potentially resolved with more time and effort. </p></li> </ol> <h2>Preprocessing examples</h2> <p>Histogram equalization followed by Minmax normalization:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F18199244%2Fe119a342eda6f290c058e94c97728e78%2FNorm-HistogramEqualization.png?generation=1707343051563429&amp;alt=media"></p> <p>Mean/Std normalization with clipping followed by Minmax normalization:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F18199244%2F9e72cb4f6f49eee6aae6213df2833e90%2FNorm-MeanStdClip.png?generation=1707343078883718&amp;alt=media"></p> <h2>Padding and tiling kidney slice for inference</h2> <p>This is showing 2048x2048 padding and 512x512 tiles which I later changed to 3072x3072 padding and 1024x1024 tiles.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F18199244%2F4acb3f75490749342dab0c8e3f7f91de%2FTiledKidneySlice.png?generation=1707342738234654&amp;alt=media"></p> <h2>Sample segmentation results on slices for illustration</h2> <p>Slice 1000 from kidney_1_dense Label Vs Prediction. Prediction is color coded as Green for True Positives, Red for false positives and Blue for false negatives. Also all blobs are dilated 5x5 to observe the tiny blobs visually. Most of the false positives and false negatives are 1 pixel area blobs. However, there are also some 1 pixel area true positive blobs in this slice and many more of those in early slices for example 0100. 
So, cannot blindly remove 1 pixel blobs.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F18199244%2F3d16023f25a7f35a225c15c2d88cc107%2FSegmentedKidney1_1000.png?generation=1707342563232471&amp;alt=media"></p> <p>Slice 0100 from kidney_1_dense Label Vs Prediction. 119 pixels in label, 133 pixels found, 93 pixels true positives, 40 pixels false positives, 26 pixels false negatives. Many of the blob sizes are tiny 1 to 4 pixels area.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F18199244%2F632bff25ba557b49017796618251f092%2FSegmentedKidney1_0100.png?generation=1707342598764919&amp;alt=media"></p> <h1>Updates After the competition ended</h1> <p>After the competition ended, some new opportunities opened up for learning. First, I read the top solution write-ups looking for inspiration, especially low-hanging fruit ideas for adding to my existing implementation. Second, the private and public scores are now visible, enabling us to know how we do on the two test sets. Third, the 5 per day submission limit is lifted, so can do a lot more experiments more quickly. Given these, I was able to improve my private score to 0.682 which would have been 6th place (however not genuinely since I would never have chosen this submission due to the public score being so low). Well, anyway, here it is, my best improved score.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F18199244%2F574ddf3fa25f91bab4dad90f81efd542%2FSenNetHOA-BestPrivateScore.png?generation=1707777659990418&amp;alt=media"></p> <p>Here are the ideas I added to my implementation to achieve the above score.</p> <ol> <li>Changed histogram equalization preprocessing to mean/std/clip normalization (I learned from my own prev submissions scoring higher in private score)</li> <li>Changed DiceLoss to custom loss that is a combination of boundary loss, dice loss and focal loss (shared by 5th place solution)</li> <li>Changed the unet architecture to include an upscale layer (shared by 5th place solution)</li> <li>Changed to heavy augmentations in scale and intensity (strategy used by several top solutions)</li> <li>Used -0.45, 0.05 for scale limit to downsize more than upsize, to account for the private test set resolution. 
(shared by 3rd place solution)</li> </ol> <h1>Sources</h1> <p>Following kaggle sources were hugely helpful, many thanks to the contributors.</p> <p><a href="https://www.kaggle.com/competitions/blood-vessel-segmentation/discussion/456118" target="_blank">https://www.kaggle.com/competitions/blood-vessel-segmentation/discussion/456118</a><br> <a href="https://www.kaggle.com/competitions/blood-vessel-segmentation/discussion/464768" target="_blank">https://www.kaggle.com/competitions/blood-vessel-segmentation/discussion/464768</a><br> <a href="https://www.kaggle.com/competitions/blood-vessel-segmentation/discussion/468525" target="_blank">https://www.kaggle.com/competitions/blood-vessel-segmentation/discussion/468525</a><br> <a href="https://www.kaggle.com/code/junkoda/fast-surface-dice-computation" target="_blank">https://www.kaggle.com/code/junkoda/fast-surface-dice-computation</a><br> <a href="https://www.kaggle.com/code/hengck23/lb0-808-resnet50-2d-unet-xy-zy-zx-cc3d" target="_blank">https://www.kaggle.com/code/hengck23/lb0-808-resnet50-2d-unet-xy-zy-zx-cc3d</a><br> <a href="https://www.kaggle.com/code/yoyobar/2-5d-cutting-model-baseline-inference" target="_blank">https://www.kaggle.com/code/yoyobar/2-5d-cutting-model-baseline-inference</a><br> <a href="https://www.kaggle.com/code/yoyobar/2-5d-cutting-model-baseline-training" target="_blank">https://www.kaggle.com/code/yoyobar/2-5d-cutting-model-baseline-training</a><br> <a href="https://www.kaggle.com/code/misakimatsutomo/inference-1024-should-have-a-percentile-of-0-00149" target="_blank">https://www.kaggle.com/code/misakimatsutomo/inference-1024-should-have-a-percentile-of-0-00149</a></p> <p>Other sources I used for background and inspiration:<br> <a href="https://doi.org/10.48550/arXiv.2311.13319" target="_blank">https://doi.org/10.48550/arXiv.2311.13319</a><br> <a href="https://arxiv.org/abs/1902.00347" target="_blank">https://arxiv.org/abs/1902.00347</a><br> <a href="https://arxiv.org/abs/2010.0616" target="_blank">https://arxiv.org/abs/2010.0616</a></p>
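<p>A minimal sketch of the three-projection fusion described in the inference paragraph of this write-up (binarize each projection's sigmoid volume at the 0.0001 threshold, then keep voxels that at least two projections agree on); the volume names are assumed to be already transposed to a common orientation:</p> <pre><code>import numpy as np

def fuse_projections(pred_x, pred_y, pred_z, threshold=0.0001):
    # Binarize each projection's sigmoid volume, then majority vote.
    votes = ((pred_x &gt; threshold).astype(np.uint8)
             + (pred_y &gt; threshold).astype(np.uint8)
             + (pred_z &gt; threshold).astype(np.uint8))
    return (votes &gt;= 2).astype(np.uint8)
</code></pre>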
SenNet + HOA - Hacking the Human Vasculature in 3D
9th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SenNet + HOA - Hacking the Human Vasculature in 3D <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks, I will share the inference notebook after I finalize it.</p> <p>Thanks, here is my EMA code.<br> I think your code (EMA update position) is correct.</p> <pre><code>from copy import deepcopy

import torch
import torch.nn as nn

# PyTorch Lightning Module: the EMA weights are updated after each training step
def training_step(self, batch, batch_idx):
    ...
    self.ema_model.update(self.model)


class ModelEMA(nn.Module):
    def __init__(self, model, decay=0.99, device=None):  # decay of 0.99 used in this competition
        super(ModelEMA, self).__init__()
        # make a copy of the model for accumulating moving average of weights
        self.module = deepcopy(model)
        self.module.eval()
        self.decay = decay
        # perform ema on a different device from the model if set
        self.device = device
        if self.device is not None:
            self.module.to(device=device)

    def _update(self, model, update_fn):
        with torch.no_grad():
            for ema_v, model_v in zip(self.module.state_dict().values(), model.state_dict().values()):
                if self.device is not None:
                    model_v = model_v.to(device=self.device)
                ema_v.copy_(update_fn(ema_v, model_v))

    def update(self, model):
        self._update(model, update_fn=lambda e, m: self.decay * e + (1. - self.decay) * m)

    def set(self, model):
        self._update(model, update_fn=lambda e, m: m)
</code></pre> <p>Thank you to the host, I learned a lot from this competition!<br> Also special thanks to <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> . I referred to <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>'s discussion topics many times.</p> <h1>TL;DR</h1> <ul> <li>MaxVit tiny</li> <li>xy, xz, yz inference</li> <li>heavy regularization</li> <li>probability threshold</li> </ul> <h1>Solution</h1> <h2>Model</h2> <ul> <li>MaxVit tiny</li> </ul> <pre><code> (nn.Module): (): (SenUNetStem, self).__init__() kwargs = ( in_chans=in_chans, features_only=, pretrained=, out_indices=((encoder_depth)), ) self.conv_stem = Conv2dReLU(in_chans, , , use_layernorm=, padding=) self.encoder = timm.create_model(encoder_name, **kwargs) self._out_channels = [ , ] + self.encoder.feature_info.channels() self.decoder = UnetDecoder( encoder_channels=self._out_channels, decoder_channels=decoder_channels, n_blocks=encoder_depth, use_batchnorm=decoder_use_batchnorm, center= encoder_name.startswith() , attention_type=decoder_attention_type, ) self.segmentation_head = SegmentationHead( in_channels=decoder_channels[-] + , out_channels=classes, activation=activation, kernel_size=, ) self.n_time = n_time self.pickup_index = pickup_index (): B, C, H, W = x.shape h = (H//)* w = (W//)* x = x[:,:,:h,:w] stem = self.conv_stem(x) features = self.encoder(x) features = [ stem, ] + features decoder_output = self.decoder(*features) masks = self.segmentation_head(decoder_output) masks = F.pad(masks,[,W-w,,H-h,,,,], mode=, value=) masks[:,] SenUNetStem( encoder_name=, classes=, activation=, ) </code></pre> <h2>Dataset</h2> <ul> <li>kidney 1 and 3 datasets.</li> </ul> <h2>Training Tricks</h2> <p>I focused on regularization tricks,<br> because this competition had an unstable CV and the public LB was not correlated with it.<br> Moreover, the host shared images with public/private LB information, and I think they indicated instability.</p> <ul> <li>EMA</li> <li>50 epochs</li> <li>AdamW (weight decay 1e-2)</li> <li>CutMix (until 25ep)</li> <li>MixUp (until 25ep)</li> <li>DiceLoss (smooth_factor=0.1)</li> <li>Heavy Augmentation </li> </ul> <pre><code> train_aug = A.Compose([ A.HorizontalFlip(=0.5), A.VerticalFlip(=0.5), A.RandomBrightness(=0.1, =0.7), A.OneOf([ A.GaussNoise(var_limit=[10, 50]), A.GaussianBlur(), A.MotionBlur(), A.MedianBlur(=3), ], =0.4), A.OneOf([ A.GridDistortion(=5, =0.3, =1.0), A.OpticalDistortion(=1., =1.0) ],=0.2), 
A.ShiftScaleRotate(=0.7, =0.5, =0.2, =30), A.CoarseDropout(=1, =0.25, =0.25), ToTensorV2(=) ]) </code></pre> <ul> <li>Crop(512)</li> </ul> <h2>Inference</h2> <p>Inference is done along the xy, xz and yz axes, with 512 crops and a stride of 256 (sketched below).</p> <h2>Post-Process</h2> <p>Probability threshold (on the sigmoid output). I didn't use the percentile method that many public notebooks and past segmentation competitions (e.g. Volcano) used; when I checked a percentile threshold in local CV, it was not stable, so I didn't use it.</p> <h2>What didn't work</h2> <ul> <li>Bigger models (MaxVit base, small)</li> <li>Larger-size inference (1024); 512 is enough for this competition.</li> <li>Rotate90</li> <li>Pretraining on other volumes (kidney_2/kidney_1_volumes)</li> </ul> <p><a href="https://www.kaggle.com/code/tereka/simpleunet-xy-xz-yz-v2-nbp-b749ff/notebook" target="_blank">https://www.kaggle.com/code/tereka/simpleunet-xy-xz-yz-v2-nbp-b749ff/notebook</a></p> <p><a href="https://www.kaggle.com/brendanartley" target="_blank">@brendanartley</a> <br> Thanks! The EMA decay rate is 0.99 in this competition.<br> If the EMA decay value is low, model convergence is very slow.</p> <p>Thanks.<br> Does "original decoder" mean the smp decoder?<br> I would like to input a 1/2 x 1/2 image, while the original MaxVit uses 1/4,<br> so I need to customize the smp Unet.</p>
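<p>A rough sketch of the 512-crop, 256-stride sliding-window inference mentioned above; the single-channel output, edge handling and tensor layout are illustrative assumptions rather than the author's exact implementation:</p> <pre><code>import torch

def sliding_window_2d(model, image, window=512, stride=256):
    # image: (1, C, H, W) slice tensor; assumes H and W are at least `window`.
    _, _, h, w = image.shape
    ys = sorted(set(list(range(0, h - window + 1, stride)) + [h - window]))
    xs = sorted(set(list(range(0, w - window + 1, stride)) + [w - window]))
    prob = torch.zeros((1, 1, h, w), device=image.device)
    count = torch.zeros_like(prob)
    with torch.no_grad():
        for y in ys:
            for x in xs:
                logits = model(image[:, :, y:y + window, x:x + window])
                prob[:, :, y:y + window, x:x + window] += torch.sigmoid(logits)
                count[:, :, y:y + window, x:x + window] += 1
    # Average overlapping predictions.
    return prob / count
</code></pre>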
SenNet + HOA - Hacking the Human Vasculature in 3D
Rank 46 Solution - keep the CV discipline
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SenNet + HOA - Hacking the Human Vasculature in 3D <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks <a href="https://www.kaggle.com/pkyangno1" target="_blank">@pkyangno1</a>!</p> <p><a href="https://www.kaggle.com/competitions/blood-vessel-segmentation/discussion/473097" target="_blank">https://www.kaggle.com/competitions/blood-vessel-segmentation/discussion/473097</a></p> <p>I think you were aware of kidney_4 (5/6?) being online. When I saw how noisy it was, it made me reconsider using anything that I couldn't test on CV (thresholding, removing small objects, etc.). I think I could have taken more risks in this comp, but I'm content with a silver.</p> <p>I'm guessing that will be the case for a lot of people! Maybe some golds not being selected here and there as well.</p> <h2>Our solution</h2> <p>Thanks to the organisers for the learning opportunities and thanks <a href="https://www.kaggle.com/shashwatraman" target="_blank">@shashwatraman</a> for helping me in this comp.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F12870466%2Fcd4079e4c550c311d7a36b823d1dfd17%2FScreenshot%20from%202024-02-07%2000-59-23.png?generation=1707264535063295&amp;alt=media"></p> <p><strong>Submission Picking rule:</strong><br> We played against the shakeup by keeping the CV simple and the one that made the most sense for us: validation on kidney_3 dense. We didn't trust the LB at all in this competition and decided to pick the submission that met the following criteria:</p> <ul> <li>Tiling model - to handle the weird image shapes of the private LB</li> <li>Used interpolation augs (resizing and random resized crop) - resolution shift</li> <li>Last epoch weights (no checkpointing) - avoid an overconfident CV</li> </ul> <p><strong>After that, the training protocol is very simple and there is no post/preprocessing done.</strong></p> <ul> <li>2D unet</li> <li>Tiling as mentioned before (384x384)</li> <li>maxvit_small_384 single model (no ensembling) (we didn't have time xd)</li> <li>15 epochs</li> <li>volume norm (see the sketch below)</li> <li>light augs </li> <li>Low threshold: 0.1</li> </ul> <p><strong>Takeaways:</strong><br> Most of the things we tried in the comp didn't work. The things that worked best on CV were the 2.5D and 3D approaches, but I didn't want to trust that the z-axis resolution was going to allow such models.<br> I thought about distilling those preds into a 2D model but I didn't have the time for it.<br> I would love to participate in another 3D comp where everything is more stable and where I can use a bit more imagination than in this one, where most of the score came from having the discipline of not looking at the LB. <br> Even if I'm proud of our CV, our best submission scored 0.615, and if I had really stuck to my rules, I would have picked it, but the LB for it was so low that I chickened out of taking it. Lesson learned.</p> <p>I think top-100 teams fall in the same category; if we hadn't joined the comp so late (3 weeks before the end), we would probably have ended up overfitting due to the pressure of the LB :)</p>
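<p>A rough sketch of the per-kidney "volume norm" mentioned in the training list above; the function name and the epsilon are illustrative assumptions:</p> <pre><code>import numpy as np

def normalize_volume(volume):
    # Normalize each kidney volume by its own mean/std so that intensity
    # differences between kidneys do not shift the model inputs.
    volume = volume.astype(np.float32)
    return (volume - volume.mean()) / (volume.std() + 1e-6)
</code></pre>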
SenNet + HOA - Hacking the Human Vasculature in 3D
[2nd Public/90th Private] Solution for the SenNet + HOA - Hacking the Human Vasculature in 3D
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SenNet + HOA - Hacking the Human Vasculature in 3D <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks a lot, at least I won a gold medal in the discussion :D</p> <p>After being heartbroken for 48 hours, I finally recovered. I decided to share my method, which might help solve some of friends' doubts.</p> <p>First of all, I want to thank my good brother, who spent more than 1,000 rmb calling a sexy beauty to give me a massage and relax, which made me realize that there are still such wonderful things in this world besides competition.</p> <p>At the same time, I would like to thank <a href="https://www.kaggle.com/yoyobar" target="_blank">@yoyobar</a> for providing the training and inference notebooks. All my experiments are based on these two notebooks.</p> <p>I also want to thank <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> , <a href="https://www.kaggle.com/lihaoweicvch" target="_blank">@lihaoweicvch</a> and others who provided such detailed experimental details early in the competition, which reduced many difficulties I had in the initial stage of the experiments</p> <h2>Training Strategy:</h2> <ul> <li>High probability strong data augmentation - mixup pre-training for over 300 epochs, using kidney1 and kidney3 both dense and sparse datasets.</li> <li>Low probability data augmentation - mixup - EMA fine-tuning on kidney1 and kidney3 in dense datasets, with kidney2 dataset for validation.</li> <li>Validation metric uses average dice score from thresholds in (0.1, 0.2 … 0.9) range. </li> <li>Loss function is simple dice loss.</li> <li>Training resolution is 1024x1024.</li> <li>Silde-window inference with 5 TTA + 50um resolution inference for public test, 5 TTA + 50um and 60um resolution inference for private test, using F.interpolate(…, scale=0.8)</li> <li>Threshold selection: 1. Based on fixed value, 2. Based on percentile, 3. 
Based on Otsu adaptive thresholding.</li> </ul> <h2>Model Choices:</h2> <table> <thead> <tr> <th>Model</th> <th>Public Score</th> <th>Private Score</th> <th>Thresholding Strategy</th> <th>Notes</th> </tr> </thead> <tbody> <tr> <td>se-resnext50-unet</td> <td>0.891</td> <td>0.546</td> <td>Percentile</td> <td>Local receptive field</td> </tr> <tr> <td>se-resnext50-unet</td> <td>0.363</td> <td>0.553</td> <td>Otsu</td> <td>-</td> </tr> <tr> <td>gcnet50-unet</td> <td>0.881</td> <td>0.458</td> <td>Percentile</td> <td>Global receptive field</td> </tr> <tr> <td>gcnet50-unet</td> <td>0.857</td> <td>0.435</td> <td>Otsu</td> <td>-</td> </tr> <tr> <td>2.5d-hrnet-w32-unet</td> <td>0.868</td> <td>0.546</td> <td>Percentile</td> <td>Excellent detail recovery ability</td> </tr> <tr> <td>effiecientnet-b3-unet</td> <td>0.878</td> <td>0.517</td> <td>Percentile</td> <td>Excellent inference speed</td> </tr> <tr> <td>se-resnext50-unet+gcnet50-unet</td> <td>0.857</td> <td>0.576</td> <td>0.16</td> <td>-</td> </tr> <tr> <td>se-resnext50-unet+gcnet50-unet</td> <td>0.895</td> <td>0.542</td> <td>Percentile</td> <td>-</td> </tr> <tr> <td>se-resnext50-unet+gcnet50-unet</td> <td>0.883</td> <td>0.518</td> <td>Otsu</td> <td>-</td> </tr> <tr> <td>se-resnext50-unet+gcnet50-unet+effiecientnet-b3-unet</td> <td>0.886</td> <td>0.544</td> <td>Percentile</td> <td>-</td> </tr> <tr> <td>se-resnext50-unet+gcnet50-unet+effiecientnet-b3-unet-2scale</td> <td>-</td> <td>0.589</td> <td>0.1</td> <td>-</td> </tr> <tr> <td>se-resnext50-unet+gcnet50-unet+effiecientnet-b3-unet-2scale</td> <td>-</td> <td>0.452</td> <td>Percentile</td> <td>-</td> </tr> </tbody> </table> <p>All the experiments, the results in private are not very ideal, perhaps due to the thresholds, or it could be due to other reasons<br> Additionally, since I cannot confirm whether private is the result of downsampling an entire kidney or a subset of a kidney, I did not spend too much time on it in the 2.5d method.<br> At the same time, in the 2-scale inference, some weights were fine-tuned at a 60um resolution, but there was actually no significant improvement.</p> <h2>Reasons for Failure:</h2> <ul> <li><p><strong>Threshold selection:</strong> In my models, lower thresholds achieved better private scores, which contradicted my early idea that "higher thresholds could better reduce false positives and improve the private results". Additionally, the threshold selection method based on proportion carries too much risk. Choosing a fixed threshold is a more stable approach.</p></li> <li><p><strong>Validation metric:</strong> Since kidney2 was sparsely annotated, I chose average dice as the validation metric. And I found that at different thresholds, a higher dice with smaller dice variance usually did not result in a poor public score, but this is not a positive correlation, which may have introduced a potential risk of overfitting.</p></li> <li><p><strong>Data augmentation:</strong> I gradually added many data augmentations, assuming that if the validation metric did not decrease after adding augmentations, the model's generalization ability would be improved to some extent. The final result told me that one or more augmentation methods, such as mixup, might have affected the private score.</p></li> </ul> <p>Nevertheless, this competition has ended. 
Congratulations to all the winners who have achieved such great results, and also to all the friends who gave their best in this competition but did not achieve good scores; your algorithm skills must have improved significantly.</p> <p>Here are some notebooks I used for inference (ugly code, no time to clean it up for now since Chinese New Year is coming). The related code and weights have been set to public. If anyone is interested, feel free to refer to them.<br> <a href="https://www.kaggle.com/code/tanxxx/inference-se-tugc" target="_blank">https://www.kaggle.com/code/tanxxx/inference-se-tugc</a><br> <a href="https://www.kaggle.com/code/tanxxx/2d-tu-gcresnext50ts-inference" target="_blank">https://www.kaggle.com/code/tanxxx/2d-tu-gcresnext50ts-inference</a><br> <a href="https://www.kaggle.com/code/tanxxx/se-resnext-inference-1024" target="_blank">https://www.kaggle.com/code/tanxxx/se-resnext-inference-1024</a><br> <a href="https://www.kaggle.com/code/tanxxx/tu-hrnet-w30-and-effb3-inference" target="_blank">https://www.kaggle.com/code/tanxxx/tu-hrnet-w30-and-effb3-inference</a></p> <p>Thanks, I love my brother so much!!!</p>
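<p>A small sketch of the three thresholding strategies compared in the tables above; the percentile default is an illustrative assumption, while 0.16 is the fixed value reported for the se-resnext50+gcnet50 blend:</p> <pre><code>import numpy as np
from skimage.filters import threshold_otsu

def binarize(prob, strategy="fixed", value=0.16, percentile=99.9):
    # Turn a probability volume into a binary mask using one of the
    # three strategies: fixed value, top percentile, or Otsu.
    if strategy == "fixed":
        t = value
    elif strategy == "percentile":
        t = np.percentile(prob, percentile)
    elif strategy == "otsu":
        t = threshold_otsu(prob)
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    return (prob &gt; t).astype(np.uint8)
</code></pre>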
SenNet + HOA - Hacking the Human Vasculature in 3D
[3rd Place solution] Refine from Sparse to Dense
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SenNet + HOA - Hacking the Human Vasculature in 3D <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi <a href="https://www.kaggle.com/samshipengs" target="_blank">@samshipengs</a> . I am not sure if I am thinking about it right, but here is my thought:</p> <p><strong>It's all about data distribution</strong>. A crucial fact in this competition is that we only have 3 kidneys for training and 2 kidneys in the test set. The 3 training kidneys only cover a narrow feature space, while the public kidney and the private kidney each cover another narrow feature space. Their feature spaces may have little overlap, much less than in most CV tasks. So the more feature space our model can cover, the better our chance of winning.</p> <p>Therefore, </p> <ol> <li><p>If we retrain for the private kidney, then even if the pseudo labels are noisy, at least the model <code>has seen</code> this kind of data. Much better than unseen.</p></li> <li><p>When we make pseudo labels for k3 sparse and k2 sparse, we are making the labels less noisy. Then we can better learn the data distribution of kidneys 2/3 with less risk of overfitting to the wrong annotations. The data distribution of kidney 1+2+3 is much larger than that of any two of them. So the more kidneys we train on, the more overlap our model's feature space shares with the test set.</p></li> </ol> <p>I am not a native English speaker. Please tell me if any expression is confusing.</p> <p>First and foremost, I would like to extend my gratitude to the organizers and the official Kaggle team for orchestrating such an outstanding competition. I joined the contest at a very late stage. Despite having some experience with segmentation competitions, I must express my appreciation to <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> , <a href="https://www.kaggle.com/yoyobar" target="_blank">@yoyobar</a> , and <a href="https://www.kaggle.com/junkoda" target="_blank">@junkoda</a> (implementation of the metric), as well as the other community participants, for their open-source contributions and discussions, which allowed me to quickly get up to speed with this contest.</p> <p>My approach was strikingly straightforward, relying solely on <strong>2D models</strong> and only utilizing <strong>smp</strong> (segmentation models pytorch) and <strong>timm</strong> (pytorch image models) in the whole training and inference pipeline. </p> <h2>Global key points</h2> <ol> <li><strong>Refining labels from sparse to dense.</strong></li> <li><strong>Emulating the magnification factor of the test set.</strong></li> <li><strong>Maintaining an appropriate resolution.</strong></li> </ol> <h2>1. From Sparse to Dense</h2> <p>Given that half of the training set had dense labels (kidney 1, kidney 3 dense) and the other half was sparse, utilizing dense labels to refine the sparse ones was a crucial step. 
The overall process entailed:</p> <ol> <li>Training UNet(maxViT512) and UNet(EfficientNetv2s) using kidney 1 and kidney 3 dense.</li> <li>Generating supplemental labels for kidney 3 sparse using the trained UNet maxViT512 and UNet EfficientNetv2s models.</li> <li>Resuming the training of UNet maxViT512 and UNet EfficientNetv2s for a few epochs with kidney 1, kidney 3 (dense, sparse plus supplemental labels).</li> <li>Repeating the step2 on kidney 2.</li> <li>Training three UNet models (with EfficientNetv2s, SeResNext101, MaxViT512) and one UNet++ using all real labels from all kidney plus pseudo labels.</li> </ol> <p>Note: As the organizers disclosed the proportion of annotations within kidney 3 and kidney 2, I endeavored to select thresholds based on pixel quantity as close as possible to the official proportion when choosing threshold values for pseudo label.</p> <h2>2. Emulating the Magnification of the Private Test Set</h2> <p>A pivotal reason for my decision to participate in this competition was the disclosure of the magnification factors for the training and test sets by the hosts. The training set had a magnification of 50um/voxel, the public test set was the same at 50um/voxel, while the private test set was at 63um/voxel. A larger magnification factor implies a lower resolution. For instance, a 600um object would occupy 12 pixels in both the training and public sets, but only 10 pixels in the validation set. Hence, during training, <strong>I set the scaling center to 0.8</strong>, rather than 1, with a scaling range of 0.55 to 1.05, to simulate the private test set.</p> <pre><code>.ShiftScaleRotate(shift_limit=., =(-., .), =, =, =.), </code></pre> <h2>3. Maintaining an Appropriate Resolution</h2> <p>In this competition, training and inference along the x-axis, y-axis, and z-axis separately was a very important trick. However, this introduced a significant risk. The entire test set contained 1500 slices, with the public test set accounting for 67% and the private test set for 33%. This means that the private test set comprised only about <strong>500 slices</strong>. Inferring along the z-axis with a higher resolution (e.g., 1024) was feasible. But if inferring along the y-axis or x-axis, it would mean that one of the edges would only be 500 pixels long. At that point, if the model and code were configured for a larger resolution (say 1024), there would be a substantial risk of a huge shake down.</p> <p>My models primarily operated at a resolution of 512, with one model switching to higher resolution weights for larger resolution slices when the slice have appropriate resolution.</p> <table> <thead> <tr> <th>Model</th> <th>Backbone</th> <th>Resolution</th> <th>public</th> <th>private</th> </tr> </thead> <tbody> <tr> <td>UNet</td> <td>MaxViT-Large 512</td> <td>512</td> <td>0.846</td> <td>0.727(submission1)</td> </tr> <tr> <td>UNet</td> <td>SeResNext</td> <td>512</td> <td>0.819</td> <td>0.753</td> </tr> <tr> <td>UNet</td> <td>Efficiennet_v2_s</td> <td>448, 832</td> <td>0.799</td> <td>0.703</td> </tr> <tr> <td>UNet++</td> <td>Efficiennet_v2_l</td> <td>512</td> <td>0.817</td> <td>0.692</td> </tr> <tr> <td>ensemble</td> <td>-</td> <td>-</td> <td>0.846</td> <td>0.727(submission2)</td> </tr> </tbody> </table> <h2>4. 
Train on all data if convergence is Stable</h2> <p>During the early stages of the competition, whether validating on kidney 2 or kidney 3, I observed that if I trained for 20 epochs, after the initial few epochs, the dice coefficient (not surface dice) variation on the validation set was very minimal, with the MaxVit512 large model exhibiting the least fluctuation. Considering that we only had three kidneys, I decided to train on all kidneys directly after completing the pseudo labeling process, given the stability in convergence.</p> <h2>5. Minimizing the Impact of Threshold Values</h2> <p>I am grateful for the method provided by <a href="https://www.kaggle.com/junkoda" target="_blank">@junkoda</a> for calculating metrics. My most stable single model was able to maintain very minor fluctuations in the surface dice score (less than 1) within a threshold range of 0.2. After model fusion, the stable threshold range could be potentially in 0.3~ 0.4. A stable threshold is extremely crucial in segmentation competitions. In this competition, as my final model lacked a validation set, I had to utilize thresholds searched with earlier trained models that included a validation set and apply them to the final version of the model. Fortunately, the models trained on the full dataset appeared to possess threshold values very close to those from the earlier models trained with k1+k2 (sparse), and validate on k3. At the same time, the fluctuation of threshold values across kidney 3 dense, public, and private was very small.</p> <h2>6. Heavy augmentation on intensity.</h2> <p>As mentioned by <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> , difference kidneys has large variance on intensity. So I used a heavy intensity augmentation.</p> <pre><code>.RandomBrightnessContrast(p=.), .RandomGamma(p=.), </code></pre> <h2>7. Quick ablation</h2> <table> <thead> <tr> <th>Model</th> <th>Backbone</th> <th>points mentioned above</th> <th>public</th> <th>private</th> </tr> </thead> <tbody> <tr> <td>UNet</td> <td>MaxViT-Large 512</td> <td>3, 5, 6</td> <td>0.818</td> <td>0.586</td> </tr> <tr> <td>UNet</td> <td>MaxViT-Large 512</td> <td>3, 4, 5, 6</td> <td>0.857</td> <td>0.633</td> </tr> <tr> <td>UNet</td> <td>MaxViT-Large 512</td> <td>2, 3, 4, 5, 6</td> <td>0.849</td> <td>0.652</td> </tr> <tr> <td>UNet</td> <td>MaxViT-Large 512</td> <td>1, 2, 3, 4, 5, 6</td> <td>0.846</td> <td>0.727</td> </tr> </tbody> </table> <h2>8. Final Submission</h2> <p>My final submissions were a single model of MaxViT and a ensemble of the four models. Surprisingly, both submissions scored same at 0.727. I did not use any form of weighting and MaxViT only constituted a quarter of the ensemble submission, but their scores were totally the same on private LB. Even more astonishing was that the single-model score of SeResNext on private LB turned out to be the highest. 
Its CV was nothing extraordinary, its convergence was not more stable than MaxViT's, and its public leaderboard score was not high, so I had no reason to choose it.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7285387%2F8fa373c75008e7cf3064b7f4c4089175%2F1.png?generation=1707272539926386&amp;alt=media" alt="1"></p> <p>Finally, I would like to extend my gratitude once again to the organizers, Kaggle, and all the participants!</p> <hr> <p><strong>Inference (Submission)</strong> code is published:</p> <ol> <li><a href="https://www.kaggle.com/forcewithme/sennet-final-submission2" target="_blank">MaxVit512 scored 0.727</a></li> <li><a href="https://www.kaggle.com/code/forcewithme/sennet-top3-final-submission?scriptVersionId=162311084" target="_blank">Ensemble submission scored 0.727</a></li> </ol> <p><strong>Training</strong> code is published in the <a href="https://www.kaggle.com/datasets/forcewithme/sennettop3-training-code/data" target="_blank">kaggle dataset</a>. </p> <p>I guess that the 2nd, 3rd, 5th and 6th points mentioned in my post are the potential reasons for the huge shake. If a participant had missed any one of these aspects and had not prepared adequately, they might have been at risk of experiencing a super huge 'shake down'. </p> <p>Conversely, if participants were aware of these issues, or if their models happened to circumvent them, their performance on both the public and private LB would be more consistent, or they might experience a 'shake up'.</p> <p>This is a preliminary guess, and perhaps a more comprehensive conclusion will emerge once more participants have disclosed their strategies. </p> <p>Hi <a href="https://www.kaggle.com/cody11null" target="_blank">@cody11null</a> , I added some ablations to the post. If you are interested, any questions are welcome! </p> <p>Hi <a href="https://www.kaggle.com/cody11null" target="_blank">@cody11null</a> , thank you for asking this. Take kidney 3 sparse as an example. My pseudo label process contains 3 steps: </p> <ol> <li>Run inference on the sparse kidney and <strong>save the mask in float format</strong>. Don't use any threshold to binarize it yet!</li> <li>Calculate the positive pixels. It's a simple <code>np.sum</code> operation.</li> <li>Search the threshold, compute TP, FP and FN, and find the threshold that best matches the host's description (85% for kidney 3 sparse). Supposing the model finds all the targets perfectly, all the FP pixels on the sparse label can be treated as pseudo labels (a sketch of this search is given after the table below).</li> </ol> <p>The reason I started with kidney 3 instead of kidney 2 is that half of kidney 3 has dense annotations. According to the host's paper, if a model is trained and tested on the same kidney, the dice is super high. So I think the models can predict very well on kidney 3.</p> <p>When making the pseudo labels on kidney 2, I couldn't find an appropriate threshold that perfectly meets the 65% sparsity. Hence, I trained different models with thresholds of 0.1, 0.15 and 0.2, respectively, for better diversity in ensembling. But according to my final few submissions, I think the pseudo threshold doesn't matter too much.</p> <p>The UNet-SeResNext101 in my pipeline is somehow fitting (maybe overfitting) the private LB. Probing with different resolutions and thresholds several times, it can achieve 0.78 on the private LB. 
</p> <table> <thead> <tr> <th>Resolution</th> <th>Threshold</th> <th>public</th> <th>private</th> </tr> </thead> <tbody> <tr> <td>512</td> <td>0.2</td> <td>0.820</td> <td>0.753</td> </tr> <tr> <td>512</td> <td>0.15</td> <td>0.816</td> <td>0.757</td> </tr> <tr> <td>512</td> <td>0.1</td> <td>0.811</td> <td>0.766</td> </tr> <tr> <td>384+512</td> <td>0.1</td> <td>0.807</td> <td>0.778</td> </tr> <tr> <td>384+512</td> <td>0.12</td> <td>0.811</td> <td>0.775</td> </tr> <tr> <td>384+512</td> <td>0.08</td> <td>0.801</td> <td>0.780</td> </tr> </tbody> </table>
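<p>A hedged sketch of the threshold search described in the pseudo-labeling steps above. My reading is that the threshold is chosen so that the fraction of predicted-positive pixels overlapping the sparse annotation matches the host-stated ratio (85% for kidney_3_sparse); the function name, search grid and exact criterion are illustrative assumptions rather than the author's exact code:</p> <pre><code>import numpy as np

def search_pseudo_threshold(prob, sparse_mask, target_ratio=0.85):
    # Find the probability threshold whose predictions overlap the sparse
    # labels in roughly the host-stated proportion; the remaining "false
    # positives" can then be treated as pseudo labels.
    best_t, best_gap = 0.5, float("inf")
    for t in np.arange(0.05, 0.96, 0.05):
        pred = prob &gt; t
        tp = np.logical_and(pred, sparse_mask &gt; 0).sum()
        ratio = tp / max(pred.sum(), 1)
        gap = abs(ratio - target_ratio)
        if gap &lt; best_gap:
            best_t, best_gap = t, gap
    return best_t
</code></pre>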
SenNet + HOA - Hacking the Human Vasculature in 3D
🥈24th Place Solution(A potential solution for achieving private 0.70)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SenNet + HOA - Hacking the Human Vasculature in 3D <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>TLDR</h1> <p>The key factors in enhancing the model's robust performance included <strong>Tversky loss</strong>, <strong>increased inference size</strong>, and <strong>resolution augmentation</strong>. These elements ultimately played a significant role in the model's survival during shakeups. The final model is an ensemble composed of 2D U-Net models based on the RegNetY-016 architecture</p> <h1>Interesting Point</h1> <p>It was a truly challenging competition to secure a reliable validation set (which, unfortunately, I couldn't achieve). Upon reviewing the results, I discovered a significant difference between the public and private sets. Surprisingly, there were submissions from the past that would have made it into the gold zone. It's astonishing, considering I didn't think much of those submissions and didn't end up submitting them anyway.<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F8251891%2Fe5a2fef330678d164bc8ff54d06c3e34%2F2024-02-07%2021-58-54.png?generation=1707310804960891&amp;alt=media"></p> <p>In my experiments, I found that giving a higher weight to the positive class in the Tversky loss (with a larger beta) resulted in better performance on the private leaderboard. However, for my final submission, where I trained with a smaller beta value in the Tversky loss, the model performed well on the public leaderboard but surprisingly poorly on the private leaderboard. <strong>This discrepancy may be attributed to the higher resolution of the private dataset, which likely contained more detailed input information. Consequently, it suggests that a lower threshold for the model logits might have been more appropriate in this context. Furthermore, scaling the image (1.2, 1.5, …) during inference seemed to dilute input information, reducing the impact of resolution.</strong></p> <h1>Data Processing</h1> <p>Three consecutive slices were used as input for the model, and training was performed with the original image size. Despite experimenting with an increased number of slices (5, 7, 9…), the results revealed a decrease in performance.</p> <p>In the early stages of the competition, I trained the model by resizing the data to a specific size. However, as I mentioned in the <a href="https://www.kaggle.com/competitions/blood-vessel-segmentation/discussion/463121" target="_blank">discussion</a>, the Resize function in the Albumentations library applies nearest interpolation to masks, causing significant noise in fine labels and resulting in a notable decrease in performance. Therefore, I opted to use the original image size.</p> <h1>Augmentation</h1> <p>Awaring of the different resolutions between public and private data, I aimed to create a model robust to resolution variations. Additionally, understanding the existence of resizing during the binning process, I employed blur augmentation. 
</p> <pre><code>def blur_augmentation(x): h, w, = x.shape = ..uniform(, ) x = A.Resize(int(h*), int(w*))(=x)[''] x = A.Resize(h, w)(=x)[''] x </code></pre> <p>Moreover, since the channels were constructed by stacking depth, I applied the following augmentations.</p> <pre><code>def channel_augmentation(x, =0.5, =3): assert x.shape[2]==n_channel np.random.rand()&lt;prob: x = np.flip(x, =2) return x </code></pre> <p>Finally, given the inherent noise in the data annotations, I implemented strong cutout augmentation to prevent overfitting.</p> <pre><code>A.Cutout(=8, =128, =128, =0.8), </code></pre> <p>These approaches effectively contributed to performance improvement in both CV and LB.</p> <h1>Model</h1> <p>The model, like most others, employed a 2D U-Net architecture. For the CNN backbone, I utilized the lightweight model RegNetY-016. Other than that, the settings remained consistent with the default values in SMP (Segmentation Models PyTorch).</p> <p>Despite investing significant time in developing a 3D-based model, it failed to demonstrate notable score improvements in both CV and LB. Due to resource constraints, I shifted my focus to a 2D model.</p> <pre><code>class CustomModel(nn.Module): def __init__(self): super(CustomModel, self).__init__() self.n_classes = 1 self.in_chans = 3 self.encoder = timm.create_model( , =, =, =self.in_chans, ) encoder_channels = tuple( [self.in_chans] + [ self.encoder.feature_info[i][] i range(len(self.encoder.feature_info)) ] ) self.decoder = UnetDecoder( =encoder_channels, decoder_channels=(256, 128, 64, 32, 16), =5, =, =, =None, ) self.segmentation_head = SegmentationHead( =16, =self.n_classes, =None, =3, ) self.train_loss = smp.losses.TverskyLoss(=, =0.1, =0.9) self.test_loss = smp.losses.DiceLoss(=) def forward(self, batch, =): x_in = batch[] enc_out = self.encoder(x_in) decoder_out = self.decoder(*[x_in] + enc_out) x_seg = self.segmentation_head(decoder_out) output = {} one_hot_mask = batch[][:, None] training: loss = self.train_loss(x_seg, one_hot_mask.float()) : loss = self.test_loss(x_seg, one_hot_mask.float()) output[] = loss output[] = nn.Sigmoid()(x_seg)[:, 0] return output </code></pre> <h1>Train</h1> <ul> <li>Scheduler : lr_warmup_cosine_decay </li> <li>Warmup Ratio : 0.1</li> <li>Optimizer : AdamW </li> <li>Weight Decay : 0.01</li> <li>Epoch : 20</li> <li>Learning Rate : 2e-4</li> <li>Loss Function : TverskyLoss(mode='binary', alpha=0.1, beta=0.9)</li> <li>Batchsize : 4</li> <li>Gradient Accumulation : 4</li> </ul> <h1>Inference</h1> <p>Scaling the image size by 1.5x during inference consistently resulted in score improvements in CV, public LB, and private LB. This acted as a form of dilation, significantly reducing false negatives and enhancing the model's performance. <strong>Simply increasing the image size by 1.2 for inference resulted in a 0.1 improvement on the private leaderboard.</strong></p> <h1>Didn't Work</h1> <ul> <li>I attempted to enhance the utility of kidney2 and kidney3 through pseudo-labeling, but it did not result in significant score improvement.</li> <li>Efforts to create a more robust model using heavier augmentations did not yield substantial effects.</li> <li>Experimenting with larger CNN models led to issues of overfitting.</li> </ul> <h1>Code</h1> <p><a href="https://github.com/siwooyong/SenNet-HOA-Hacking-the-Human-Vasculature-in-3D" target="_blank">https://github.com/siwooyong/SenNet-HOA-Hacking-the-Human-Vasculature-in-3D</a></p>
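<p>A short sketch of the inference-time upscaling described above (predict on an enlarged slice, then resize the logits back); the tensor layout and interpolation settings are illustrative assumptions:</p> <pre><code>import torch
import torch.nn.functional as F

def predict_upscaled(model, image, scale=1.5):
    # image: (1, C, H, W) tensor; enlarging the input acts like a dilation
    # and was reported to reduce false negatives.
    _, _, h, w = image.shape
    up = F.interpolate(image, scale_factor=scale, mode="bilinear", align_corners=False)
    with torch.no_grad():
        logits = model(up)
    return F.interpolate(logits, size=(h, w), mode="bilinear", align_corners=False)
</code></pre>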
Severstal: Steel Defect Detection
10th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you, Chris! Frankly, I haven't focused that much on the thresholds specifically, I have mostly been looking at the results of response/area thresholds grid search as a sanity check, to make sure that optimal settings are in "reasonable" range (response threshold ~0.5, area threshold ~500 for defect1/2, 1000 for defect3 and 1500-2000 for defect4) and that metric is changing smoothly (wide plateau instead of a sharp peak) around the optimal settings.</p> <p>That being said, I did notice that submission size for defect3-only submission went down noticeably towards the end of competition. I had concerns that my validation scheme is too conservative and when possible tried to resolve ties between settings in a way that increases the produced submission size, in particular my best submission had threshold 0.45 for defect3. But this ensemble was trained with pseudo-labels, so it might've been too careful on its own, and perhaps lower threshold was just balancing that.</p> <p>After reading few people reporting that higher thresholds result in better private LB score I also wanted to try submissions with higher thresholds, but didn't manage to make any before submissions got disabled, so I guess I'll have to wait 2 more weeks to check that.</p> <p><strong>Overall approach</strong> 1) Separate models for different defects (since the number of classes is small - it is feasible) * + can use completely different architectures/pipelines/samplers etc * + can combine trained models for different defects instead of retraining a single model * + encourages to decompose the problem (4-defect detection) into smaller subproblems (4 single-defect detections) – it allows to focus on perfecting them separately, without the fear of spoiling results of a different subproblem * + easier to reuse code for binary segmentation * + no need to balance defect class losses * - wastes valuable submission time → smaller size/number of models can be ensembled * - more training time * - no potential for synergy between detectors of different defect types</p> <p>Doing this was a “greedy solution” - I already had experience with binary segmentation, and I’m a bit sad that I missed out on a chance to learn more new stuff within the scope of this competition. On the other hand, it maximized my chances of winning, so if I had to make a decision again – I would do the same thing. Besides, I can still try to match/beat the benchmark of my current approach with a single model for multiple defects after the end of the competition, even though there would be less motivation for doing that :)</p> <p>2) Not predicting defect2 * + less models → less training / research / inference time * + trying to predict defect2 was always decreasing the pubLB for me, so not predicting it → higher score * - if private test set contains more instances of images with defect2 – score would degrade</p> <p>Initially I was going to predict only defect3 and defect4, but at some point I hit a wall in trying to improve my score, and tried predicting defect1 and defect2. 
Surprisingly for me, defect1 allowed a noticeable gain in score, even though it was less than a half from theoretically possible one, so I decided to keep it. Defect2, on the other hand, was even more rare, and I have been unable to get any gain from predicting it. Anyway, to protect myself against a scenario where it is more frequent in private test set, I have included a model for defect2, tuned to produce predictions only when it is very confident, and reserved a second selected submit for it. After the competition deadline it turned out that private test set didn’t seem to contain more of defect2, but I think that it was still a reasonable use of a second submit selection.</p> <p>3) Training on full size images * + model has more context → easier to make a correct decision (avoid a false positive/negative) * + no need for an extra fine-tuning stage on full images / less domain mismatch if no fine-tuning is done * - more time and GPU RAM is required → less experiments, smaller batch size * - less “unique” samples seen by model → more potential overfitting</p> <p>I’ve tried training on crops as well as on full sized images, and in all my experiments training on crops was inferior. I’ve seen that some solutions mention that when they were cropping images with defects they constrained the crops to contain defect – it might be the key to proper training on crops, but I haven’t tried it.</p> <p>4) 3-fold validation * + larger size of validation set → more stable metric estimates * +“less similar” models → ensembling is more beneficial * +- less models produced – this is ok, since I am anyway constrained in how many models I can fit into prediction time quota * - smaller size of training set → potentially weaker individual models <br> * - longer validation time</p> <p>This might have been important for avoiding negative consequences of shake-up in this competition. Keeping in mind that my validation set is 2x larger than public test set, I was highly reluctant to base any decisions about my solution on the impact on public leaderboard score, and was “trusting“ only changes that improve both CV and pubLB. At some point I started to worry that I’m overfitting to pubLB anyway, since adding more models to my ensemble decreased pubLB, indicating that I got lucky with my submit, and small variations of predictions caused a drop in pubLB. I decided to add RandAugment-type (more details in Augmentations section) to the validation set, hoping that this way I would select models/hyperparams that are more robust under different conditions, besides it would allow artificially increasing the size of validation set.</p> <p><strong>Encoder</strong> SE ResNeXt-50 / EfficientNet-b3, pretrained on ImageNet. First model still seems like a good tradeoff between size/speed and accuracy, and efficientnet is relatively new and I wanted to try using it for segmentation tasks. Ironically, its implementation in PyTorch is not very efficient (at least, compared to the one in TensorFlow, see <a href="https://github.com/lukemelas/EfficientNet-PyTorch/issues/19">this github issue</a>). This limited usage of “upscaled“ versions of it, and I settled on b3, which was close to seresnext50, although the batch size that I was able to fit into GPU RAM was smaller (3 vs 4) and it potentially decreased batchnorm performance.</p> <p><strong>Model</strong> Experimented with Unet, FPN and PSPNet from great <a href="https://github.com/qubvel/segmentation_models.pytorch">Segmentation models</a> by <a href="/pavel92">@pavel92</a> . 
Unet and PSPNet performed noticeably worse with default settings and were discarded from the experiments at the early stages of the competition.</p> <p>I compared FPN with several variations that I considered to be reasonable, listed below. FPNA: vanilla FPN. FPNB: take not the 4 uppermost layers of the encoder, but the 4 lowermost. The intuition was that if a defect is a local artifact, then using stronger semantic information is unnecessary, and using higher-resolution layers instead would make it easier to locate small defects. In my experiments it outperformed FPNA at 128x800 resolution, but underperformed at full resolution – perhaps the receptive field was insufficient. FPNC: take all (5) levels of the encoder; this might help to handle different scale levels properly. Unfortunately the increase in GPU RAM usage and training time didn’t translate into an improvement in performance in my tests. FPND: instead of summing outputs of encoder layers, concatenate them (reducing the number of channels in decoder layers’ output to keep the final number of channels the same) - this is similar to hypercolumns, which worked well in the TGS competition. This version seemed to be comparable with FPNA, and outperformed it in defect4 detection, perhaps due to the fact that this defect had significantly larger scale variation compared to the other 3.</p> <p>I experimented a bit with deep supervision (with an aux output for each defect class presence), and even though I initially got some improvement using this scheme, I later managed to match this result without DS, and decided to drop it to avoid additional complexity.</p> <p>I have also tried to train classifiers for rejecting empty images before segmentation to avoid false positives, but haven’t been able to get more accuracy than I already had with segmentation models; in the end the only classifier that I used was for defect1 (taken from Heng CherKeng’s starter guide) – it seemed to have better precision than my defect1 models.</p> <p><strong>Augmentations</strong> Did a “grid search” for optimal augmentations, training the same small model with a single augmentation for a fixed number of epochs and validating without augmentations. Surprisingly, “base augmentation” (no augmentations) was always better, even compared to horizontal flip. This was not something I had encountered before, and my main guess was a lack of model capacity to handle the variation after introducing augmentations. The proper way of testing it would have been to run the experiment again with a bigger model, but I didn’t have time/resources to do it properly, so I decided to make an assumption that horizontal flip doesn’t destroy important information and shouldn’t lead to overgeneralization, and that other “generic” augmentations should be treated carefully.</p> <p>I also tried a few “custom“ augmentations: “SteelShift” (cyclically shift the image, handling the steel sheet border separately) to simulate capturing the steel sheet at a different time on a moving conveyor belt, and “BorderLoc” (encode distance to the steel sheet border in a separate channel; black areas and images with no black areas are handled separately). 
The motivation for adding 2 custom augmentations targeted at images with steel sheet borders was the fact that, even though there was a correlation between CV and pubLB, CV was always better, and the only significant difference between the 2 sets was the fraction of images where the steel sheet border was visible.</p> <p>The impact of these 2 “custom” augmentations was somewhat hard to assess. I included some models with “SteelShift” into the final ensemble to increase the diversity of predictions, and didn’t include “BorderLoc”, because it would require either applying it to all images (and discarding models trained without it) or rewriting the inference code to apply different augmentations to different models in the ensemble.</p> <p>I ended up with the following augmentation “modes”:</p> <p>a) only horizontal flip b) <a href="https://arxiv.org/abs/1909.13719">RandAugment</a> - up to 2 augmentations chosen randomly from the list below (<a href="https://github.com/albu/albumentations">albumentation</a> names): * HorizontalFlip * VerticalFlip * ISONoise * IAAAdditiveGaussianNoise * CoarseDropout * RandomBrightness * RandomGamma * IAASharpen * Blur * MotionBlur * RandomContrast</p> <p><strong>Loss</strong> 0.75 * BCE + 0.25 * LovaszHinge. Lovasz worked significantly better than Dice. Perhaps a separate fine-tuning stage where only Lovasz was used would have been a good idea, but I never got around to testing that. I also wanted to try replacing BCE with Focal, but didn’t have time and was also worried about proper coefficient balancing – focal is usually significantly smaller than CE, and the magnitude might change during training.</p> <p><strong>Post-processing</strong> - binarize averaged predictions using a threshold determined on the validation set - zero out masks with a number of predicted pixels less than a threshold, also determined on the validation set</p> <p>I have also thought about filling holes and removing small components in predictions, but after looking at predictions on the public test set decided that it would not change the score much, but might backfire in an unexpected way on unseen data.</p> <p><strong>Pseudo-labeling</strong> After many teams started to score more than 0.920 on pubLB it became evident that at least some of them were using semi-supervised learning. It didn’t seem to be extremely beneficial, because the size of the public test set in this competition wasn’t very big compared to the train set size, but it should at least help to reduce the domain mismatch between the train and public test sets, so I decided to give it a go after plateauing in score on pubLB. First I tried to include all predicted test negative samples (this was easier, since I didn’t need the masks) into the train set (keeping folds unchanged and not including pseudo-labeled images into the validation set, to be able to track improvement) and fine-tune the models I had using this data. It didn’t help much, so I moved on to including all predicted test set images the same way into the training set and training from scratch. This brought a single model’s score up to the ensemble score on pubLB, but ensembling such models didn’t provide any improvement – it made sense, since most likely the models were just “remembering” masks for all the samples from the public test set, and there was no variation between predictions on the public test set images. Finally, I decided to do it the “proper” way, and included only confident samples into the training set. 
I measured confidence similarly to the <a href="https://www.kaggle.com/c/tgs-salt-identification-challenge/discussion/69291#latest-592781">approach of the winners of the TGS competition</a> - count the number of pixels with confidence &lt; 0.2 or &gt; 0.8, and consider images that have more than N confident pixels to be reliable (a minimal sketch of this selection is given after this write-up). I’ve noticed that no matter how certain/good the predictions are, there is always an uncertain region along the predicted defect boundary (which makes sense, since the ground truth annotation was fairly arbitrary in terms of boundary). To avoid this uncertain area affecting the “prediction confidence” estimate, I applied morphological operations to “grow” confident regions – this allowed “closing” uncertain areas along the boundaries, while still keeping the “really” uncertain defect predictions. This was done differently for each defect type, since they had different areas and boundary uncertainties. This approach helped me to move past 0.917 on pubLB to 0.919, with most of this change attributed to improved predictions for defect3. I have also applied pseudo-labeling to defect1 and defect4, but was unable to get an improvement for defect1 (makes sense, since my predictions for it were far from perfect), and got unclear results for defect4 (not really sure why, maybe due to the more skewed defect/nondefect distribution of the data).</p> <p><strong>Closing thoughts</strong> Thanks to the organizers of this competition and all participants - it was a really fun and intense experience! This is the second competition where I have been using the <a href="https://dvc.org/">DVC</a> and <a href="https://github.com/albu/albumentations">albumentations</a> libraries, and the first using <a href="https://github.com/catalyst-team/catalyst">Catalyst</a>, <a href="https://github.com/qubvel/segmentation_models.pytorch">Segmentation Models</a> and <a href="https://app.wandb.ai/">Weights &amp; Biases</a> - I'm really enjoying these; if you haven't tried them yet, I strongly encourage you to.</p>
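<p>A minimal sketch of the confident-pixel pseudo-label selection described in the pseudo-labeling section above, assuming 2D per-class probability maps. The 0.2/0.8 confidence band comes from the write-up; the function name, the use of <code>scipy</code> binary dilation as the morphological "grow" step, the number of dilation iterations and the pixel-count threshold are illustrative assumptions, not the author's exact settings.</p> <code>
from scipy import ndimage


def is_reliable(prob_mask, low=0.2, high=0.8, grow_iter=5, min_confident_pixels=380_000):
    """prob_mask: 2D NumPy array of predicted probabilities for one image and one defect class."""
    confident = (prob_mask &lt; low) | (prob_mask &gt; high)
    # "Grow" confident regions so the thin uncertain band along predicted defect
    # boundaries does not disqualify an otherwise confident prediction.
    grown = ndimage.binary_dilation(confident, iterations=grow_iter)
    return grown.sum() &gt;= min_confident_pixels


# Keep only test images whose averaged prediction is confident enough, e.g.:
# reliable_ids = [img_id for img_id, prob in test_probs.items() if is_reliable(prob)]
</code>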
Severstal: Steel Defect Detection
12th place solution overview
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>It's an English transliteration of a Russian word; it basically means "savvy" or "mother wit".</p> <p>Team: <a href="/delpro">@delpro</a> <a href="/pavel92">@pavel92</a> <a href="/ilyadobrynin">@ilyadobrynin</a> We finally joined up on the last day of the team merger deadline and started working on a joint solution. At that time we had the quite popular approach with classification as the first step (to remove most of the non-defective images) and segmentation as the second step for defect localization. But in the end we came up with a 3-step solution. Our multilabel FPN and PSPNet segmentation models did not perform so well in pixel-level segmentation, but worked great for reducing the number of false positive predictions after the first classification step, so we decided to use them as an intermediate classification step.</p> <p>Below you will find a more detailed explanation of each step.</p> <h3>Stage 1. Multilabel classification</h3> <p>Implemented by <a href="/ilyadobrynin">@ilyadobrynin</a> On Stage 1 there was a multilabel classifier to detect images with at least one type of defect.</p> Model: <ul> <li>Best 3 of 5 folds Senet154 trained on resized images (128x800) with simple BCE loss</li> </ul> Augmentations: <ul> <li>Normalization</li> <li>Resize</li> <li>h-flip, v-flip</li> <li>no TTA</li> </ul> <p>After training we found binarization thresholds by maximizing the f1-score for the given class, so each class has its own threshold. It allowed us to exclude almost half of the images and speed up inference.</p> <h3>Stage 2. Multilabel segmentation</h3> <p>Implemented by <a href="/delpro">@delpro</a> <a href="/ilyadobrynin">@ilyadobrynin</a> On Stage 2 there was some overfit magic. There we had a mean ensemble of the multilabel segmentation nets: - 4 folds of PSPNet on se_resnext101_32x4d (Awesome Qubvel implementation), trained on full images with BCE + Jaccard loss; - Custom FPN on senet154 trained on crops (256x256) with simple BCE loss - Custom PSPNet on senet154 trained on crops (256x256) with simple BCE loss</p> Training <ul> <li>Pretrain on crops (256x256), batch size 32, BCE + Jaccard</li> <li>Fine-tune on full size with 0.1 * lr, batch size 4 with gradient accumulation up to 20 images</li> </ul> Augmentations: <ul> <li>CropNonEmptyMaskIfExists: crop the image around a defect area if one exists, else make a random crop (we contributed this transformation to <a href="https://github.com/albu/albumentations/pull/342">Albumentations</a> during the competition)</li> <li>Normalization</li> <li>h-flip, v-flip</li> <li>only h-flip on TTA</li> </ul> Details: <p>There was a trick in the custom FPN and PSPNet training. First of all, we trained a multilabel Senet154 classifier on the crops. After that we used this classifier as a backbone for the segmentation models. It sped up training significantly with the same quality.</p> Post processing: <p>First of all, we removed small objects and holes from the image. Then we assume that there are no objects if the sum of positive pixels is less than a minimum threshold (unique for each class). It boosted our score on the public leaderboard, but could cause overfitting. 
At this stage we did no threshold optimization for the ensemble of models, since the default thresholds gave us a better result. The second stage allowed us to remove many of the false positive images.</p> <h3>Stage 3. Binary segmentation</h3> <p>Implemented by <a href="/pavel92">@pavel92</a> On Stage 3 we use binary segmentation models trained on non-empty masks for each class: 2 Unet(seresnext50), 2 FPN(seresnext50), 1 Unet(SeNet154)</p> Training: <ul> <li>loss: SoftDice + BinaryFocal(gamma=2.)</li> <li>optimizer: RAdam</li> <li>scheduler: reduce on plateau</li> <li>sampling: non-empty</li> <li>crop: 256x768</li> </ul> Augmentations: <ul> <li>Normalization</li> <li>h-flip, v-flip</li> <li>rotate180</li> <li>random brightness/contrast</li> <li>jpeg compression</li> <li>random scale (limit=0.1)</li> </ul> <h3>Tips:</h3> <ul> <li>Pseudolabels. For the classification step we used the most confident predictions from the previous submission. The confidence score for each image: <code>np.mean(np.abs(np.subtract(prob_cls, 0.5)))</code>, where prob_cls is the probability for each class on the image</li> <li>Use <code>torch.jit</code> to serialize your models (it helps a lot to transfer models to Kaggle without pain)</li> <li>A trick that improved model quality by about 0.3-0.5 percentage points for all models - averaging the weights of the best checkpoints (<strong>weights!, not predictions</strong>). During training the 5 best checkpoints were saved, and then different combinations of them were evaluated to find the best candidates to average (a short sketch of this weight averaging is given after this write-up)</li> </ul> <h3>What did not work:</h3> <ul> <li>binary classification on Stage 1</li> <li>multilabel segmentation on Stage 3</li> <li>multiclass segmentation with aux classification output</li> <li>gated multiclass segmentation with aux classification output and <code>mask = mask_output * cls_output</code> (during training and inference)</li> <li>we also tried to fuse conv+batchnorm to reduce model inference time, but got only minor improvements; for <code>se</code> models it is just about a ~5% speedup :(</li> </ul> <h3>Useful libs you have to know:</h3> <ul> <li>[PyTorch] <a href="https://github.com/qubvel/segmentation_models.pytorch">https://github.com/qubvel/segmentation_models.pytorch</a></li> <li>[PyTorch] <a href="https://github.com/qubvel/ttach">https://github.com/qubvel/ttach</a></li> <li>[Keras] <a href="https://github.com/qubvel/segmentation_models">https://github.com/qubvel/segmentation_models</a></li> <li>[Keras] <a href="https://github.com/qubvel/efficientnet">https://github.com/qubvel/efficientnet</a></li> <li>[Keras] <a href="https://github.com/qubvel/tta_wrapper">https://github.com/qubvel/tta_wrapper</a></li> <li>[Everywhere] <a href="https://github.com/albu/albumentations">https://github.com/albu/albumentations</a></li> </ul> <h3>Useful papers:</h3> <ul> <li>Gradient accumulation: <a href="https://medium.com/huggingface/training-larger-batches-practical-tips-on-1-gpu-multi-gpu-distributed-setups-ec88c3e51255">https://medium.com/huggingface/training-larger-batches-practical-tips-on-1-gpu-multi-gpu-distributed-setups-ec88c3e51255</a></li> <li>SWA: <a href="https://towardsdatascience.com/stochastic-weight-averaging-a-new-way-to-get-state-of-the-art-results-in-deep-learning-c639ccf36a">https://towardsdatascience.com/stochastic-weight-averaging-a-new-way-to-get-state-of-the-art-results-in-deep-learning-c639ccf36a</a></li> <li>Pseudolabels implementation: <a href="https://arxiv.org/abs/1904.04445">https://arxiv.org/abs/1904.04445</a></li> </ul> <p>Special thanks 
to my awesome teammates!</p> <p>And, besides, with the tip from Stage 2 - when you take a pretrained classifier from Stage 1 and use it as a backbone for segmentation, it really speeds up the training process. So we decided: if we can train as many models as we want, why not?</p> <p><a href="/insaff">@insaff</a> Just the 1st and 2nd stages gave us around 0.91850 on public LB (we don't know the score on private LB). After the final team merge we decided to profit from our combined approaches and add the 3rd stage from @qubvel. It boosted us up to 0.92182 on public LB. Actually, our best model with a slightly modified 1st stage gave us 0.90778 on private LB.</p> <p>We use Stage 2 to remove false positives coming out of Stage 1. Actually, its segmentation masks are not very good and we don't use them further in the pipeline. On Stage 3 there are binary segmentation models trained on the non-empty masks for each class. These models get a higher dice score on masks with defects. The problem here is that if we have an empty mask, the model will probably predict false positive objects on it (since there were no empty masks in training). But thanks to Stage 2, we have only a few empty masks on Stage 3 and everything works well.</p>
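<p>A minimal PyTorch sketch of the "average best checkpoint weights, not predictions" tip from this write-up. The function name and file paths are placeholders, the checkpoints are assumed to store plain state_dicts, and keeping non-floating-point buffers from the first checkpoint is my assumption about the details, not the team's exact code.</p> <code>
import torch


def average_checkpoints(paths):
    # Average model *weights* (state_dicts), not predictions.
    states = [torch.load(p, map_location="cpu") for p in paths]  # assumes plain state_dicts
    avg = {k: v.clone() for k, v in states[0].items()}
    for k, v in avg.items():
        if v.is_floating_point():
            for state in states[1:]:
                v += state[k]
            v /= len(states)
        # Integer buffers (e.g. BatchNorm's num_batches_tracked) are kept from the first checkpoint.
    return avg


# model.load_state_dict(average_checkpoints(["best_1.pth", "best_2.pth", "best_3.pth"]))
</code>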
Severstal: Steel Defect Detection
13th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><ul> <li>classification Adam, steplr scheduler base: 1e-3 -&gt; 1e-4 linear: 1e-4 -&gt; 1e-5</li> <li>segmentation Adam, steplr scheduler encoder: 1e-4 -&gt; 5e-5 -&gt; 1e-5 decoder: 1e-4 -&gt; 5e-5 -&gt; 1e-5</li> </ul> <p>Congratulations to you too!</p> <p>I used a TITAN RTX in this competition, so it may have been possible to increase the batch size.</p> <ul> <li>batch size: 16</li> <li>accumulation step: 1</li> </ul> <p>First of all, thanks to Severstal for hosting this competition. And congrats to the winners!</p> <p>I was very surprised by such a huge shake-up, and I am very fortunate to get a solo gold medal. I only worked on this competition for about 10 days, so I haven't done many experiments and my solution is not so special. I was just very lucky.</p> <p>In short, my solution is just an ensemble of multiple models. From my experience so far, I think it is very important to ensemble various models to build a robust solution.</p> <h3>Classification</h3> <ul> <li>models <ul><li>3x EfficientNet-b4 (first 3 of stratified 10 folds)</li></ul></li> <li>input <ul><li>full size (256 x 1600)</li></ul></li> <li>augmentations <ul><li>random crop rescale</li> <li>hflip, vflip</li> <li>random contrast, random gamma, random brightness</li></ul></li> <li>TTA <ul><li>none, hflip</li></ul></li> <li>threshold label <ul><li>[0.5, 0.5, 0.5, 0.5]</li></ul></li> </ul> <h3>Segmentation</h3> <ul> <li>models <ul><li>EfficientNet-b3 Unet stratified 4fold w/ full size image</li> <li>EfficientNet-b3 Unet 1fold w/ random crop 256 x 800</li> <li>3x Unet from <a href="https://www.kaggle.com/lightforever/severstal-mlcomp-catalyst-infer-0-90672">mlcomp + catalyst infer kernel</a> from @lightforever </li></ul></li> <li>augmentations <ul><li>same as classification</li></ul></li> <li>loss function <ul><li>BCEDice (bce weight=0.75, dice weight=0.25); a minimal sketch of this loss is given after this write-up</li></ul></li> <li>TTA <ul><li>none, hflip</li></ul></li> <li>threshold mask <ul><li>[0.5, 0.5, 0.5, 0.5]</li></ul></li> <li>postprocess</li> </ul> <h3>Predictions on public LB</h3> <p>defect 1 = 82 (128) defect 2 = 5 (43) defect 3 = 601 (741) defect 4 = 110 (120)</p>
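<p>A minimal sketch of the BCEDice loss used in this write-up (BCE weight 0.75, Dice weight 0.25). The write-up only gives the two weights, so the exact Dice formulation here (smoothing term, soft Dice on sigmoid probabilities, per-image/per-class reduction) is an assumption.</p> <code>
import torch
import torch.nn.functional as F


def bce_dice_loss(logits, targets, bce_w=0.75, dice_w=0.25, smooth=1.0):
    # logits/targets: (batch, classes, H, W); targets are float 0/1 masks.
    bce = F.binary_cross_entropy_with_logits(logits, targets)
    probs = torch.sigmoid(logits)
    dims = (2, 3)  # reduce over H and W, keep batch and class dims
    intersection = (probs * targets).sum(dims)
    union = probs.sum(dims) + targets.sum(dims)
    dice = (2.0 * intersection + smooth) / (union + smooth)
    return bce_w * bce + dice_w * (1.0 - dice.mean())
</code>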
Severstal: Steel Defect Detection
14th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks Chris. I learned a lot from you in that IG comp. The reason for using relatively heavy augmentations was to try to reduce the effect of the domain mismatch that Heng mentioned.</p> <p>I tried, but then I could only use batch size = 3 or less. In the TGS comp it was okay because the images were 101x101.</p> <p>I just experimented with several things that Less Wright did here: <a href="https://medium.com/@lessw/res2net-new-deep-learning-multi-scale-architecture-for-improved-object-detection-with-existing-de13095c9654">https://medium.com/@lessw/res2net-new-deep-learning-multi-scale-architecture-for-improved-object-detection-with-existing-de13095c9654</a></p> <p>I don't know if deep-stem is actually better than 7x7 or not because I didn't do any direct comparison.</p> <p>Yeah, it should work when all your models are using hinge loss. But it might also be a problem when you try to ensemble a model trained with hinge loss and one with cross entropy. </p> <p>Replace the first 7x7 conv with several 3x3 convs. </p> <p><a href="https://www.kaggle.com/c/tgs-salt-identification-challenge/discussion/66924">https://www.kaggle.com/c/tgs-salt-identification-challenge/discussion/66924</a></p> <p>Sigmoid outputs on logits of a hinge loss are not posterior probabilities. Think about SVM.</p> <p>Maybe? I tried slower augmentations... After removing some of them, I thought the speed was okay so I didn't investigate further.</p> <p>Lovasz from the start.</p> <p>This time, I was trying to be more methodological instead of randomly picking up ideas from papers. I spent more time thinking and digging into why my current model was doing badly before training a new model. Plus some luck, I finally got my first gold.</p> <h1>Data augmentation and pre-processing</h1> <p>I used <code>albumentations</code> to do data augmentation: <code>
def aug_medium(prob=1):
    return aug.Compose([
        aug.Flip(),
        aug.OneOf([
            aug.CLAHE(clip_limit=2, p=.5),
            aug.IAASharpen(p=.25),
        ], p=0.35),
        aug.OneOf([
            aug.RandomContrast(),
            aug.RandomGamma(),
            aug.RandomBrightness(),
        ], p=0.3),
        aug.OneOf([
            aug.ElasticTransform(alpha=120, sigma=120 * 0.05, alpha_affine=120 * 0.03),
            aug.GridDistortion(),
            aug.OpticalDistortion(distort_limit=2, shift_limit=0.5),
        ], p=0.3),
        aug.ShiftScaleRotate(rotate_limit=12),
        aug.OneOf([
            aug.GaussNoise(p=.35),
            SaltPepperNoise(level_limit=0.0002, p=.7),
            aug.ISONoise(p=.7),
        ], p=.5),
        aug.Cutout(num_holes=3, p=.25),
    ], p=prob)
</code> Then I took 256x512 crops from the augmented image. Note that many images contain large black areas, and there is always a steel part shown on at least one end of an image. Simple random crops would make training inefficient, so here is what I did: 1. Pick a random crop. 2. If the proportion of pixels with values &lt; 10 is greater than 85% or the average pixel value of the crop is &lt; 15, pick the crop on either the left or right end of the image, depending on which end satisfies the criterion. If nothing works, pick the crop on the right end (a short sketch of this crop selection is given after this write-up).</p> <h1>Balanced Sampling</h1> <p>To fight against imbalanced classes, I used balanced sampling. I don't like the approach of randomly picking a defect type (or non-defect) with equal probability. 
It would make the concept of an epoch arbitrary. I prefer a deterministic approach, so in each epoch, I did something like <code>0 1 2 3 4 0 1 2 3 4 0 1 2 3 4 ...</code> till every single non-defect (class 0) was sampled once.</p> <h1>Model</h1> <h2>Model 1:</h2> <p>End-to-end classification + segmentation asymmetric U-Net on training set only. Backbone: se-resnext50 32x4d Decoder: CBAM attention and hyper-columns. I thought that the last decoder (and maybe the upsample in the second last decoder as well) was kind of redundant, so I removed it. In this way, the output stride of my model became 2. This saved a lot of my GPU memory. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1605499%2F6ea80511606b3aeb6f597b7e012da580%2FSeverstal%20Model.png?generation=1572029254979627&amp;alt=media" alt=""></p> <p>I used a similar setup in the TGS salt competition too. Note that, unlike the other classes, I fed all images to the defect 3 branch instead of defect 3 images only.</p> <h2>Model 2:</h2> <p>Same as model 1, except using deep-stem and replacing every ReLU with Mish. Trained on the training set and pseudo-labels on the public test set.</p> <h1>Training</h1> <p>Loss: BCE loss for classification, symmetric Lovasz-Hinge for segmentation Optimizer: Adam Scheduler: Warmup+Flat+Cosine. 0.5 epoch linear warmup, 49.5 epochs flat at 1e-4, and 50 epochs cosine to 0 Batch size: 6 No fine-tuning on full images</p> <h1>Post-processing</h1> <p>Thresholds: since I was using hinge loss, I didn't tune segmentation thresholds Classification: [0.55, 0.99, 0.25, 0.5] Minimum pixels: [0, 0, 1200, 0] These values were determined by one fold of model 1. After the competition ended, I tried 0.5 for classification and 0 for minimum pixels for all four defects. It gave me 0.90625 on private test.</p> <p>TTA: original, hflip and vflip</p> <p>The masks are actually polygonal bounding boxes. It means that there are no holes in each part of a defect. I randomly saw <a href="https://scikit-image.org/docs/dev/user_guide/tutorial_segmentation.html">this page</a>. I decided to give <strong>Edge-based segmentation</strong> from that page a try. This post-processing consistently gave me a 0.00005-0.0001 boost on local validation, public test and private test.</p> <h1>Pseudo-labelling</h1> <p>Obtained pseudo-labels from a 3-fold ensemble of model 1. Removed defect 1 with classification output &lt; 0.85 and defect 3 with classification output &lt; 0.75. I kept the amount of test images at around 35% of the train images for each epoch.</p> <h1>Ensemble</h1> <p>2 folds of model 1 and 1 fold of model 2. Simple average before thresholding. My best single fold of model 1 gave me 0.90310 on private test.</p> <h1>Things that didn't work</h1> <p>See <a href="https://www.kaggle.com/c/severstal-steel-defect-detection/discussion/114199#657496">this post</a></p> <p>Congratulations to all winners! P.S. I am not releasing my code at this moment since the efficiency prize is still ongoing.</p> <p><code>Scheduler: Flat+Cosine. 50 epoch flat at 1e-4, and 50 epoch cosine to 0</code></p> <p>So total 100 epochs</p> <p>Well, it could work. We can grid search all thresholds (classification, segmentation, minimum pixels and minimum pixels for small object removal) according to the metric on the valid set, but... it took much longer and the output is less sensitive around 0.5 if hinge loss is used.</p> <p>And to be honest, I don't think averaging sigmoid output out of hinge loss logits is a good way to do multi-fold stacking, but I couldn't find a better way. 
I tried majority voting, but it was worse.</p>
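<p>A minimal sketch of the crop-selection heuristic from the data augmentation and pre-processing section above. The helper names and constants are mine, and interpreting the fallback as "take whichever end crop is not mostly black, defaulting to the right end" is my reading of the description, not the author's exact code.</p> <code>
import numpy as np

CROP_W = 512  # crops are 256x512; the crop height equals the full image height


def is_mostly_black(crop):
    # More than 85% of pixels below 10, or a very low mean intensity.
    return (crop &lt; 10).mean() &gt; 0.85 or crop.mean() &lt; 15


def pick_crop(img, rng=np.random):
    w = img.shape[1]
    x0 = rng.randint(0, w - CROP_W + 1)
    crop = img[:, x0:x0 + CROP_W]
    if not is_mostly_black(crop):
        return crop
    # Fall back to an end crop: steel is always visible on at least one end of the image.
    left, right = img[:, :CROP_W], img[:, w - CROP_W:]
    if not is_mostly_black(left):
        return left
    return right  # default to the right end, as in the write-up
</code>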