Dataset columns: Title of Competition (stringclasses, 343 values); Title of Writeup (stringlengths, 6 to 139); User (stringclasses, 1 value); Writeup (stringlengths, 419 to 80.4k)
Statoil/C-CORE Iceberg Classifier Challenge
How to win a gold medal with basic DL techniques and simple stacking
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Statoil/C-CORE Iceberg Classifier Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>It is called TTA (test time augmentation). When you rotate an image containing a ship, it will still contain a ship, right? :) So let's say you predict a single test image 10 times, one time for each transformation (e.g. rotation). The model will give you ten different output probabilities for a ship, then you average all of them to get a single output, but a more confident, robust one, as Nikita already said. Long story short, that's how it works ;)</p> <p>Exactly. I just found out that using the leak the way Pavel Pleskov did could have led us to 9th place on the leaderboard:</p> <p><a href="https://www.kaggle.com/truepk/17th-place-leak-9th-place-lb-0-0985-0-1076">https://www.kaggle.com/truepk/17th-place-leak-9th-place-lb-0-0985-0-1076</a></p> <p>Hi @andohuman and thanks for your comments! We used augmentation for every single model on both training and testing data, even after the pseudo-labeling step. Basically, the variations were:</p> <ul> <li>Enable / disable X-Y flips</li> <li>Rotate images (10-30 degrees)</li> <li>Do height/width shifts</li> <li>Do zoom (0.0-0.2)</li> </ul> <p>Augmentation methods certainly depend on the model we use, but most of the time we just set a random combination within a reasonable range of parameters to add more diversity to a model and, eventually, to our final ensemble.</p> <p>Thank you! My congratulations on your silver medal :) Since there are no true labels for the test set, it's hard to say the exact potential score on the LB, but as you can see in the threads with other solutions, people achieve up to 0.08 with that leak ;)</p> <p>Here you can find all our submissions used at the first level, as well as the ensembles and the final submission: <a href="https://drive.google.com/drive/folders/1xbftjMwWVMevkBsa7l5jQ4jZl28a61WQ?usp=sharing">https://drive.google.com/drive/folders/1xbftjMwWVMevkBsa7l5jQ4jZl28a61WQ?usp=sharing</a></p> <p>We seem to be the losers because we didn't use it :D So sad, to be honest....</p> <p>Finally, it's time to reveal our pipeline, which is basically very dumb and obvious, but still robust enough for deep learning competitions. All references can be found in my github repo here: <a href="https://github.com/sergeyshilin/kaggle-statoil-iceberg-classifier-challenge">https://github.com/sergeyshilin/kaggle-statoil-iceberg-classifier-challenge</a></p> <ol> <li><p>Using the script <strong>network/train_predict_cv.py</strong> we generated a bunch of various single-model submits for further stacking. The best model gave us about 0.1390 on the public LB, but ~0.15 on the private LB. Main approach: different numbers of folds (from 4 to 8), 10-TTA and pseudo-labelling with a threshold of 0.03 on some submits. 
The whole picture of all our single models can be found here: <a href="https://docs.google.com/spreadsheets/d/1tqZuVnq6dCHZmvWTaRkVmdQd9qJS7fdQ_tYyAx2wvhU/edit?usp=sharing">https://docs.google.com/spreadsheets/d/1tqZuVnq6dCHZmvWTaRkVmdQd9qJS7fdQ_tYyAx2wvhU/edit?usp=sharing</a> <img src="https://lh6.googleusercontent.com/P7rKC8SLWQDc2hrtS8rXixHl-eBCKfX5Kczbul6bV14AqUxDCM7g776m1DqTjVNrBJo9-Lm350J7wl3catPE=w1280-h918-rw" alt="What our table of submissions looks like"> I would also like to mention that the main boost (from ~0.13 to 0.11) came after we added a few neural networks trained on FFT transformations of the bands to the stacking (submits 076-080). Submits 082-084 were generated by the script <strong>network/transfer_learning.py</strong> that does almost the same thing, except we predict outputs using an SVM (or LogisticRegression) model directly on the flattened features from the top CNN layer. I believe that this approach also added some diversity to the ensemble.</p></li> <li><p>25 LightGBM + XGBoost ensembles on the submits listed above, along with other features. We also tried to add a simple dense network as an ensembling algorithm, but eventually it didn't work successfully ¯\_(ツ)_/¯. The best combination of these ensembles gives 0.1118, but we didn't pick this submit as a final one :( Scripts used: </p> <ul><li><strong>ensembling/make_metadata.py</strong> (to generate additional features)</li> <li><strong>ensembling/ensemble.py</strong> to run 25 ensembles on random subsamples of submits and features</li></ul></li> <li><p>A mean is used at the 3rd level of the submission tree. But we also decided to calibrate predictions with a power of 1.2 and clip them to [0.001, 0.999]. Other power values could work even better, but we simply didn't have time to test all of them. Anyway, this technique gave us a good improvement (about 0.003 log_loss). Main file to see: <strong>ensembling/make_submit.py</strong></p></li> </ol> <p>And... that's pretty much it. Feel free to ask questions if you have any, and I will gladly answer :)</p> <p><em>P.S.</em> I know this is not a very elegant solution, but it is a robust one in terms of overfitting (all our Public LB scores are very similar to the Private LB scores, not on each individual model, but on ensembles). I also hope our code could help someone participate in future DL contests or at least provide a good starting point.</p> <p><em>P.P.S.</em> and we were blind enough not to use the leakage mentioned by @ppleskov <a href="https://www.kaggle.com/c/statoil-iceberg-classifier-challenge/discussion/48146#272990">https://www.kaggle.com/c/statoil-iceberg-classifier-challenge/discussion/48146#272990</a></p> <p>How much better a score do you think it could possibly give us?</p> <p>Unfortunately not everyone, as you can see :)</p> <p>Thanks! Sure thing, I will upload all our submissions to my google drive soon and will place a link here</p> <p>Thank you! That's exactly why I created this repo, to keep all things structured in one place and to easily retrain different models with only a few changes</p>
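<p>A minimal sketch of the 10-TTA averaging and the 3rd-level power calibration described above, assuming a Keras-style binary classifier <code>model</code> and a numpy array <code>x_test</code> of test images; the function names and the exact augmentation ranges are illustrative, not taken from the authors' repository:</p>
<pre><code>import numpy as np
from keras.preprocessing.image import ImageDataGenerator

def predict_with_tta(model, x_test, n_aug=10, batch_size=32):
    """Average predictions over n_aug randomly augmented copies of the test set (TTA)."""
    datagen = ImageDataGenerator(
        horizontal_flip=True, vertical_flip=True, rotation_range=20,
        width_shift_range=0.1, height_shift_range=0.1, zoom_range=0.2)
    preds = np.zeros((len(x_test), 1))
    steps = int(np.ceil(len(x_test) / batch_size))
    for _ in range(n_aug):
        flow = datagen.flow(x_test, batch_size=batch_size, shuffle=False)
        preds += model.predict_generator(flow, steps=steps)
    return preds / n_aug

def calibrate(p, power=1.2, lo=0.001, hi=0.999):
    """3rd-level step: raise the ensemble mean to a power, then clip the tails."""
    return np.clip(p ** power, lo, hi)
</code></pre>
<p>Each augmented copy still contains the same object, so averaging the ten outputs keeps the signal while washing out the variance of any single forward pass.</p>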
Statoil/C-CORE Iceberg Classifier Challenge
My take on the data leak
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Statoil/C-CORE Iceberg Classifier Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I saw many people are discouraged by the data leak. But there is some bright side about it that I found during the competition.</p> <p>Like other top finishers, I also utilized the fact that images with the same <code>inc_angle</code> tend to have the same label. But I tried two ways to use this information:</p> <ol> <li><p>Group images by <code>inc_angle</code> and consult their hand made labels' statistics as features. </p></li> <li><p>Group images by <code>inc_angle</code> and consult statistics of predictions on these images made by a ConvNet as features.</p></li> </ol> <p>At least for me, they don't differ too much in results, and I ended up using the second approach (partly because it offers consistency across the handling of train and test images).</p> <p>The first approach would be considered a total data leak utilization that's not so useful for real world applications, because it requires partial hand labeling whenever new data are acquired. </p> <p>But the second approach only uses the fact that <code>inc_angle</code> is related to geographical locations of the images taken. No more hand labeling is needed for predicting new data, since "pseudo labels" would be used instead. It might be useful after all.</p> <p>I meant using existing labels in the training set.</p>
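<p>A minimal pandas sketch of the second approach (group images by <code>inc_angle</code> and turn statistics of a ConvNet's predictions within each group into features); the column names and numbers below are made up for illustration:</p>
<pre><code>import pandas as pd

# one row per image: its inc_angle and an out-of-fold / test ConvNet prediction
df = pd.DataFrame({
    "inc_angle": [39.63, 39.63, 42.55, 42.55, 42.55],
    "cnn_pred":  [0.91,  0.88,  0.10,  0.05,  0.97],
})

# statistics of the predictions inside each inc_angle group become extra features
group_stats = (df.groupby("inc_angle")["cnn_pred"]
                 .agg(["mean", "min", "max", "count"])
                 .add_prefix("angle_pred_"))

df = df.join(group_stats, on="inc_angle")
</code></pre>
<p>Swapping <code>cnn_pred</code> for hand-made labels gives the first approach; the groupby logic is identical, only the source of the per-image signal changes.</p>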
Statoil/C-CORE Iceberg Classifier Challenge
Silver medal from lucky ensemble public kernels
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Statoil/C-CORE Iceberg Classifier Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you! Yes, after big ensembles it is difficult to check what is best for the final submission.</p> <p>Welcome! I will be glad to see your kernel.</p> <p>Next time I want to team up with you))</p> <p>Thank you!</p> <p>Thank you. You're welcome!</p> <p>Thank you!</p> <p>I didn't have the hardware for this competition, but to my big surprise I took 🥈 from an ensemble of public kernels: <a href="https://www.kaggle.com/golubev/silver-best-public-kernel-0-1258-0-1374-lb">https://www.kaggle.com/golubev/silver-best-public-kernel-0-1258-0-1374-lb</a></p> <p>Hi man! Oh, that's sad! I had overfitting too, and I expected it. But it was a surprise to me that so many people overfit even more badly. You gave people a lot of good kernels, I think you have something to be proud of! Don't give up and please keep sharing great kernels (without you, this kernel would not exist either)!</p>
Statoil/C-CORE Iceberg Classifier Challenge
Simple Solution Top 2% (61st)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Statoil/C-CORE Iceberg Classifier Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, I was quite lucky: not much overfitting...</p> <p><em><strong>Source Code (correlations around 0.5)</strong></em></p> <ul> <li><p>Public kernel 1258-1374 <a href="https://www.kaggle.com/golubev/silver-best-public-kernel-0-1258-0-1374-lb/code">https://www.kaggle.com/golubev/silver-best-public-kernel-0-1258-0-1374-lb/code</a></p></li> <li><p>Public kernel 1400 <a href="https://www.kaggle.com/supersp1234/best-single-model-lb-0-1400/comments">https://www.kaggle.com/supersp1234/best-single-model-lb-0-1400/comments</a></p></li> <li><p>Two simple ensemble outputs, 1541 &amp; 1266 on the public LB</p></li> </ul> <p><em><strong>My approach</strong></em></p> <ol> <li>Weighted mean, for example: best public kernel * 0.5 + mean of the source code outputs * 0.5</li> <li>Min/max the outputs of step 1 ( &gt;0.75 --&gt; max of the sources, &lt;0.25 --&gt; min of the sources )</li> </ol> <p>I'm glad to help you. Thanks.</p> <p>Sorry, 1 and 2 mean step 1 and step 2 of the same approach.</p> <p>It's hard to say; I tried to use a better combination with low correlation and a high public LB score.</p>
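<p>A minimal numpy sketch of the two steps above (weighted mean, then the min/max push for confident predictions); the 0.5 weight and the 0.75/0.25 thresholds are the ones quoted in the post, everything else is illustrative:</p>
<pre><code>import numpy as np

def blend_and_push(best_public, source_preds, w=0.5, hi=0.75, lo=0.25):
    """Step 1: weighted mean of the best public kernel and the mean of the other sources.
    Step 2: where the blend is confident (above hi or below lo), replace it with the
    max / min over the individual source predictions."""
    source_preds = np.asarray(source_preds)            # shape: (n_models, n_samples)
    blend = w * best_public + (1 - w) * source_preds.mean(axis=0)
    blend = np.where(blend > hi, source_preds.max(axis=0), blend)
    blend = np.where(blend < lo, source_preds.min(axis=0), blend)
    return blend
</code></pre>
<p>The idea behind the min/max push is to make predictions more extreme only where the blend is already confident, which helps log loss when weakly correlated sources agree.</p>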
Stay Alert! The Ford Challenge
Methods/Tips From Non-Top 3 Participants
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stay Alert! The Ford Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|>David:&nbsp; That's very interesting (and clever) that ignoring the trials with all zeros for isalert led to a higher predictive power.&nbsp; I wonder if there is a combination of the fact that (1) such trials convey little useful information, given that there are plenty of instances where isalert==0 in other trials and (2) those trials have many rows of data with substantially the same values, leading to over-learning of those relationships.&nbsp; I also hadn't realized that second order / interaction impacts would generate predictive lift in a randomForest -- it's good to know that you gained some improvement from that technique also.<br><br>@Jay:&nbsp; I found that the need for sampling was reduced when I changed the default nodesize from 1 or 5 to 25.&nbsp; That builds smaller trees and requires less memory and processing power.&nbsp; That said, it's interesting that we both created around a 10% sample to whittle down the overall dataset.&nbsp; R's randomForest() implementation also requires quite a bit of memory to execute on large datasets; I ran out of memory several times on a 4GB machine.<br><br>@Melipone:&nbsp; "No Free Hunch" is kaggle.com's official blog.&nbsp; Find it at http://www.kaggle.com/blog<br><br>@Christopher:&nbsp; Thank you for the interesting detail, especially your list of transformations and the insight regarding the concavity in the middle of your v11 ROC curve.&nbsp; While the competitive aspect of these challenges is fun and a great motivator, I think the real benefit is to learn some good practices from others -- so thank you for sharing.<br><br>If we weren't spread out all over the world, this would make for a great Friday happy hour conversation (to the confusion of other bar patrons, no doubt).<br><br mce_bogus="1"> While it's universally interesting to understand what methods were used by the top participants (especially in this contest where there are some large gaps in AUC at the top), I suspect that many others who participated also have clever methods or insights.&nbsp; While we wait for the top finishers to post on "No Free Hunch", I thought it would be interesting to hear from anyone else who might wish to share.&nbsp; Many of the models are quite good and would produce better results than the methods used by persons in industry.<br><br>My results (#15):<br><br>Overall method:&nbsp; <br><br>randomForest() in R, 199 trees, min node size of 25, default setting for other values<br><br>Sampling:&nbsp; <br><br>Used 10% of the training dataset to train the randomForest.&nbsp; Also included any data points that were within 500ms of a state change (where isalert shifted from 1 to 0 or vice-versa).&nbsp; About 110,000 rows total.<br><br>Data Transformations:&nbsp; <br><br>Tossed out correlated variables, such as p7 (inverse correlation with p6) and p4 (inverse correlation with p3)<br>Transformed p3 into an element of {"High", "Mid", "Low"} based on the probability of being alert.&nbsp; Where p3 is an even multiple of 100, the probability of being alert is systematically higher.&nbsp; Where "p3 mod 100" is 84, 16, or 32, there is also a greater chance of being alert 
("Mid").&nbsp; Call everything else "Low".<br>The histogram of p5 clearly shows a bimodal distribution.&nbsp; Transformed p5 into a 1/0 indicator variable with a breakpoint at p5=0.1750.<br>Transformed e7 and e8 to lump together all buckets greater than or equal to 4.<br>Transformed v11 into 20-tiles to convert strangely shaped distribution into a discrete variable.<br><br>Tried and Denied:<br><br>Lagging values<br>Moving average<br><br><br>Color Commentary:<br><br>RandomForest's ability to "fit" the training data presented was very strong.&nbsp; However, the out-of-bucket (OOB) error rate, as reported by R, was highly misleading.&nbsp; The OOB error rate could be driven down to the 1-3% range.&nbsp; However, those models produced somewhat worse results on a true out-of-sample validation set.&nbsp; Keeping randomForest tuned to produce OOB error rates of 8-10% produced the best results in this case.<br><br>Because many of the training cases are similar, randomForest performed better when using just a sample of the overall training data (hence the decision to train on only about 110,000 rows).&nbsp; RandomForest also under-performed when the default nodesize (either 1 or 5) was used.&nbsp; The explicit adjustment of nodesize to other values, such as 10, 25, and 50, produced noticeably different error rates on true out-of-sample data.<br><br><br><br>&nbsp; <br><br><br><br mce_bogus="1">
StumbleUpon Evergreen Classification Challenge
Sharing my solution - Ranked #4
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: StumbleUpon Evergreen Classification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello everyone! In the spirit of community learning, I'm sharing the code that I used to produce my top-scoring submission. You can get it on github at&nbsp;<a href="https://github.com/saffsd/kaggle-stumbleupon2013">https://github.com/saffsd/kaggle-stumbleupon2013</a>&nbsp;.</p> <p>This submission scored 0.87777 (Rank #300) on the public leaderboard and 0.88764 on the private leaderboard (Rank #4).</p> <p>My original submission was inspired by Abhishek's &quot;Beating the Benchmark&quot; post on the competition forum [2]. I found it quite interesting that such a simple solution was proving so difficult for many people&nbsp;to beat, and that prompted me to start investigating if there was any&nbsp;possibility to do better. The approach I took has previously been described as &quot;overkill analytics&quot; [3] - the plan of action was simply&nbsp;to generate as many derivative features as possible from each webpage,&nbsp;and then use machine learning to establish the best weights for them&nbsp;via the training data.</p> <p>Feature extraction is implemented in `raw2tsv.py`, and this was used to generate features in a TSV format. I generated three broad classes of&nbsp;features: (1) text-based, where each feature is a string, (2) count-based,&nbsp;where occurrances of particular items were counted, and (3) length-based,&nbsp;where the lengths of particular fields were counted. In the end, based on&nbsp;my internal cross-validation results, I only utilized the text-based features for the submission. Feature extraction relied heavily on parsing&nbsp;the HTML of the document, and extracting the content of various nodes.&nbsp;`raw2tsv.py` was written to be re-usable as a general profiler for HTML&nbsp;documents.</p> <p>`SEP09b.textstacking.py` implements the actual classifier. I used&nbsp;Abhishek's code as a starting point, and that can be recognized in the&nbsp;choice of some parameters as well as some of the general structure, but&nbsp;I have also modified the code substantially. The biggest change is probably&nbsp;the implementation of a stacking-based approach to combining the features&nbsp;I generated. Details of this approach can be found in a paper I wrote in&nbsp;2012 [4], where I applied it to a different task. The gist is simple - generate &quot;features&quot;, which are chunks of text - for example, the title of&nbsp;the document, the body of the document, etc. For each feature, construct&nbsp;the bag-of-words and use that to train a logistic regression classifier,&nbsp;then use another logistic regression unit to combine the predictions of&nbsp;each individual classifier.</p> <p>[1] http://www.kaggle.com/c/stumbleupon<br>[2] http://www.kaggle.com/c/stumbleupon/forums/t/5680/beating-the-benchmark-leaderboard-auc-0-878<br>[3] http://www.overkillanalytics.net/<br>[4] http://aclweb.org/anthology-new/U/U12/U12-1019.pdf</p>
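<p>A minimal sketch of the text-stacking idea described above (one bag-of-words logistic regression per text field, then a second logistic regression over their out-of-fold predictions). Field names, vectorizer settings and the 5-fold scheme are illustrative, not taken from the linked code:</p>
<pre><code>import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def stack_text_fields(train_fields, y, test_fields):
    """train_fields / test_fields: dict mapping a field name (e.g. 'title', 'body')
    to a list of strings, one per document."""
    train_meta, test_meta = [], []
    for name in train_fields:
        vec = TfidfVectorizer(min_df=3, sublinear_tf=True)
        X_tr = vec.fit_transform(train_fields[name])
        X_te = vec.transform(test_fields[name])
        base = LogisticRegression(max_iter=1000)
        # out-of-fold predictions keep the second-level features honest
        train_meta.append(cross_val_predict(base, X_tr, y, cv=5,
                                            method="predict_proba")[:, 1])
        base.fit(X_tr, y)
        test_meta.append(base.predict_proba(X_te)[:, 1])
    # second-level logistic regression combines the per-field classifiers
    meta = LogisticRegression()
    meta.fit(np.column_stack(train_meta), y)
    return meta.predict_proba(np.column_stack(test_meta))[:, 1]
</code></pre>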
StumbleUpon Evergreen Classification Challenge
Thank you --and thoughts on the final ranking
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: StumbleUpon Evergreen Classification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>[quote=Robin East;33139]</p> <p>Would you mind elaborating on what you mean by 'de-noising' terms.</p> <p>[/quote]</p> <p>The way I see it: you have some data, and there is some quantity of relevant information in it. Everything else is noise. Although most classification algos are supposed to ignore noise features, they actually don't, and also, any algo is going to discard part of the info that was present in your data --you're never as accurate as you can. The joint cause of these two problems is that classifiers work from features, and features are always imperfect &nbsp;--every feature will contain part information and part noise (mostly noise, even for the most discriminative features).&nbsp;</p> <p>So, the perfect solution of any ML problem would be to start by finding a transformation from the data space to a feature space where&nbsp;features would be either 100% relevant information, or pure noise. That's what I would call &quot;de-noising&quot; the data. Of course, if it were possible to do it perfectly, then for a binary classification problem you'd end up with a unique feature, with two possible values: category 0 or 1 ^^</p> <p>In our case: the data space is a bunch of words/terms and their frequencies. Each term is part &nbsp;noise and part information. You'd need to map this to a &quot;good&quot; latent semantic space as seen from the point of view of discriminative information in our problem, before running classifiers on it. One thing that is common to try is to rank your terms by how discriminative they are, and then cut the tail (assumed to be mostly noise). It's a good de-noising strategy if you have some pure-noise terms and some very information-rich terms, but it's bad when the information is spread over almost all your terms. Because it means you're discarding a large quantity of information when cutting the tail.</p> <p>So what I was saying in my previous post is: the text was obviously mostly noise, and trying the &quot;cut the tail&quot; strategy was decreasing accuracy, so not the right way to go. In these conditions, how to get de-noised, information rich features? Most straightforward way to do it is to... trust the humans who wrote the webpages, and who kindly put the most relevant terms about the pages in the &quot;keywords&quot; meta-tag, the title of the page, etc. These are essentially &quot;de-noised&quot; versions of the pages.&nbsp;</p> <p>Hopefully that makes sense...&nbsp;</p> <p>&nbsp;</p> <p>Hi all,</p> <p>So I had the surprise to receive an email telling me I'd won --least I can say is that I wasn't expecting it. I'd like to thank all those involved, and in particular the participants to the forums --discussing with you in September was really helpful and interesting. Kaggle is definitely successful at creating a great community around learning.</p> <p>&nbsp;</p> <p>I see many people shocked at how much the final leaderboard differs from the preliminary one. 
I've got to say, this is easily explained (but the negative psychological impact on the competition was real --for instance I got demotivated early on due to my inability to move up the preliminary ranking despite moving up sharply in CV). The thing is, the test and training sets had important amounts of noise --one experiment I did early on is to identify the list of pages that caused most damage to my score in CV and try to understand what was up with them. The answer is, they had been purely and simply misclassified by the human classifiers crafting the sets... the machine was in these cases better than the humans.</p> <p>So yes, noise. It's frustrating, and I am sincerely sorry for those who feel bad about it. It also means that my victory is somewhat random --maybe I &quot;correctly&quot; classified examples that had been in fact arbitrarily classified when crafting the test set, and anyone from the top ten could have won as well. I don't know. In any case, my final score appears to be rather precisely what I was achieving in CV, so that says that CV is a much better (ie. statistically reliable) experimental guide of performance than the public leaderboard.</p> <p>&nbsp;</p> <p>So, anyway: the data was noisy, and when that is the case, the way to go is to guide your approach on CV only and ignore the leaderboard.</p> <p>&nbsp;</p> <p>I will now open-source my code, and explain my approach (I need to get back into it first, since my last submission was one and a half month ago). It was fairly straightforward: &quot;classical&quot; classifiers applied separately to the TF-IDF of text, urls, metatags --and results merged with a simple linear classifier. I think the only remarkable thing was that I developed my own features by parsing interesting elements in the raw page (metatags...).</p> <p>[quote=Domcastro;33076]</p> <p>Did you try as few variables as possible? I tried a mini competition with myself where I tried to get the highest LB score with the least amount of varables added to the original data. I added 10 in and got .86 on LB so effectively my other 15000 only added in .02 (.88)</p> <p>[/quote]</p> <p>Assuming that by &quot;variable&quot; you mean unidimensional features --I did not actually try to get my feature count down, because efficiency was not an issue, and although there was of course exponentially decreasing information added by further features, that was still not zero information, so results kept getting better (barely but measurably) with more features. My total feature count is pretty high, not sure exactly but likely over 1M.</p> <p>However, after noticing that the <em>entire text </em>of a page added very little information compared to the url only, or to the title only, I started suspecting that you could do better than the text by using small, de-noised versions of the text bodies (since one can assume the texts are pretty noisy --after all they only contain about as much info as 4-5 word-long titles or urls).</p> <p>I noticed that cutting off less significant terms from the text actually slightly decreased accuracy, which means that part of the info content of the text was spread over all these lesser terms. so instead of cutting terms I looked for these de-noised pools of terms in the &quot;description&quot; and &quot;keywords&quot; metatags of the pages. And adding them in added quite a bit of accuracy in CV. 
In the same vein, the &quot;alt&quot; values of img tags was also information-rich.</p> <p>I also tried a bunch other features that didn't make the cut. Remarkably, even after putting a lot of effort into enriching numerical features by adding my own, numerical features were still not adding any info compared to semantic features only.</p> <p>&nbsp;</p> <p>I just reread my code, it was a bit different from what I remembered. Here's the deal, succinctly (I'll come back with more detail and the source code):</p> <p>&nbsp;</p> <p>Features: raw Tf-idf + pca of the output of a tf-idf transform, for the following:</p> <ul> <li>url</li> <li>text (body)</li> <li>titles + 'keywords' metatags (why this combination? mainly because some pages lack metatags, and the closest thing are the titles)</li> <li>text + url + metatags for 'keywords' and 'description' + 'alt' values for img tags + possibly titles &nbsp;(which is pretty much the combination of all data I extracted from the pages --I might have eliminated titles from the mix after finding they did not impact results much due to redundancy <em><strong>edit</strong>: actually titles were indeed included here</em>)</li> </ul> <p>So, for all these sets of terms, compute in each case a Tf-idf, then train a PCA on the output.&nbsp;</p> <p>Then each resulting set of vectors will be classified with a Random Forest (PCA vectors) and a Logistic Regression (raw vectors). That's 4*2 arrays of predictions, which are then merged with a simple linear classifier. I also experimented with SGD and a few others, not sure if they were part of the code that generated the winning submission, I'll dig into it. <em><strong>edit</strong>: SGD was not used for the winning submission, only LR and RF</em></p> <p>Anyway that's it:&nbsp;&nbsp;<strong>&quot;classical&quot; classifiers (LR &amp; RF) run on the tf-idf (or tf-idf + pca if necessary) of terms extracted from urls, texts (body), titles, metatags, and alt img tags --then predictions merged with a linear classifier</strong>. Each classifier was tuned separately (grid search).</p> <p>&nbsp;</p> <p>One more thing: it might be interesting for the StumbleUpon engineers to note that working *only* with the urls gets you 90% of the way there. So you can be extremely accurate without even having to fetch a page, and with very little computation. In production I'd use something like this.</p>
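<p>A minimal sketch of the pipeline summarized in bold above: for each set of extracted terms, a logistic regression on the raw tf-idf vectors and a random forest on a reduced version of them, with all prediction columns merged by a simple linear classifier. TruncatedSVD stands in for the PCA step (tf-idf matrices are sparse), and the component counts and fold scheme are illustrative:</p>
<pre><code>import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

def per_set_predictions(texts_train, texts_test, y):
    """One term set (urls, body, titles+keywords, everything): returns two columns of
    out-of-fold train predictions and two columns of test predictions (LR + RF)."""
    tfidf = TfidfVectorizer(min_df=3, sublinear_tf=True)
    X_tr, X_te = tfidf.fit_transform(texts_train), tfidf.transform(texts_test)
    svd = TruncatedSVD(n_components=100, random_state=0)
    Z_tr, Z_te = svd.fit_transform(X_tr), svd.transform(X_te)

    lr = LogisticRegression(max_iter=1000)                     # raw tf-idf vectors
    rf = RandomForestClassifier(n_estimators=300, n_jobs=-1)   # reduced vectors
    cols_tr = [cross_val_predict(lr, X_tr, y, cv=5, method="predict_proba")[:, 1],
               cross_val_predict(rf, Z_tr, y, cv=5, method="predict_proba")[:, 1]]
    cols_te = [lr.fit(X_tr, y).predict_proba(X_te)[:, 1],
               rf.fit(Z_tr, y).predict_proba(Z_te)[:, 1]]
    return np.column_stack(cols_tr), np.column_stack(cols_te)

# run this for each of the 4 term sets, stack the resulting 4*2 prediction columns,
# and merge them with a simple linear classifier, e.g. LogisticRegression().
</code></pre>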
StumbleUpon Evergreen Classification Challenge
What did you use?
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: StumbleUpon Evergreen Classification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First let me say: variance was crazy in this competition. My third place (private score 0.88817) scored only 0.88027 on the public leaderboard. It took quite some nerve to select it as my final submission.</p> <p>I actually made another submission, which would have won had I selected it (Public: 0.88167, Private: 0.88915). I thought the other result was more robust, even though that one had a higher CV score and lower variance. You can really imagine me banging my head on the table right now. I think the lesson here is: if you are paranoid enough about overfitting you can trust your CV score.</p> <p>My first key insight was: this competition is not about which pages are long lasting. This is first and foremost about what people find interesting. The main topic of interest is food (especially recipes). Other topics that are mixed (but mostly evergreens) are health, lifestyle and exercise. Seasonal recipes are actually mixed too. Supposedly funny videos/pics, technology, fashion, sports, and sexy pictures are mostly ephemerals. Some sites are in languages other than English; these are mostly (but not all) ephemerals. Sometimes the frontpage of a news site was in there (e.g. http://www.bbc.co.uk) -&gt; ephemeral.</p> <p>My second key insight was: the features other than text are useless. I only used text features.</p> <p>I used an html parser to actually parse the html that was given. I then gave different weights to each tag (h1, meta-keywords, title, ...). I also used the given boilerplate and a boilerplate that I extracted myself with a tool called boilerpipe. I had more than 100 of these weighted sets. I then used raw (normalized) counts, stemming, tfidf, svd and lda to preprocess them. For each of those more than 300 sets I used logistic regression to get predictions, which I then combined into an ensemble.</p> <p>I did not use n-grams, but should have.</p> <p>I also tried to use meta features: how many words, what kind of words (pos-tag distribution, common/uncommon, ratio in dictionary).... The idea was that items with a good (easy to read) writing style are more likely to be evergreens. I got 0.83 out of these features alone, but they did not seem to add anything to the ensemble.</p> <p>I also have a little bit of secret sauce. Something that I spent quite some time on. It eventually just added a little bit to the final ensemble, but maybe it is something I will explore more in future competitions.</p> <p>I am really interested in what other people used.</p> <p>[quote=Log0;33030]</p> <p>Hi Maarten, fchollet (if you're reading) and many others here, would you think of sharing your solution in code as I believe the devil is in the details. 
I very much like to learn from each of you here to become a better ML practitioner!</p> <p>[/quote]</p> <p>Will do when I have time to clean it up.</p> <p>Here is one thing that I (apparently) did differently from the rest of the top 10: I did not (only) train one classifier on the title, one on the body, one on the meta-description, but I added them together with different weights. I noticed that this works better. I remember that a single classifier, where each html tag is added in with some weight, performs better than an ensemble of classifiers trained individually on each tag.</p> <p>I guess the reason is: if one training example has 'recipe' in the title and another has 'recipe' in the meta-description, both have the same meaning.</p> <p>[quote=fchollet;33316]</p> <p>Merging results from several classifiers reduces your variance, because each classifier will use bits of information that are orthogonal to the information leveraged by the others (like, bayesian models and ensemble models don't &quot;see&quot; the same patterns). One model might have some bias, the others are unlikely to have the same bias, so their average will have less variance.</p> <p>[/quote]</p> <p>I am not sure that makes sense. I think it's actually like this: averaging models allows you to reduce bias and tolerate more variance in the individual models (a.k.a. overfitting), because the variance averages out.</p> <p>[quote=fchollet;33316]</p> <p>This has exponentially decreasing returns the more models you use, so it's typically not worth using more than 2-3 classifiers for a same vector set.</p> <p>[/quote]</p> <p>I think people usually do one tree-based model (Random Forest or GBM) and one linear model (SVM or logistic regression).</p>
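<p>A minimal sketch of the "add the tags together with different weights" idea: one shared vocabulary, per-tag count matrices summed with weights, so that 'recipe' in a title and 'recipe' in a meta-description hit the same feature. The weights, tag list and input variables (<code>all_text_train</code>, <code>train_docs_by_tag</code>, <code>y_train</code>) are illustrative assumptions:</p>
<pre><code>from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.linear_model import LogisticRegression

tag_weights = {"title": 3.0, "meta_keywords": 2.0, "h1": 2.0, "body": 1.0}

def weighted_tag_matrix(docs_by_tag, vectorizer):
    """docs_by_tag: dict tag -> list of strings (one per page).
    Returns a single weighted count matrix over the shared vocabulary."""
    X = None
    for tag, weight in tag_weights.items():
        X_tag = vectorizer.transform(docs_by_tag[tag]) * weight
        X = X_tag if X is None else X + X_tag
    return X

vectorizer = CountVectorizer(min_df=2)
vectorizer.fit(all_text_train)                    # assumed: full page text, one string per page

X_train = weighted_tag_matrix(train_docs_by_tag, vectorizer)
tfidf = TfidfTransformer()
clf = LogisticRegression(max_iter=1000)
clf.fit(tfidf.fit_transform(X_train), y_train)    # a single classifier over all weighted tags
</code></pre>
<p>Contrast this with the per-tag ensemble: here a word carries the same meaning wherever it appears, only its weight differs, which is exactly the argument made above.</p>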
TGS Salt Identification Challenge
107-th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TGS Salt Identification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank. Hope you find it helpful!</p> <h1>107-th place solution for Kaggle TGS Salt Identification Challenge</h1> <p><a href="https://github.com/BloodAxe/Kaggle-Salt">https://github.com/BloodAxe/Kaggle-Salt</a></p> <p>At first glance, this challenge looked like a binary image segmentation task. So at the beginning, I experimented with many model architectures and hyperparameters. Ran hundreds of experiments, thanks for 4x1080Ti rig. Final solution included three U-Net like models with a few bells &amp; whistles. They were ensembled with models of my teammate for final submission.</p> <h2>Lessons learned</h2> <ol> <li>Be patient. Let it train for night, instead of stopping training after 10 epochs if it seems not converging. Don't meditate on those curves. Have a walk!</li> <li>Use heavier encoders. Top-performers reported on using Senet154 as encoder. Funny, I had this encoder, but have not even tried it.</li> <li>Don't rely single-fold validation results. Always do cross-validation. </li> <li>Keep in mind number of submits. In a late game I was unable to merge with another team due to exceeding number of submits.</li> <li>Understand where and why your model fails the most. In this challenge, it was a key to understand and use the fact that there were no solid masks in trainset.</li> <li>Don't be too lazy. 'Assemble mosaic' was in my roadmap since the first week of competition. Shame for me, I didn't use it at all.</li> <li><a href="https://github.com/albu/albumentations">Albumentations</a> library for image augmentations has saved a lot of time on writing data augmentation pipeline. Thanks to this challenge, we were able to find &amp; fix couple of bugs in it.</li> </ol> <h2>Model zoo</h2> <h3>DPN encoder with U-Net like decoder</h3> <p>Dual-path encoder with U-Net decoder implementation borrowed from <a href="https://github.com/selimsef/dsb2018_topcoders/tree/master/albu/src">https://github.com/selimsef/dsb2018_topcoders/tree/master/albu/src</a>.</p> <h3>WiderResNet encoder with U-Net like decoder and Object Context</h3> <ol> <li>WiderResNet38 encoder</li> <li>U-Net like decoder (Double [conv3x3 + bn + relu])</li> <li>OCNet in the central bottleneck. OCNet dilation factors were [2,4,6]</li> <li>SCSE blocks in decoder</li> <li>Input tensor was 3-channel image [I, Y, I * Y]</li> <li>In addition to mask output, model was also predicting salt presence</li> <li>There was additional loss for regularization attached to conv3 output of the encoder</li> </ol> <h3>WiderResNet encoder with U-Net like decoder and hypercolumn</h3> <ol> <li>WiderResNet38 encoder</li> <li>U-Net like decoder (Double [conv3x3 + bn + relu])</li> <li>SCSE blocks in decoder</li> <li>Input tensor was 3-channel image [I, Y, I * Y]</li> <li>In addition to mask output, model was also predicting salt presence</li> </ol> <h2>Dataset &amp; Folds</h2> <p>Trainset was split into 5 folds based on salt area. I used image size of 128x128 (resized with Lancsoz). I experimented with padding, but did not notice any improvement. 
Also tried 224 patches with padding, but I didn't run full validation on all folds and abandoned it.</p> <h2>Training</h2> <p>There were 3 losses:</p> <ol> <li>Mask loss (BCE/Jaccard/Lovasz) with weight 1.0</li> <li>Classification loss (BCE) with weight 0.5</li> <li>Auxiliary loss (BCE) with weight 0.1</li> </ol> <p>Training was done in 3 stages:</p> <ol> <li>Warm-up training for 50 epochs with BCE loss for the mask, with Adam.</li> <li>Main training for 250 epochs with BCE+Jaccard loss for the mask, with Adam.</li> <li>Fine-tune with 5 cycles of cosine annealing and restarts.</li> </ol> <h2>Inference</h2> <p>For prediction, I used horizontal flip TTA. Predictions after the 5 annealing cycles and the main training phase were averaged. In total, there were 60 predictions per model (6 weights x 5 folds x 2 TTA). Masks were conditionally zeroed if the classifier predicted an empty mask. Non-zero masks were postprocessed with the <code>binary_fill_holes</code> method.</p> <h2>What didn't work at all</h2> <ol> <li>CRF postprocessing</li> <li>Geometric / harmonic mean for ensembling</li> <li>Mixup augmentation</li> <li>Xception encoder</li> <li>DeepLab-like models</li> <li>Resnet34 :)</li> </ol> <h2>What did work</h2> <ol> <li>Threshold tuning</li> <li>Regularization with the auxiliary loss</li> <li>Predicting salt/not-salt</li> </ol> <h2>Tried but not properly tested</h2> <ol> <li>Stochastic weight averaging</li> <li>GANs for data augmentation</li> </ol> <h1>Source Code</h1> <p>Source code is available at <a href="https://github.com/BloodAxe/Kaggle-Salt">https://github.com/BloodAxe/Kaggle-Salt</a>.</p> <h1>Acknowledgement</h1> <p>Thanks to all the participants for a challenging competition, and to Heng for sharing so many ideas. And to all those who were able to train resnet34 :))) And, of course, to the whole open data science community at <a href="https://ods.ai">ods.ai</a>!</p>
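<p>A minimal sketch of the inference-time postprocessing described above: averaging horizontal-flip TTA, zeroing masks that the salt/not-salt head calls empty, and filling holes in the rest. It assumes the model returns a (mask logits, presence logits) pair as in the deep-supervision setup; thresholds and names are illustrative:</p>
<pre><code>import numpy as np
import torch
from scipy.ndimage import binary_fill_holes

@torch.no_grad()
def predict_with_flip_tta(model, images):
    """images: float tensor of shape (N, C, H, W)."""
    mask1, cls1 = model(images)
    mask2, cls2 = model(torch.flip(images, dims=[3]))                  # horizontal flip
    mask = (torch.sigmoid(mask1) + torch.flip(torch.sigmoid(mask2), dims=[3])) / 2
    presence = (torch.sigmoid(cls1) + torch.sigmoid(cls2)) / 2
    return mask.cpu().numpy(), presence.cpu().numpy()

def postprocess(mask_probs, presence_probs, mask_thr=0.5, presence_thr=0.5):
    binary = mask_probs > mask_thr
    out = []
    for m, p in zip(binary, presence_probs):
        m = m.squeeze()
        if float(p) < presence_thr:
            out.append(np.zeros_like(m))       # classifier says "no salt": zero the mask
        else:
            out.append(binary_fill_holes(m))   # fill holes in the predicted salt region
    return np.stack(out)
</code></pre>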
TGS Salt Identification Challenge
11th place solution writeup
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TGS Salt Identification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I cannot say for "everybody", but one potential reason to use it is an appealing combination of "complexity" (loose umbrella term I used for number of parameters / number of operations) and "accuracy" (top-1 error on ImageNet, for instance). </p> <p>You can check out some figures <a href="https://github.com/facebookresearch/ResNeXt">on Facebook repo with ResNeXt implementation</a> and see that ResNeXt-50 validation error rate approaches the one of ResNet-101, while being 2x less "complex".</p> <p>Another important point is that I used SE ResNeXt-50 instead of "usual" ResNeXt-50, which is supposed to be even more accurate (see Table 2 in <a href="https://arxiv.org/pdf/1709.01507.pdf">Squeeze-and-Excitation Networks paper</a> for details). I have empirically compared it to carefully tuned ResNet-34, and from the very first epochs of training enjoyed faster convergence, even though single epoch took 2x longer to train.</p> <p>Using larger models seems to further improve results (some of top teams have used bigger ResNeXt and SENet architectures), but it quickly gets to the point of diminishing returns - significant slowdown of training yields marginal improvement in accuracy, slowing down the rate of experiment iterations. </p> <p>Probably with lots of experience and proper intuition (to pursue more promising research directions) you can manage to exploit potential of larger architectures, but I decided to go for the "safer" option here and chosen SE ResNeXt-50 as a primary backbone network.</p> <p><strong><a href="/alexisrozhkov">@alexisrozhkov</a>'s pipeline</strong></p> <p>Model: UNet-like architecture</p> <p>Backbone: SE ResNeXt-50, pretrained on ImageNet</p> <p>Decoder features (inspired by Heng’s helpful posts and discussions):</p> <ul> <li>Spatial and Channel Squeeze Excitation gating</li> <li>Hypercolumns</li> <li>Deep supervision (zero/nonzero mask)</li> </ul> <p>Tile size adaptation:</p> <ul> <li>Pad 101 -&gt; 128</li> <li>Mode - replicate (reflect was worse, reflect horizontally+replicate vertically was same, random selection as an augmentation didn’t improve as well, although more tests needed to be made)</li> </ul> <p>Augmentations:</p> <ul> <li>Random invert (unexpected, but visually it doesn’t change input dramatically. 
Also tried mean subtraction and derivative instead of raw input, but these didn’t work well)</li> <li>Random cutout (even more unexpected, but some papers indicate that it’s helpful for segmentation, because implicitly it causes model to learn “inpainting” input tiles)</li> <li>Random gamma-correction (makes sense, since tiles seem to be postprocessed in a way that changes brightness)</li> <li>Random fixed-size crop</li> <li>Random horizontal flip</li> </ul> <p>Optimizer (inspired by Peter’s writeups):</p> <ul> <li>SGD: momentum 0.9, weight decay 0.0001</li> <li>Batch size: 16</li> <li>Starting LR determined using procedure similar to LR find from fast.ai course - 5e-2</li> <li>LR schedule - cosine annealing from maximum LR, cycle length - 50 epochs, 10 cycles per experiment</li> <li>Best snapshots according to metric were saved independently for each cycle, final solution uses 2 best cycles per fold</li> </ul> <p>Loss: </p> <ul> <li>Since I used deep supervision it’s a combination of 3 losses - classification loss, pure segmentation loss (using output from an earlier layer, empty samples excluded), total loss (segmentation loss over all samples)</li> <li>Classification loss - BCE</li> <li>Segmentation losses - BCE + Lavasz Hinge*0.5</li> <li>Segmentation loss was evaluated after cropping masks and predictions back to 101 - this seems to “fix” cases where a few-pixel corner mask was predicted on a padded mask with imperfect alignment, but was cropped afterwards, resulting in 0 metric per sample.</li> </ul> <p>Cross-validation</p> <ul> <li>5-fold random split</li> <li>Tried depth, area and “type”-based stratification, but it seemed to degrade result</li> <li>Didn’t have enough time to synchronize folds with Ivan to allow cross-validation of ensemble results</li> </ul> <p><strong><a href="/sawseen">@sawseen</a>’s pipeline</strong></p> <p>Model: Modified Unet</p> <p>Backbone: SE-ResNeXt-50, pretrained on ImageNet</p> <p>Decoder features:</p> <ul> <li>Dilated convolutions with dilation from 1 to 5</li> <li>Hypercolumns</li> <li>ASP_OC_Module before last Convolution layer</li> <li>Deep supervision (zero/nonzero mask, nonzero mask segmentation)</li> <li>Dropout</li> </ul> <p>Tile size adaptation</p> <ul> <li>Replication up to 128</li> </ul> <p>Augmentations:</p> <ul> <li>Random Rotation up to 10 degree</li> <li>Random Crop and Scale</li> <li>Random Horizontal flip</li> </ul> <p>Optimizer:</p> <ul> <li>SGD: momentum = 0.9, weight decay = 0.0001</li> <li>Batch size: 16</li> <li>Lr schedule. Pretrain for 32 epochs with lr = 0.01. Then SGDR was applied for 4 cycles with cosine annealing: lr from 0.01 to 0.0001. Each cycle lasts for 64 epochs. </li> </ul> <p>Loss:</p> <ul> <li>Segmentation loss <ul><li>Pretrain: 0.1 * BCE loss + 0.9 * lovasz loss (Elu + 1)</li> <li>SGDR cycles: 0.9 * BCE loss + 0.1 * lovasz loss (Elu + 1)</li></ul></li> <li>Optimization loss = 0.05 * BCE(zero/nonzero mask classification) + 4 * 0.12 * segmentation loss per deep supervised nonzero masks + 1.0 * result segmentation loss </li> </ul> <p>Cross-validation:</p> <ul> <li>5 fold without stratification. For each fold 2 best snapshots were chosen for result predictions</li> </ul> <p><strong>Ensembling</strong></p> <p>Each of us trained multiple models (5 folds x 2 best cycles), which were used to predict masks. These 10 predictions were averaged per fold, and then a threshold was applied. 
Ivan chose a more conservative value of 0.5 for each fold, Alexey was more adventurous and selected a threshold that was resulting in a best metric - even though it might lead to overfitting to validation set. Then ranges were aligned and predictions were averaged again; resulting predictions were thresholded and used for submission.</p> <p><strong>Postprocessing</strong></p> <p>Few days before competition deadline we’ve decided to arrange predictions according to publicly shared csv for mosaics. This was the “eureka” moment of the competition - ground truth annotation immediately became clear, and we started to look for ways to “help” the network solve the task better, taking the annotation specifics into account. </p> <p>Unlike some other teams, we think that ground truth annotation made sense. It seems that the way the data was acquired makes it hard to get reasonable measurements underneath the salt domes, and also it seems that salt only forms a cap and doesn’t take the whole volume under this cap (I hope that people with domain expertise will correct if that’s a wrong assumption). In this case it wasn’t physically-sound to label block underneath salt as salt (because probably they were not). Next question is why to label the salt dome thickness this particular way? Our hypothesis is that it’s hard to make this prediction given tiles without boundary, so they were marked as “not containing salt”, and the rest of tiles were marked properly. Probably the tile size was chosen to match the expected thickness of the salt dome, but it’s just a guess.</p> <p>Surprisingly, it turned out that our networks were already doing a very reasonable job in most cases and were just lacking enough context - a “bird’s eye view” should’ve been able to fix that. 2 days before the deadline we realized a way to do this - pass the tile ids along the pipeline and then apply post-processing heuristics using tile neighbor information to improve consistency of result, even though it was a bit risky (different depth values for neighboring samples probably imply that tiles were taken from different “slices” of a scan volume, so not necessarily they had aligned ground truth values).</p> <p>We have used both ground truth and predicted masks to construct few rules to improve the segmentation results:</p> <ul> <li>For almost-vertical ground truth tiles (the ones with last 30 rows same, but not fully covered) - extrapolate last row downwards till the end of mosaic</li> <li>For “caps” in ground truth tiles (the ones with fully-covered last row) - zero all tiles below</li> <li>For tiles vertically “sandwiched” between predicted or ground truth masks - replace mask with last row of tile above, if it will cause increase of mask area (ideally it should’ve been a trapezoid from last row of tile above to first row of tile below, but didn’t have time to do that, and potential score gain is small)</li> <li>For tiles below predicted “caps” - do the same as for ground truth case</li> <li>For tiles below almost-vertical predicted tiles - extract downward.</li> </ul> <p>It is interesting that although these heuristics improved public lb score significantly, the effect on private lb score was negligible (~0.1 increase vs 0.001). Some rules didn’t make a difference on public lb, but deteriorated result on private lb significantly (last described rule caused 0.003 drop on private lb).</p> <p>Another observation - seems that for majority of teams private lb score is higher than public nonetheless. 
It leads us to hypothesize that public/private split wasn’t random and was cherry-picked to meet organizers’ demands in a better way. One hypothesis for why private lb score was usually higher than public - organizers excluded few-pixel masks (which were hard to predict, incurred high penalty and probably weren’t important from business standpoint) from private set.</p> <p>Anyway - thanks to organizers for interesting contest with a “plot twist”, to Heng, Peter and other people on Kaggle forum and ods.ai community that sparked enlightening discussions, and to all participants for making it a very competitive experience!</p>
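<p>A minimal sketch of the loss combination and the crop-back-to-101 trick from the first pipeline above (classification BCE, a pure segmentation loss on non-empty samples, and a total segmentation loss of BCE + 0.5 * Lovasz hinge). It assumes a <code>lovasz_hinge(logits, labels)</code> implementation is available and that the 101 -&gt; 128 padding used 13 pixels on the top/left; both are assumptions, not confirmed details:</p>
<pre><code>import torch
import torch.nn.functional as F
# lovasz_hinge assumed importable, e.g. from the public lovasz_losses.py implementation

def crop_to_101(x, pad_top=13, pad_left=13):
    """Undo the 101 -> 128 padding before computing the segmentation loss, so a
    few-pixel corner mask predicted in the padded border is not penalised."""
    return x[..., pad_top:pad_top + 101, pad_left:pad_left + 101]

def total_loss(seg_logits, aux_seg_logits, cls_logits, masks, w_lovasz=0.5):
    """masks: float tensor (N, 1, 101, 101); the *_logits are model outputs at 128x128
    (segmentation head, earlier-layer segmentation head, zero/nonzero classifier)."""
    has_salt = (masks.sum(dim=[1, 2, 3]) > 0).float()

    def seg_loss(logits, target):
        logits = crop_to_101(logits)
        return (F.binary_cross_entropy_with_logits(logits, target)
                + w_lovasz * lovasz_hinge(logits.squeeze(1), target.squeeze(1)))

    cls = F.binary_cross_entropy_with_logits(cls_logits.squeeze(1), has_salt)
    full = seg_loss(seg_logits, masks)                       # all samples
    nonempty = has_salt.bool()
    pure = seg_loss(aux_seg_logits[nonempty], masks[nonempty]) if nonempty.any() \
        else seg_logits.sum() * 0.0                          # no non-empty masks in batch
    return cls + full + pure
</code></pre>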
TGS Salt Identification Challenge
150th place solution (Boring!)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TGS Salt Identification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Oh! Let me try to explain: A UNet is what we are all using for segmentation. The UNet consists of an encoder part, and (typically) a fully connected part and then a decoder part. When I write that I've used a ResNet34, that means I've used a pre-trained ResNet34 for the encoder part of my UNet structure. </p> <p><strong>First a warning:</strong> This is probably a boring posting describing my standard solution, which probably have no academic interest to anyone. If you are looking for a special, exciting and fancy solution, this is not it, and you may spend your time reading about another solution. (I'm mostly writing and posting this for my own record.)</p> <h3>Thanks and congratulations</h3> <p>First of all, thanks to the organizers and congratulations to the winners and all the other participants that are satisfied with their solutions. And thanks for everyone that shared code and ideas.</p> <h3>Short description of solution</h3> <p>My solution is pretty simple: Pre-trained ResNet34 in PyTorch with Heng's decoder, and using Peters recipe to gain 0.841 on public LB. I trained in five folds. Then I tried to add SCSE. I think I did that incorrectly (bugs!?), but it still gained a few millipoints on the public LB. Among the bugs was that I only added spatial SE and only on the decoder part. I noticed this mistake when it was way to late. Then I did a plain linear blend of these two models. (I think it was 0.43 non-SE and then 0.57 of SE). The only TTA I used was horizontal flipping.</p> <p>After I did this, I got busy with other things... been to two different conferences, had some vacation and I nearly forgot all about it. At least I didn't submit anything more, even though I saw that there was some really interesting postings of jigsaw puzzles and incorrect labels. I guess utilizing these could have improved my score, but in the moment it mattered, I was just not motivated to do any effort.</p> <h3>Experience and learning</h3> <p>I struggled a lot in the beginning. I tried Keras and a lot of public kernels, but I really could not get anything really going. 0.70-0.75 was the normal scores. The local CV was not working properly either, I was really close to giving up.</p> <p>However, I gathered some of Hengs code and slowly made it work. Thanks a lot to Heng! Then Peters posting helped me slowly improve from there. Thanks a lot to Peter.</p> <p>PyTorch is pretty new to me. I really had a good experience using PyTorch. I've also done most of the training on a virtual machine. A VM feels really flexible, even though it was a bit of hassle to set it up. The VM was configured with 4 GPUs (Tesla K80s) and PyTorch had no problem utilizing all of them. Anything with Keras on multiple GPUs was really much of a hassle to get running properly.</p> <p>I must say I'm satisfied with my final score, considering the relatively small amount of effort I put into this.</p>
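<p>A minimal sketch of the final step (horizontal-flip TTA and the plain linear blend of the non-SE and SE models); the 0.43/0.57 weights come from the post, the prediction arrays are assumed inputs:</p>
<pre><code>import numpy as np

def hflip_tta(predict_fn, images):
    """Average predictions over the identity and a horizontal flip.
    images: numpy array with width as the last axis."""
    p = predict_fn(images)
    p_flip = predict_fn(images[..., ::-1])[..., ::-1]   # flip back after predicting
    return (p + p_flip) / 2

# preds_plain / preds_scse: per-pixel probabilities from the two 5-fold models
blend = 0.43 * preds_plain + 0.57 * preds_scse
</code></pre>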
TGS Salt Identification Challenge
1st Place Solution with Code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TGS Salt Identification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I've just put Lovasz output to Sigmoid, normalize obtained numbers to [0;1] interval. and treat them as probabilities</p> <p>We make prediction on test image. For each pixel in the test image we have the probability to be the salt. If this probability &lt; 0.2 or &gt; 0.8 then we assume that it's confident pixel prediction (confident non-salt or confident salt). Confidence of the overall image is percentage of confident pixels out of 101^2 pixels. Finally, for pseudolabels we add only those test images, where this percentage is e.g. greater than 0.95.</p> <p>For pseudolabels one does not use any hand labeling, it's just semi-supervised learning method. And it was frequently used in previous competitions.</p> <p>Thank you! The idea is to increase spatial resolution at the middle block of Unet.</p> <p>Here is a reference discussion: <a href="https://www.kaggle.com/c/tgs-salt-identification-challenge/discussion/66940">https://www.kaggle.com/c/tgs-salt-identification-challenge/discussion/66940</a></p> <h3>UPD V2: Paper with our solution approach is published at German Conference on Pattern Recognition (GCPR), 2019:</h3> <p><a href="https://arxiv.org/abs/1904.04445">https://arxiv.org/abs/1904.04445</a></p> <h3>UPD: We've made our code available on github:</h3> <p><a href="https://github.com/ybabakhin/kaggle_salt_bes_phalanx">https://github.com/ybabakhin/kaggle_salt_bes_phalanx</a></p> <p>First of all, I'd like to congratulate and thank my teammate <a href="https://www.kaggle.com/phalanx">phalanx</a> for his great contribution and effort! Also, thanks to organizers for this competition and to <a href="https://www.kaggle.com/hengck23">Heng</a> and <a href="https://www.kaggle.com/pestipeti">Peter</a> for their insightful forum posts.</p> <p>It is my first problem in image segmentation, just 3 months ago I knew nothing about segmentation. So, this 1st place is a tremendous bonus for all the knowledge and experience we've gained. I guess, it's a good example for novices: if you work hard, you could achieve high results even with little background.</p> <h2>Local Validation</h2> <p>We created 5 common folds stratified by depth. Score on local validation had pretty solid correlation with the LB.</p> <h2>1st Stage Training</h2> <p>Each of us developed single model based on training data:</p> <hr> <h3>My model</h3> <p><em>Input:</em> 101 -&gt; resize to 192 -&gt; pad to 224</p> <p><em>Encoder:</em> ResNeXt50 pretrained on ImageNet</p> <p><em>Decoder:</em> conv3x3 + BN, Upsampling, scSE</p> <p><em>Training overview:</em></p> <p>Optimizer: RMSprop. Batch size: 24</p> <ol> <li>Loss: BCE+Dice. Reduce LR on plateau starting from 0.0001</li> <li>Loss: Lovasz. Reduce LR on plateau starting from 0.00005 </li> <li>Loss: Lovasz. 
4 snapshots with cosine annealing LR, 80 epochs each, LR starting from 0.0001</li> </ol> <hr> <h3><a href="https://www.kaggle.com/phalanx">phalanx</a> model</h3> <p>It was ResNet34 (architecture is similar to resnet_34_pad_128 described below) with input: 101 -&gt; resize to 202 -&gt; pad to 256</p> <hr> <ul> <li>5-fold ResNeXt50 had 0.864 Public LB (0.878 Private LB)</li> <li>5-fold ResNet34 had 0.863 (0.880 Private)</li> <li>Their ensemble scored 0.867 (0.885 Private)</li> </ul> <h2>2nd Stage Training</h2> <p>Based on the ensemble from the 1st stage, we created a set of confident pseudolabels. The confidence was measured as percentage of confident pixel predictions (probability &lt; 0.2 or probability &gt; 0.8).</p> <p>Then, again, we had 2 models:</p> <ol> <li>My ResNeXt50 was pretrained on confident pseudolabels; and 5 folds were trained on top of them. 0.871 (0.890 Private)</li> <li><a href="https://www.kaggle.com/phalanx">phalanx</a> added 1580 pseudolabels to each of 5 folds and trained the model from scratch. 0.861 (0.883 Private)</li> <li>Their ensemble scored 0.870 (0.891 Private)</li> </ol> <h2>3rd Stage Training</h2> <p>We took all the pseudolabels from the 2nd stage ensemble, and <a href="https://www.kaggle.com/phalanx">phalanx</a> trained 2 models:</p> <hr> <h3>resnet_34_pad_128</h3> <p><em>Input:</em> 101 -&gt; pad to 128</p> <p><em>Encoder:</em> ResNet34 + scSE (conv7x7 -&gt; conv3x3 and remove first max pooling)</p> <p><em>Center Block:</em> <a href="https://arxiv.org/abs/1805.10180">Feature Pyramid Attention</a> (remove 7x7)</p> <p><em>Decoder:</em> conv3x3, transposed convolution, scSE + hyper columns</p> <p><em>Loss:</em> Lovasz</p> <hr> <h3>resnet_34_resize_128</h3> <p><em>Input:</em> 101 -&gt; resize to 128</p> <p><em>Encoder:</em> ResNet34 + scSE (remove first max pooling)</p> <p><em>Center Block:</em> conv3x3, <a href="https://arxiv.org/abs/1703.02719">Global Convolutional Network</a></p> <p><em>Decoder:</em> <a href="https://arxiv.org/abs/1805.10180">Global Attention Upsample</a> (implemented like senet -&gt; like scSE, conv3x3 -&gt; GCN) + deep supervision</p> <p><em>Loss:</em> BCE for classification and Lovasz for segmentation</p> <hr> <p><em>Training overview:</em></p> <p>Optimizer: SGD. Batch size: 32.</p> <ol> <li>Pretrain on pseudolabels for 150 epochs (50 epochs per cycle with cosine annealing, LR 0.01 -&gt; 0.001)</li> <li>Finetune on train data. 5 folds, 4 snapshots with cosine annealing LR, 50 epochs each, LR 0.01 -&gt; 0.001</li> </ol> <hr> <ul> <li>resnet_34_pad_128 had 0.874 (0.895 Private)</li> <li>resnet_34_resize_128 had 0.872 (0.892 Private)</li> </ul> <h2>Final Model</h2> <p>Final model is a blend of ResNeXt50 from the 2nd stage and resnet_34_pad_128 from the 3rd stage with horizontal flip TTA: 0.876 Public LB (0.896 Private LB).</p> <h2>Augmentations</h2> <p>We were using pretty similar list of augmentations. My augmentations were based on the great <a href="https://github.com/albu/albumentations">albumentations</a> library:</p> <ul> <li>HorizontalFlip(p=0.5)</li> <li>RandomBrightness(p=0.2, limit=0.2)</li> <li>RandomContrast(p=0.1, limit=0.2)</li> <li>ShiftScaleRotate(shift_limit=0.1625, scale_limit=0.6, rotate_limit=0, p=0.7)</li> </ul> <h2>Postprocessing</h2> <p>We developed postrpocessing based on jigsaw mosaics. 
Here is an idea:</p> <ol> <li>Find all vertical and half-vertical (bottom half of the image is vertical) images in train data</li> <li>All test images below them in mosaics get the same mask</li> <li>Only one test image above them get the same mask, and only if its depth in mosaic &gt;= 3</li> </ol> <p>Unfortunately, it gave huge boost on Public LB and no boost on Private:</p> <p>0.876 -&gt; 0.884 on Public LB and 0.896 -&gt; 0.896 on Private LB</p> <h2>GPU resources</h2> <ul> <li>I had only a single 1080</li> <li><a href="https://www.kaggle.com/phalanx">phalanx</a> had a single 1080Ti and got another one only during the last week of competition</li> </ul> <h2>Frameworks</h2> <ul> <li>I was using Keras. Special thanks to <a href="https://www.kaggle.com/pavel92">qubvel</a> for his great <a href="https://github.com/qubvel/segmentation_models">repo</a> with segmentation zoo in Keras</li> <li><a href="https://www.kaggle.com/phalanx">phalanx</a> was using PyTorch</li> </ul> <p>I was making experiments in the beginning of competition, and surprisingly 192 worked better than 202 (just x2 upsample)</p> <p>I guess, we will share it, if it is allowed by the organizers</p> <p>I'm not sure about sharing the code (at least, now), but it's interesting that I've got around 0.852 using BCE+DICE. And the rest was gained using Lovasz training. And I made a couple of changes to qubvel architecture: changed last layer convolutions to 1x1, added scSE in decoder and Dropout before the last layer</p> <p>The idea is described in this paper: <a href="https://arxiv.org/abs/1803.02579">https://arxiv.org/abs/1803.02579</a>. And here is one of the discussions in the forum: <a href="https://www.kaggle.com/c/tgs-salt-identification-challenge/discussion/66178">https://www.kaggle.com/c/tgs-salt-identification-challenge/discussion/66178</a></p> <p>Thank you! All our architectures were kind of 'Unet'-ish, but with some custom tweaks</p> <p>Thanks! Maybe I have written a little bit unclear: 1) Pretrain model on pseudolabels for 150 epochs (50 epochs/cycle with cosine annealing) 2) Take weights after 150 epochs and train 4 snapshots for 5 folds, meaning 4*5=20 final snapshots</p> <p>You could go by the link: <a href="https://www.kaggle.com/c/tgs-salt-identification-challenge/submissions">https://www.kaggle.com/c/tgs-salt-identification-challenge/submissions</a> and see Private LB scores for all your submissions. Moreover, you could make additional late submissions</p> <p>I've added scSE only to the Decoder. You could clone the <a href="https://github.com/qubvel/segmentation_models">qubvel repo</a> and modify U-Net builder <a href="https://github.com/qubvel/segmentation_models/blob/master/segmentation_models/unet/builder.py">here</a>. My changes (including scSE block) could be found <a href="https://github.com/ybabakhin/kaggle_salt_bes_phalanx/blob/master/bes/segmentation_models/unet/builder.py">here</a> </p> <p>Finally, we've made our code available on github: <a href="https://github.com/ybabakhin/kaggle_salt_bes_phalanx">https://github.com/ybabakhin/kaggle_salt_bes_phalanx</a></p> <p>I use a combination of Jupyter Notebooks and PyCharm while developing the code</p>
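<p>For readers new to pseudo-labelling, here is a hedged numpy sketch of the confidence rule described above (identifiers are mine; binarising the kept maps at 0.5 is my assumption, not a detail stated by the authors):</p> <pre>
import numpy as np

def pixel_confidence(prob_map, low=0.2, high=0.8):
    # Fraction of pixels that are confidently non-salt (p below `low`)
    # or confidently salt (p above `high`) in a 101x101 probability map.
    return ((prob_map < low) | (prob_map > high)).mean()

def select_pseudo_labels(test_probs, min_confidence=0.95):
    # Keep only test images whose ensemble prediction is confident enough;
    # the kept probability maps become pseudo-label masks.
    selected = {}
    for image_id, prob in test_probs.items():
        if pixel_confidence(prob) > min_confidence:
            selected[image_id] = (prob > 0.5).astype(np.uint8)
    return selected
</pre>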
TGS Salt Identification Challenge
236th place solution: null+Hflip TTA, single model, single fold, single snapshot, no pseudolabels
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TGS Salt Identification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Which link?</p> <p><strong>Model</strong></p> <p>se-resnext50 with slightly modified <a href="https://github.com/ternaus/TernausNet/blob/master/unet_models.py#L135">Albunet</a> and some additions:</p> <ul> <li>hypercolumn</li> <li>scSE added at the end of decoder block and after hypercolumn</li> <li>coordconv added in hypercolumn(don't know if it improved my score, but why not?)</li> </ul> <p><strong>Videocards</strong></p> <p>Single 1080ti</p> <p><strong>Augmentation</strong></p> <p>I used <a href="https://github.com/albu/albumentations">albumentations</a> to augment my data:</p> <ul> <li>HFlip</li> <li>RandomBlur</li> <li>RandomContrast</li> <li>RandomBrightness</li> <li>RandomGamma</li> </ul> <p><strong>Train</strong></p> <p>Instead of using ShiftScaleRotate and RandomSizedCrop as augmentations for each epoch I padded images and masks to pad_size in pad_sizes=[128, 160, 192] and after that randomly cropped to crop_size in crop_sizes=[96, 128, 160, 192] if pad_size is more than or equal to item in crop_sizes. So that I do not used interpolation methods at all.</p> <ul> <li>batch_size: 5</li> <li>LR: Adam. start_lr:1e-4; cosine_annealing + ReduceLrOnPlateau ( change alpha_zero in cosine_annealing when lr on plateau reduced); min alpha_zero: 1e-5</li> <li>loss: 0.1*BCE + 0.9 * lovasz</li> </ul> <p><strong>Final Score</strong></p> <ul> <li>Local LB: 0.855</li> <li>Public LB: 0.846</li> <li>Private LB: 0.866</li> </ul> <p>UPD: code available <a href="https://github.com/mrlzla/kaggle_salt">here</a></p>
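<p>A hedged albumentations sketch of the pad-then-crop scheme described above, which avoids any interpolation (the size lists follow the write-up; the helper name and the reflect-101 border mode are my assumptions):</p> <pre>
import random
import cv2
import albumentations as A

def random_pad_then_crop(image, mask, pad_sizes=(128, 160, 192), crop_sizes=(96, 128, 160, 192)):
    # Pad to a randomly chosen size, then random-crop to a size that is
    # not larger than the padded one, so no resize/interpolation is used.
    pad = random.choice(pad_sizes)
    crop = random.choice([c for c in crop_sizes if c <= pad])
    aug = A.Compose([
        A.PadIfNeeded(min_height=pad, min_width=pad, border_mode=cv2.BORDER_REFLECT_101),
        A.RandomCrop(height=crop, width=crop),
    ])
    out = aug(image=image, mask=mask)
    return out['image'], out['mask']
</pre>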
TGS Salt Identification Challenge
26th place solution (resnet34)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TGS Salt Identification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>1. Augmentations:</strong> </p> <ul> <li>RandomCrop, HorizontalFlip, RandomContrast, RandomBrightness, RandomShift, HorizontalShare, RandomRotation, Expand, Resize(224)</li> </ul> <p><strong>2. Model design:</strong></p> <ul> <li>Resnet34 (removed first maxpool) + scSE + Hypercolumn (all activations in decoder - ELU)</li> </ul> <p><strong>3. Training:</strong></p> <ul> <li>5-fold CV (stratified by depth)</li> <li>Batch size: 16</li> <li>SGD: momentum 0.9</li> <li>LR schedule: cosine annealing from 0.01 to 0.0005, cycle length - 50 epochs, 10 cycles per fold</li> </ul> <p><strong>4. Loss:</strong></p> <ul> <li>Lovasz Hinge * 0.9 + BCE * 0.1</li> </ul> <p><strong>5. Prediction:</strong></p> <ul> <li>The mean of the predictions from 5 to 10 cycle of 5 folds. (TTA with horizontal flip, threshold=0.45)</li> </ul> <p><strong>6. Result:</strong></p> <ul> <li>Training and inferencing take about 5 days on 2x 1080ti GPU. The final ensemble trained in this way without any post-processing gets 0.888 Private LB and 0.865 Public LB.</li> </ul> <p><em>Framework: PyTorch</em></p> <p>Thanks! It's about ~40 hours on one GPU</p>
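<p>Since the scSE block comes up in almost every solution in this thread, here is a minimal PyTorch sketch of it (my own version of the block from Roy et al., not the author's code):</p> <pre>
import torch.nn as nn

class SCSEBlock(nn.Module):
    # Concurrent spatial and channel squeeze-and-excitation.
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # channel excitation + spatial excitation, summed as in the paper
        return x * self.channel_gate(x) + x * self.spatial_gate(x)
</pre>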
TGS Salt Identification Challenge
28th place solution: single model
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TGS Salt Identification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Our team: me, <a href="https://www.kaggle.com/insaff">insaff</a>, <a href="https://www.kaggle.com/leon7archer">leon7archer</a></p> <p>Our final solution was a single 5-fold model with hflip TTA.</p> <p><strong>Input</strong></p> <p>Input: 128x128, where the 1st channel is the source image, the 2nd channel is relative depth ((current_depth - min_depth) / (max_depth - min_depth)) and the third is CoordConv (like here <a href="https://eng.uber.com/coordconv/">https://eng.uber.com/coordconv/</a>). We tried higher resolution but faced much slower learning and convergence without getting any improvement.</p> <p><strong>Augmentations</strong></p> <p>Brightness, blurs, hflip, scale, rotate, contrast. However, using heavy augmentations could cause problems while training.</p> <p><strong>Base model</strong></p> <p>SE-ResNeXt-50 (it outperformed all Resnets and the more complex SE-ResNeXt-101). However, we modified the first layer: at first we used it without max pooling, but then switched to stride = 1 and brought max pooling back.</p> <p>ScSE (Spatial-Channel Squeeze &amp; Excitation) both in Encoder and Decoder</p> <p>Hypercolumn</p> <p>Without any dropouts. That was crucial because it sped up the training and improved the final result. We even tried to use only 16 filters but ended up using 32.</p> <p><strong>Learning</strong></p> <p>Batch size = 20, Adam, cyclic learning rate (mode='triangular2', base_lr=1e-4, max_lr=3e-4, step_size=1500), heavy snapshot assembling (averaging the last 10 best models with exponentially decreasing weights). Snapshot assembling made blending separate models useless; at this LB level, weaker models didn't give any boost.</p> <p>80 epochs with BCE loss and 0.1 BCE + 0.9 Lovasz as a loss function</p> <p>We used a 1080ti and a 1070ti with pytorch for training.</p> <p><strong>PostProcessing</strong></p> <p>Removing small masks and small independent masks (black and white) using cv2.connectedComponentsWithStats (morphology didn't work that well)</p> <p>A higher (0.40+) threshold seems to give a better score on the private LB.</p> <p>Jigsaw didn't give improvements on the local LB, so we didn't use it in the final submission.</p> <p>Do you use just Keras, or can you write in TensorFlow too?</p> <p>Thanks!</p>
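<p>A hedged numpy sketch of how the three input channels described above could be assembled (names and normalisation details are mine; resizing to 128x128 would happen afterwards in the real pipeline):</p> <pre>
import numpy as np

def build_input(gray_101, depth, depth_min, depth_max):
    # Stack: grayscale image, per-image relative depth,
    # and a CoordConv-style row-coordinate channel.
    h, w = gray_101.shape
    image = gray_101.astype(np.float32) / 255.0
    rel_depth = np.full((h, w), (depth - depth_min) / (depth_max - depth_min), dtype=np.float32)
    coord_rows = np.repeat(np.linspace(0.0, 1.0, h, dtype=np.float32)[:, None], w, axis=1)
    return np.stack([image, rel_depth, coord_rows], axis=0)  # (3, H, W), channels-first
</pre>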
TGS Salt Identification Challenge
30th: strong baseline with no post-processing
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TGS Salt Identification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Greetings everybody, greetings Kaggle, on this nice segmentation story, my first one! Though I haven't read the Discussions much, thanks a lot first of all to <a href="/hengck23">@hengck23</a> for his hard work, thanks to the ods.ai community and of course my team: <a href="/ddanevskyi">@ddanevskyi</a> <a href="/vshmyhlo">@vshmyhlo</a>.</p> <p>We've got a strong folded single-model baseline [0.869 and 0.887] and its configuration might be useful for somebody. Unfortunately we started too late and did no post-processing, blending or ensembling at all, so it is what it is.</p> <p><strong>Formula:</strong></p> <ul> <li><p>Deep network (resnet152).</p></li> <li><p>Remove that famous pooling :p.</p></li> <li><p>Adam for 150 epochs with LR decreasing from 1e-4 to 1e-6 by a factor of 0.5 (StepLR, later changed to Plateau).</p></li> <li><p>Backprop on the full image (with replicated borders included).</p></li> <li><p>Loss: segment_bce + 10 * boundary_bce for the first 30 epochs, then changed to lovasz_hinge + 10 * boundary_bce.</p></li> <li><p>Hypercolumn with 0.5 dropout in the decoder.</p></li> <li><p>Sqex in the decoder.</p></li> <li><p>Save checkpoints on the average validation score between val and val_tta (might be important). Evaluating val on images cropped to 101*101.</p></li> <li><p>Upsample instead of Deconv, and I've changed the inner order to conv/upsample/conv compared to the original Albunet <a href="https://github.com/ternaus/TernausNet">https://github.com/ternaus/TernausNet</a></p></li> <li><p>Hard augmentations surprisingly work best. Contrast, Brightness, Gamma, Blur, Horizontal Flip, Shift and ShiftScale up to 50 pixels.</p></li> <li><p>Last 10 epochs with no augmentations (HorizontalFlip only) helped a bit.</p></li> </ul> <p><strong>What would be enough for gold?</strong></p> <ul> <li><p>A better network like se-resnet, se-resnext.</p></li> <li><p>Pseudolabeling.</p></li> </ul> <p>That's it. Won't take your time anymore. Cheers!</p> <p>As I remember, around 1.5-2 points on average, but I changed it because I wanted to try conv2d/upsample/conv2d (I started from conv2d/conv2d/upsample, then switched to deconv and then back to this approach). Might be a statistical error.</p>
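<p>A minimal PyTorch sketch of the conv/upsample/conv decoder block mentioned above (my own rendering of the idea, not the author's code):</p> <pre>
import torch.nn as nn

class ConvUpsampleConv(nn.Module):
    # Decoder block in the conv -> upsample -> conv order,
    # using bilinear upsampling instead of a transposed convolution.
    def __init__(self, in_channels, mid_channels, out_channels):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_channels, mid_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
            nn.Conv2d(mid_channels, out_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)
</pre>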
TGS Salt Identification Challenge
32nd solution with code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TGS Salt Identification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Sorry, it is a typo mistake, it should be 1080Ti. I have not used 1070Ti, so I am not sure about the gap between it and 1080Ti.</p> <p>Yes, pseudo-labels improved private LB 0.002 (from 0.885 to 0.887)</p> <p>Thanks, Gary. Yes, there is few new thing in my solution. I think I got this score probably because I have four 1080, hhh...</p> <p>First of all, special thanks to the kagglers’ excellent sharing, especially <a href="https://www.kaggle.com/hengck23">Heng</a>. I have learnt a lot from this competition. Actually, this is my first kaggle competition, before that, I just ran the start kernel shared by others. I am very lucky to be able to participate in this competition. I will be very happy if my sharing is useful. </p> <p>Here is my code:</p> <p><a href="https://github.com/mingmingDiii/kaggle_tgs">Tgs code</a></p> <p>And here is my solution:</p> <h1>1. Encoder</h1> <p>I have tried a lot of backbone, such as resnet34, senet, se-resnet, etc. But only Densenet and DPN worked for me. I used three encoders in my final models:</p> <ul> <li>densenet without dilation</li> <li>densenet with dilation</li> <li>normal DPN (Dual Path Networks)</li> </ul> <p>(More details can be found in the codes)</p> <h1>2. Decoder</h1> <ul> <li>Unet style structure with hypercolumn (Heng's solution)</li> <li>linknet style structure without hypercolumn</li> <li>ModifiedSCSEBlock (the original scse block did not work for me)</li> </ul> <p>(I also tried other structures, such as ASPP, PSP, OCNET, Vortex pooling, but they can’t outperform the above structures)</p> <h1>3. Training</h1> <ul> <li>adam (SGD did not work for me )</li> <li>(0.5) decay learning rate if the validation miou did not improve for 15 epoches</li> <li>Training 40 more epoches after got the highest miou on validation dataset.</li> <li>10 folds for each model</li> <li>5 cycle learning for each fold</li> <li>Heng's data sugmentation</li> <li>deep supervision</li> <li>lovasz loss</li> </ul> <h1>4. Pseudo label</h1> <p>Afraid of overfitting, I emsembled the results from densenet to train DPN. This improved 0.001 in public LB.</p> <h1>5. Final Ensemble</h1> <p>Densenet(with dilation)+Densenet(without dilation,two scale 128and224)+DPN+link_Densenet+Pseudo Just average all of them.</p> <h1>6. Testing</h1> <p>Just flip horizontal TTA. I have tried some other TTA, did not work for me.</p> <h1>7. Post-processing</h1> <p>Modified some masks if they adjoin with vertical masks in the jigsaw.(just copy them)</p> <p>(SPN, CRF, filtering small image blocks did not work for me)</p> <p>Thanks, Heng. I will try that.</p>
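<p>Deep supervision is listed among the training tricks above; here is a hedged PyTorch sketch of the idea (BCE is used for brevity, whereas this write-up trains with Lovasz loss, and the 0.4 auxiliary weight is a placeholder):</p> <pre>
import torch.nn.functional as F

def deep_supervision_loss(main_logits, aux_logits_list, target, aux_weight=0.4):
    # Auxiliary heads taken from intermediate decoder stages are trained
    # against a downsampled version of the ground-truth mask (B, 1, H, W).
    loss = F.binary_cross_entropy_with_logits(main_logits, target)
    for aux_logits in aux_logits_list:
        small_target = F.interpolate(target, size=aux_logits.shape[-2:], mode='nearest')
        loss = loss + aux_weight * F.binary_cross_entropy_with_logits(aux_logits, small_target)
    return loss
</pre>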
TGS Salt Identification Challenge
33rd place solution with code (AutoML enabled)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TGS Salt Identification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>You can reference : <a href="https://arxiv.org/pdf/1801.05746.pdf">https://arxiv.org/pdf/1801.05746.pdf</a> which uses a VGG11 pretrained encoder. In fast ai courses, Jeremy Howard demonstrates using pretrained Resnet encoder for Cavarna competition. Many of us found that resnet encoder has pretty good performance for this competition based on experiments. The reason is that pretrained encoders already learned the basic features (like basic shapes, edges, angles) of images, so it is much easier to be trained.</p> <p>A special thanks to kagglers who shared ideas and findings in community, especially <a href="https://www.kaggle.com/hengck23">Heng</a> and <a href="https://www.kaggle.com/pestipeti">Peter</a> for their generous sharing . I learnt a lot from this competition. </p> <p>Code based on pytorch is available <a href="https://github.com/Microsoft/nni/tree/master/examples/trials/kaggle-tgs-salt">here</a>, it can run standalone, and can also run with <a href="https://github.com/Microsoft/nni">NNI</a> AutoML toolkit to do model/parameter search.</p> <h1>Progress</h1> <p>I started with Resnet152 UNet baseline, which gives me around 0.81 public LB, then switched to Resnet34 and use lovasz loss (elu+1), and ensemble a few models, these gave me around 0.83 public LB. Then by training the models for more epochs with cosine annealing and reduceonplateau LR scheduler, and doing 4 ttas ( horizon flip, vertical flip, null, both) , ensemble 4 of 10 folds, I got 0.84 public LB.</p> <p>Training data is split in 10 folds randomly, I used only first 4 folds. During the competition, I tried to use another split with stratified on mask coverage, but I was not able to improve the score on second split, then switch back to the original random data split. </p> <p>Based on this 0.84 model, I tried following ideas:</p> <ol> <li><p><a href="https://arxiv.org/pdf/1803.02579.pdf">Spatial and channel squeeze &amp; excitation module</a> (scse) </p></li> <li><p><a href="https://arxiv.org/pdf/1411.5752.pdf">Hypercolumn</a></p></li> <li><p>add depths channel : scse + hypercolumn + depths channel, together improve public LB from 0.84 to 0.856. 
tta changed from 4 to 2 (null, hflip) because the depths channel was added.</p></li> <li><p>CRF post-processing: no help</p></li> <li><p>Remove the first pooling layer of the pretrained resnet34: no improvement as a single model, but this model (on 4 folds) was used in the ensemble</p></li> <li><p>Use resnet50 as backbone: no improvement as a single model either; this model (on 4 folds) was used in the ensemble</p></li> <li><p>More training (SGD, weight decay 0.0001, cosine annealing, reduce on plateau), ensemble (simple averaging) of 4 folds x 3 models (res34, res50, res34 without first pooling) x 2 ttas (null, flip), got 0.864 public LB</p></li> <li><p><a href="https://arxiv.org/pdf/1712.04440.pdf">pseudo labeling</a>, +0.002, 0.866 public LB and 0.887 private LB</p></li> <li><p>Change padding to resize: did not work for me, maybe I did not train the resize models for enough epochs.</p></li> </ol> <p>not tried:</p> <ol> <li><p><a href="https://arxiv.org/pdf/1505.02496.pdf">Deep supervision</a></p></li> <li><p><a href="https://arxiv.org/pdf/1704.00109.pdf">Snapshot ensembling</a></p></li> <li><p>Salt Jigsaw Puzzle</p></li> </ol> <p>Maybe I could have got a gold if I had tried these three, but I am satisfied with my rank and grateful. Thanks everyone for this amazing competition!</p> <h1>Solution summary</h1> <h2>Data</h2> <p>10-fold cross-validation, random split. 4 folds are used.</p> <p>images are padded (edge) to 128x128</p> <p>augmentation:</p> <p>rotate: 10 degrees</p> <p>h flip, p=0.5</p> <p>translate: x: -0.1, 0.1</p> <p>brightness: 0.9 - 1.1</p> <h2>Model</h2> <p>UNet with pretrained resnet34/50 as encoder, with scse and hypercolumn</p> <p>model 1: resnet34 as encoder</p> <p>model 2: resnet50 as encoder</p> <p>model 3: replace upsample with deconv, remove the first pooling layer of model 1</p> <h2>Loss function</h2> <p><a href="https://arxiv.org/pdf/1705.08790.pdf">Lovasz</a> (elu+1)</p> <h2>Post processing</h2> <p>No post-processing at all</p> <h2>Training process</h2> <p>stage 1: training without the depths channel, with heavy augmentation for several hundred epochs</p> <p>optimizer: SGD with weight decay 0.0001</p> <p>lr scheduler: reduce on plateau, then cosine annealing.</p> <p>stage 2: add the depths channel, fine-tune the stage 1 model with smaller augmentation, prediction with tta and ensemble</p> <p>stage 3: fine-tune stage 2 models with pseudo labeling.</p> <p>@YourVenn, I originally tried training the first model on the other folds, but the 10-fold result was not better than 4 folds. Then I trained the other 2 models; to compare model performance, I trained them on the same 4 folds for a very long time. <br> Maybe it is better to train the other models on different folds, there was just not enough time and resources. </p>
TGS Salt Identification Challenge
4th Place Solution(code updated)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TGS Salt Identification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Right — the cycles of all models are binarized first and then voted on.</p> <p>Sorry, I only just saw this; b.e.s has already answered you. My solution does not need per-pixel confidences: the final multi-snapshot ensemble is also obtained by binarizing first and then voting.</p> <p>Sorry, it's a mistake.</p> <p>Yes, 10-fold with majority voting (a different threshold at each cycle), not averaging.</p> <p>For each cycle (snapshot), we choose the best threshold for binarization according to the validation mAP. After we get the binary results of all cycles (snapshots, 10 folds * 7 cycles), we do majority voting to ensemble.</p> <p>Exactly!</p> <p>First of all, thanks to Heng for sharing all his ideas. Our team followed every step that Heng suggested. This is my first complete Kaggle competition, many thanks to my teammates!</p> <p>Solution Code: <a href="https://github.com/SeuTao/Kaggle_TGS2018_4th_solution">https://github.com/SeuTao/Kaggle_TGS2018_4th_solution</a></p> <h2><strong>Solution development:</strong></h2> <h2>1. Single model design:</h2> <ol> <li>input: 101 random pad to 128*128, random LR flip;</li> <li>encoder: resnet34, se-resnext50, resnext101_ibna, se-resnet101, se-resnet152, senet154;</li> <li>decoder: scse, hypercolumn (not used in the networks with resnext101_ibna, se_resnext101 backbones), ibn block, dropout;</li> <li>Deep supervision structure with Lovasz softmax (a great idea from Heng);</li> <li>We designed 6 single models for the final submission;</li> </ol> <h2>2. Training:</h2> <ul> <li>SGD: momentum -- 0.9, weight decay -- 0.0002, lr -- from 0.01 to 0.001 (changed each epoch);</li> <li>LR schedule: cosine annealing with snapshot ensemble (shared by Peter), 50 epochs/cycle, 7 cycles/fold, 10 folds;</li> </ul> <h2>3. Model ensemble: +0.001 in public LB / +0.001 in private LB</h2> <ul> <li>voting across all cycles</li> </ul> <h2>4. Post processing: +0.010 in public LB / +0.001 in private LB</h2> <p>According to the 2D and 3D jigsaw results (amazing ideas and great job from @CHAN), we applied around 10 handcrafted rules that gave a 0.010~0.011 public LB boost and a 0.001 private LB boost.</p> <h2>5. Data distill (Pseudo Labeling): +0.002 in public LB / +0.002 in private LB</h2> <p>We started this part in the middle of the competition. As Heng posts, pseudo labeling is pretty tricky and has the risk of overfitting. I am not sure whether it would boost the private LB until the result is published. I just post our results here <a href="https://github.com/SeuTao/Kaggle_TGS2018_4th_solution">https://github.com/SeuTao/Kaggle_TGS2018_4th_solution</a>, the implementation details will be updated.</p> <p><strong>6. Ideas we hadn't tried:</strong></p> <ul> <li>mean teacher: We had no time to do this experiment. I think mean teacher + jigsaw + pseudo labeling is promising.</li> </ul> <p><strong>7. Ideas that didn't work:</strong></p> <ul> <li>oc module: The secret weapon of @alex's team. 
We couldn't get it to work.</li> </ul> <h2>Related papers:</h2> <ul> <li><a href="https://arxiv.org/abs/1608.03983">https://arxiv.org/abs/1608.03983</a> LR schedule</li> <li><a href="https://arxiv.org/abs/1411.5752">https://arxiv.org/abs/1411.5752</a> Hypercolumns</li> <li><a href="https://arxiv.org/abs/1712.04440">https://arxiv.org/abs/1712.04440</a> Data distillation</li> <li><a href="https://link.springer.com/chapter/10.1007/978-3-030-01225-0_29">https://link.springer.com/chapter/10.1007/978-3-030-01225-0_29</a> IBN</li> <li><a href="https://arxiv.org/abs/1705.08790">https://arxiv.org/abs/1705.08790</a> Lovasz</li> <li><a href="https://arxiv.org/abs/1803.02579">https://arxiv.org/abs/1803.02579</a> Squeeze and excitation</li> </ul> <p>We found some nearby slices from the 2D jigsaw, which helped us understand the data better. It also brought us 10+ vertical masks, but we know that didn't help on the private LB. Data distillation was maybe the only trick that worked in our solution.</p> <p>Thank you!</p> <p>Thanks!</p> <p>haha</p> <p>Thank you!</p> <p>Fellow alumnus, awesome!</p> <p>Thank you!</p> <p>Thank you!</p>
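<p>A hedged numpy sketch of the cycle voting described above (each snapshot is binarised with its own best validation threshold, then a pixel-wise majority vote is taken; the function name is mine):</p> <pre>
import numpy as np

def majority_vote(prob_maps, thresholds):
    # prob_maps: list of per-snapshot probability maps for one image
    # thresholds: the best validation threshold chosen for each snapshot
    votes = np.stack([(p > t).astype(np.uint8) for p, t in zip(prob_maps, thresholds)], axis=0)
    return (votes.sum(axis=0) * 2 > votes.shape[0]).astype(np.uint8)  # strict majority
</pre>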
TGS Salt Identification Challenge
54th Solution: minimum post-processing
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TGS Salt Identification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I didn't have enough computing power to run the best model on all 7 folds. Multi-class segmentation model with Pyramid OC module gave me best results on single fold (public LB 0.857 with TTA), and I think I could do better if I ran the same model as fold 6-7 on all other folds.</p> <p>First of all I want to thank <a href="https://www.kaggle.com/hengck23">Heng</a> and <a href="https://www.kaggle.com/pestipeti">Peter</a> for their generosity! This competition would be much harder for me without your help.</p> <p>I almost did nothing in the last week since I read somewhere saying that one needed to train ~1000 epochs to reach public LB 0.865 using single model. It would take forever to train my models using my poor GTX 980m and Neptune.ml almost bankrupted me (just kidding, the platform is awesome). I thought I couldn't do any better due to the computing power and $. So in the last week, I just sat and trained another fold using Google Cloud Platform.</p> <p>Generally speaking, I have built two different models over 7 folds in this competition. I used different resizing approaches for different models.</p> <hr> <p><strong>Augmentation:</strong></p> <p>I want to thank <a href="https://www.kaggle.com/iglovikov">Vladimir</a> for sharing the Albumentations package. It is really flexible. I use Resize to 128x128 (method depends on model) HorizontalFlip + OneOf((ShiftScale+RandomSizedCrop), GridDistortion, ShiftRotate) + OneOf(RandomBrightness, RandomContrast, RandomGamma)</p> <hr> <p><strong>Model A: deep-supervised</strong> <img src="https://imgur.com/SDc8CVM.png" alt="deep-supervised"> I resize image to 128x128 using reflect padding horizontally and replicate padding vertically. </p> <p><strong>Model B: multi-class segmentation</strong> <img src="https://imgur.com/Dgxc6zw.png" alt="multi-class"> I use bilinear resizing method to scale-up images to 128x128 since I classify images based on salt coverage. It wouldn't make sense to use padding for this model.</p> <p>I use pretrained ResNet34 + scSE as encoder. I tried deeper/wider networks such as ResNet50, seResNet101, seResNext50... 
None of them gave me better result than ResNet34 + scSE</p> <hr> <p><strong>Training:</strong></p> <p>I have total 7 folds determined by 0.92/0.08 random split.</p> <ul> <li><p><strong>Fold 1-3: no OC module</strong></p> <p>CosineAnnealing Learning Rate with Restart: 250 epochs, 50 epochs/cycle, from 0.01 to 0.001</p></li> <li><p><strong>Fold 4: ASP OC module with dilation [12, 24, 36]</strong></p> <p>CosineAnnealing Learning Rate with Restart: 250 epochs, 50 epochs/cycle, from 0.01 to 0.001</p></li> <li><p><strong>Fold 5: Pyramid OC module with kernel size [1, 2, 3, 6]</strong></p> <p>CosineAnnealing Learning Rate with Restart: 210 epochs, 40*1.2^(cycle-1) epochs/cycle, from 0.01 to 0.001</p></li> <li><p><strong>Fold 6-7: Pyramid OC module with kernel size [1, 2, 3, 6]</strong></p> <p>CosineAnnealing Learning Rate with Restart: 295 epochs, 40*1.2^(cycle-1) epochs/cycle, from 0.01 to 0.0005</p></li> </ul> <hr> <p><strong>Post-processing:</strong></p> <p>Null + fliplr TTA</p> <p>For ensemble, I tried a shallow CNN, ridge regression, majority vote etc. Simple averaging sigmoid(output) gave me the best public LB. I took average on cycle 3-5 of fold 1-4, cycle 4-5 of fold 5, and cherry-picked best cycles of fold 6-7. Pixel = salt if its value &gt; 0.5.</p> <p>Note that directly averaging output from lovasz loss gave me terrible result. This maybe the reason why ridge regression doesn't work since sigmoid(output) doesn't give us the posterior probability when model trained using lovasz loss. As <a href="https://www.kaggle.com/c/tgs-salt-identification-challenge/discussion/66924">Heng</a> mentioned in this post, probability calibration might be needed.</p> <p>I tried majority voting based on optimal threshold determined by validation set for each model/fold/TTA. Didn't have enough time to finish it before deadline.</p> <hr> <p><strong>Conclusion</strong></p> <p>This is my second Kaggle competition and my first image processing project. I have learned a lot from all the fellow Kagglers. Thank you everyone and congratulate all the top competitors on your success!</p>
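<p>A short PyTorch sketch of the ensembling detail above: averaging sigmoid probabilities (rather than the raw, uncalibrated Lovasz-trained outputs) before thresholding. This is my own minimal version, not the author's code:</p> <pre>
import torch

def average_predictions(logits_list, threshold=0.5):
    # Average sigmoid probabilities across folds / cycles / TTA variants,
    # then binarise the averaged map.
    probs = torch.stack([torch.sigmoid(l) for l in logits_list], dim=0).mean(dim=0)
    return (probs > threshold).to(torch.uint8)
</pre>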
TGS Salt Identification Challenge
74th place solution & code (Wider ResNet 38 encoder)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TGS Salt Identification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi everyone! Here are some highlights from my solution, which is somewhat different from other solutions, I suppose. Code can be found <a href="https://github.com/greenwolf-nsk/tgs-salt-identification-challenge">here</a> (some parts of it could be a bit messy)</p> <h2>TL;DR</h2> <p><strong>Things that worked</strong></p> <ul> <li>WiderResnet38 + Unet</li> <li>Reflect 101 padding to 128x128</li> <li>Augs: shift scale rotate, brightness/contrast/gamma, elastic transform / grid distortion</li> <li>Train 500 epochs with Adam, 1e-4 lr, BCE loss</li> <li>Cherry-pick the best snapshot from above</li> <li>Train 500 more epochs with Adam, 1e-5 lr and Lovasz loss</li> <li>Train 50x5 SGD with CosineLR, starting from the previous best snapshot</li> <li>Hflip TTA</li> <li>Average masks from 4 folds (all 10 folds would take too long to train)</li> </ul> <p><strong>Things that did not work</strong> (mostly due to the lack of time for properly designed experiments)</p> <ul> <li>Hypercolumn</li> <li>Deep supervision</li> <li>Resnet34 and SGD</li> <li>Border prediction</li> <li>SCSE blocks (I actually used them on some folds in the final solution, but it did not make much difference)</li> </ul> <h2>Model</h2> <p>WiderResnet38 + Unet (a.k.a. TernausNetV2 from <a href="https://github.com/ternaus/TernausNetV2">here</a>), pretrained on ImageNet</p> <h2>Training</h2> <p>As many participants did in the beginning, I struggled to get a strong baseline model. Firstly, I used early stopping (val loss as a condition and about 20 epochs tolerance). Then I just turned it off and trained my network for 500 epochs (~9h on a 1080ti), which got me pretty decent scores, I think 0.02-0.03 more than the early-stopping version (~0.83 public LB).</p> <p>So there is the first takeaway for me: always use the leaderboard metric to choose the best snapshot / early stopping / etc.</p> <p>I decided to push further with that training procedure and trained 500 epochs more with a lower learning rate and Lovasz loss, which gave me about 0.85 public lb.</p> <p>The TTA and snapshot ensemble effect was also dependent on the fold; the best single fold with TTA+SE gave me 0.858 on the public lb.</p> <p>Having a pretty strong baseline, I tried to add other things to my solution but nothing really worked. And with a heavy training procedure like this it is hard to run many experiments, so I did not experiment with augmentations at all.</p> <p>Another takeaway: speed of experiments really matters, and if you can determine whether something works or not without training your huge pipeline - it's a huge advantage. But I think it's mostly about experience and professional gut feeling :)</p> <h2>One interesting idea</h2> <p>To get a 128x128 image from a 101x101 input many of us used reflection padding. But as the inverse transform, just a central crop was used. So what if we wrap the borders back and average the result with that part of the image? I tried it in the middle of the competition on my single-fold 0.858 submit, it got me 0.857 and I didn't try it anymore. 
But on the private LB these scores differ dramatically: the first one is very close to public, 0.859 (no medal), while the score with wrapped borders is 0.876 (silver)! </p> <p>Though it didn't make a huge difference in the final ensemble (0.881 -&gt; 0.880 private), I think it's worth giving a try.</p> <h2>Thanks</h2> <p>To <a href="https://www.kaggle.com/hengck23">Heng</a> for the huge amount of information and ideas, <a href="https://www.kaggle.com/pestipeti">Peter</a> for the detailed sharing of his approach (which I totally failed to reproduce), <a href="https://www.kaggle.com/iglovikov">Ternaus</a> for sharing the TernausNetV2 code and to the <a href="http://ods.ai/">ods.ai</a> community for the great discussions and ideas.</p>
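<p>Since the border-wrapping trick above is only described in words, here is a hedged numpy sketch of one way to implement it, assuming reflect-101 padding (e.g. 101 -&gt; 128) and ignoring the corner regions for simplicity; all names and the exact averaging scheme are mine:</p> <pre>
import numpy as np

def crop_with_border_wrap(pred, orig_h=101, orig_w=101):
    # Average the central crop with the predictions made on the mirrored
    # border strips, mapped back onto the pixels they reflect.
    h, w = pred.shape
    top, left = (h - orig_h) // 2, (w - orig_w) // 2
    bottom, right = h - orig_h - top, w - orig_w - left

    est = pred[top:top + orig_h, left:left + orig_w].astype(np.float64).copy()
    cnt = np.ones_like(est)
    row_band = pred[top:top + orig_h, :]    # original rows, all columns
    col_band = pred[:, left:left + orig_w]  # all rows, original columns

    est[:, 1:left + 1] += row_band[:, :left][:, ::-1]                              # left strip
    cnt[:, 1:left + 1] += 1
    est[:, orig_w - 1 - right:orig_w - 1] += row_band[:, left + orig_w:][:, ::-1]  # right strip
    cnt[:, orig_w - 1 - right:orig_w - 1] += 1
    est[1:top + 1, :] += col_band[:top, :][::-1, :]                                # top strip
    cnt[1:top + 1, :] += 1
    est[orig_h - 1 - bottom:orig_h - 1, :] += col_band[top + orig_h:, :][::-1, :]  # bottom strip
    cnt[orig_h - 1 - bottom:orig_h - 1, :] += 1
    return est / cnt
</pre>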
TGS Salt Identification Challenge
9th place solution with code: single model
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TGS Salt Identification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>mixup didn't work. But it didn't also hurt.</p> <p>@Ping the outputs are actually logits in the range from minus inf to plus inf. So multiplying with -1 reverses it.</p> <p>See: <a href="https://github.com/Cadene/pretrained-models.pytorch">https://github.com/Cadene/pretrained-models.pytorch</a></p> <p>see this: <a href="https://github.com/tugstugi/pytorch-saltnet/blob/master/models/unet.py#L163">https://github.com/tugstugi/pytorch-saltnet/blob/master/models/unet.py#L163</a></p> <ul> <li>from se_resnext50 to senet154: +0.005</li> <li>SWA: +0.004</li> <li>symmetric lovasz: on public LB +0.008 on private LB +0.02</li> </ul> <p>If you are using a preemptive gcloud instance as a gitlab runner, it is little tricky. I bind the preemptive runner to a specific branch and use only there. It enables me to resume a killed job/training.</p> <p>As described above, we tried many decoder architectures including SCSE. It did't affect our LB score. Even a single convolution with a transposed convolution together could achieve public LB 0.866. So we kept our last experimented decoder which used OC.</p> <p>Our team member <a href="https://www.kaggle.com/xuyuan">xuyuan</a> and I are from the same lab so we particapate mostly together as a team. We have started with a se_resnext50 model and gradually integrated the Heng's proposed ideas like hypercolumn, deep supervision etc. Later we switched to SENet154 which gave us a nice boost. The single fold score of SENet154 was 0.882/0.869 and with 10 folds reflective padding + resizing 0.890/0.875.</p> <p>The code with the training hyper parameters is here: <a href="https://github.com/tugstugi/pytorch-saltnet">https://github.com/tugstugi/pytorch-saltnet</a></p> <p>What worked for us so far:</p> <ul> <li>AdamW with the Noam scheduler</li> <li>cutout</li> <li>SWA after the training on the best loss, pixel accuracy, metric and the last models (+0.004).</li> <li>gitlab pipeline to manage training experiments</li> <li>modifying Lovasz to symmetric which gives a good boost in the LB (on public LB +0.008 on private LB +0.02):</li> </ul> <pre> def symmetric_lovasz(outputs, targets): return (lovasz_hinge(outputs, targets) + lovasz_hinge(-outputs, 1 - targets)) / 2 </pre> <p>What didn't work:</p> <ul> <li>SGD</li> <li>ensembling. We have used only a single model with 10 folds.</li> <li>mixup</li> <li>different decoders: We have tried different stuff in the decoders like S&amp;E, DANet, OCNet, pixel shuffling, transposing, upsamling etc. But it didn't affect the LB score. Even a simple single convolution together with a transposed convolution can achieve a high LB score.</li> </ul> <p>Hereby we want to thank Heng. Without his ideas, it would be difficult for us to reach a top 10 place:)</p>
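<p>A hedged PyTorch sketch of the SWA-style checkpoint averaging mentioned above (averaging a handful of saved snapshots such as best loss, best pixel accuracy, best metric and the last one); in practice batch-norm statistics would be re-estimated on training data afterwards:</p> <pre>
import torch

def average_checkpoints(checkpoint_paths):
    # Simple arithmetic mean of the state dicts of several checkpoints.
    averaged = None
    for path in checkpoint_paths:
        state = torch.load(path, map_location='cpu')
        if averaged is None:
            averaged = {k: v.float().clone() for k, v in state.items()}
        else:
            for k in averaged:
                averaged[k] += state[k].float()
    return {k: v / len(checkpoint_paths) for k, v in averaged.items()}
</pre>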
TGS Salt Identification Challenge
A way to overfit private(Public:0.856=>Private:0.888)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TGS Salt Identification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This is about one of my test models, and it was actually my best single model on private. I just wanted to share this result because I thought it's kinda interesting in regard to the public/private datasets. Although this model wasn't my best, an idea implemented here turned out to give a huge boost on the private LB and possibly big shake-ups. I guess nobody could have chosen this kind of model/submission as one of the final 2 selections though... </p> <p><strong>A key idea to overfit for private:</strong> </p> <ul> <li>Trained without the “weird masks” dataset (3884 images in total) mentioned in this kernel (<a href="https://www.kaggle.com/pestipeti/fake-incorrect-training-masks">https://www.kaggle.com/pestipeti/fake-incorrect-training-masks</a>). This approach had already been proposed in other threads, and most claimed that it “increased CV, decreased public”. In my case, this gave me a huge overfit towards the private LB. </li> </ul> <p><strong>Base model is nothing special:</strong> </p> <ul> <li>Encoder: se_resnext50 (pretrained on ImageNet)</li> <li>Hypercolumns</li> <li>Attention-gated</li> <li>Dilated conv</li> <li>Deep supervision</li> </ul> <p><strong>Other conditions:</strong> </p> <ul> <li>Input size: 128 (reflect padding)</li> <li>Optimizer: SGD</li> <li>LR: 0.01 for 100 epochs, 0.001 for another 20 epochs</li> <li>Lovasz loss with relu()+1 activation</li> <li>5 folds, best snapshot of each fold (CV: 0.8875) </li> </ul> <p>By the way, my best submission slightly changed the model above, was trained on the full dataset and applied 3 post-processing rules, which gave me 25th on private. </p>
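<p>For illustration only, one simple heuristic for flagging suspicious vertical-stripe masks like those discussed in the linked kernel might look like the numpy sketch below; this is my own guess at a rule, not the kernel's exact definition of the “weird masks” set:</p> <pre>
import numpy as np

def is_vertical_stripe_mask(mask):
    # Flag a mask as suspicious if it is non-empty and every row is
    # identical, i.e. the salt runs in full-height vertical stripes.
    binary = (mask > 0)
    return bool(binary.any()) and bool(np.all(binary == binary[0:1, :]))
</pre>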
TGS Salt Identification Challenge
Another top 3% solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TGS Salt Identification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I did not find much about CNN stacking. Beyond simple averaging, most are creating images with probabilities as layers and optionaly original image. There is also coupling outputs of the last layers from frozen UNets. I'm wondering why there is not more papers on that. We know that stacking approach works.</p> <p>This is my fourth competition. I would like to share some insights of my solution that reached top 3%.</p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/inbox/698363/48c40b93efeab88ec038cc23accc1a0c/solution.png" alt="Solution"></p> <p><strong>Stage#1</strong>: I trained a set of UNets, mainly based on MobileNet encoder, with different RGB composite images at different scales. I selected MobileNet because it was fast to train and it gave good results during different experimentations. I tried various decoders including hyper-column, spatial and channel excitation. Augmentation was an important point: Crop, padding, translation with either border replicate or reflect, small rotation, medium scaling and intensity variations. Train iteration was important too: Each model required 100 epochs with a mix of BCE/DICE loss followed by another 100 epochs with LOVASZ loss. I used Adam with step LR. Using cyclic LR + best snapshots did not make real improvements. My CV5 stratification was based on both mask coverage and depth buckets. TTA was average of None + FlipLR.</p> <p><strong>Stage#2</strong>: Stacking approach from the probabilities given by stage#1 models. It gave better result (+0.006) than simple averaging of probabilities. It's just another simple UNet trained from scratch (easy, very fast, a few iterations with LOVASZ loss). I would expect to see more stacking for this competition as it's a regular approach in most competitions.</p> <p><strong>Stage#3</strong>: Post-processing: Find the best threshold, drop tiny masks and fill holes (though morphology operator)</p> <p><strong>What did not work</strong>:</p> <ul> <li>ResNet34 (whatever implementation in PyTorch or Keras). I guess it's either due to my stratification and/or different residual blocks implementations I tried.</li> <li>Dilated Residual Networks</li> <li>MaskRCNN (it was worth to try even for mainly one mask per image)</li> <li>External mask/no mask classifiers (not bad but not improve)</li> <li>CRF (worked only after weak models)</li> <li>Jigsaw full mask discovery from neighbors</li> </ul> <p><strong>What I liked</strong>: As for every competition I learned a lot. It's crazy to see how much knowledge and idea can be shared on the forum. Thanks to all! I learned a lot more on encoder/decoder CNN architectures. They are very powerful for segmentation and look to have much more potential from what we see in papers.</p> <p><strong>What I did not like</strong>: Pixel value=0 that could be either salt or non-salt found out at the end of competition by masters due to boundaries labeling. It is really confirmed? This one made me lose a lot of time when I tried to benefit from jigsaw; it looks obvious when displaying some salt domes but it was not compliant with data label definition. 
If confirmed, then the organizers should have mentioned it at the beginning (<strong>update</strong>: <a href="https://www.kaggle.com/c/tgs-salt-identification-challenge/discussion/66940#407928">not confirmed</a>). Also, I'm wondering what the real benefit would be for the organizers of having models built on puzzle pieces only and not on full images. Maybe just to hire people to work on a further, more useful model. </p> <p><strong>Conclusion</strong>: Excellent competition!</p>
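<p>A hedged OpenCV sketch of the stage-3 post-processing described above (threshold, drop tiny masks, fill holes with a morphological closing); the threshold, minimum size and kernel size are placeholders rather than the tuned values:</p> <pre>
import cv2
import numpy as np

def postprocess(prob, threshold=0.5, min_size=30):
    mask = (prob > threshold).astype(np.uint8)
    if mask.sum() < min_size:            # drop tiny masks entirely
        return np.zeros_like(mask)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # fill small holes
</pre>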
TGS Salt Identification Challenge
How not to win. 373rd place. A reflection, not a technical report.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TGS Salt Identification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks for the feedback :)</p> <p>This is largely not a technical report. It is an emotional reflection on what happened. My goal is not to talk about success and the factors contributing to a successful outcome, but about the factors that reduced my results.</p> <p><em>I fear that those to whom it may be useful do not read the forum, while those who read the forum already know it.</em></p> <h2>TL;DR</h2> <ol> <li>Focus on the data, not the model.</li> <li>Understanding the problem contributes more to finding a good solution than having a large amount of computing power.</li> <li>The framework is important, but not as much as you think when you are in the middle of the leaderboard.</li> <li>In the absence of experience, studying someone else's experience is more important than reinventing the wheel.</li> <li>Do not refuse to join a team (really?).</li> </ol> <h2>0. Technical report</h2> <p>My best models scored <strong>0.827</strong>, and were UNet-like with a pre-trained inception-resnet-v2/inceptionv3 encoder and a decoder with SE blocks. The training procedure was entirely inherited from public kernels: 10-30 BCE epochs, then Lovasz loss until the model started to overfit. I tried various learning-rate procedures: reducing according to a schedule, decreasing when hitting a plateau, changing cyclically (a triangular schedule). I tried training the model with SGD / Adam / Nadam optimizers.<br> The original input image is rescaled to 128x128. Halftones arising on the mask were rounded at a threshold of 0.4. Enlarging the input through reflection or zero padding did not work at all. I used D4 augmentation. Augmentation that reflects the image horizontally reduced quality, so I dropped it. The final public score is <strong>0.836</strong>.</p> <h2>1. Focus on the data, not the model.</h2> <p>I was actively engaged in the task in the last 2 weeks. Most of this time was spent on tuning parameters of the above models. I saw reports on the usefulness of Hypercolumn, Deep Supervision and the non-empty classifier, but ignored them. <br> I was picturing the moment when I would make “my-super-version-0.86-ResNet34” without the modifications mentioned above.<br> One can invent a lot of modifications to a baseline model. It now seems that the non-empty classifier and bounded prediction are useful modifications. Not because they are always good, but because in this specific task they exploit the properties of the task. It is important to be able to choose the right next modification, especially under a lack of time and computing power. To do this correctly, you need to understand the task.<br> If you spend your energy on finding hyperparameters of a not very strong model, then in the end you get the best of the worst models and no understanding of what the next step should be. The key to understanding the next step is to understand the specifics of the task. 
This particular competition was not a “simple image segmentation” competition; it had features that I chose to ignore, solving “simple image segmentation” instead.<br> <img src="https://storage.googleapis.com/kaggle-forum-message-attachments/inbox/203630/791288172b9ce2834fada05d6ae88ee2/tumblr_l3yuhnAPpJ1qz8ohs.gif" alt="enter image description here"></p> <h2>2. Understanding the problem contributes more to finding a good solution than having a large amount of computing power.</h2> <p>I found myself in a local-maximum situation with a “nice unet-like model”, and I did not understand what the right next step should be. I tried to resolve the issue with brute force.<br> Actually, my workstation has a GTX 1080Ti GPU. I could check 1-3 model hypotheses per day. Therefore, I tried to act on another strange delusion. In the last week, I rented a lot of powerful GPUs to evaluate many different models. It turned out to be easier than letting the competition task sink into my soul. I rented 4 Tesla P100 GPUs on Google Cloud and trained 23 different variations of my best model. Obviously, the result was not radically better. The single-model performance was still around 0.827. The blended result had a score around 0.834. In retrospect, it is obvious that changing the architecture of the model and/or working with augmentations would give a greater increase than burning money in the cloud.</p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/inbox/203630/a9d371e4e4029669f74342fe157f8d27/zoj.png" alt="enter image description here"><br> <em>This picture is a frame from the video clip “Ленинград - ЗОЖ” (Leningrad - Healthy Lifestyle). The ironic song tells that in the absence of the mind, the application of any amount of effort will not bring results.</em></p> <h2>3. The framework is important, but not as much as you think when you are in the middle of the leaderboard.</h2> <p>I used Keras. However, if you looked at the forum in those days, it would seem that the success of the people at the top is ensured by the fact that they use FastAi/PyTorch. Maybe the difference between 1st and 20th place can be explained by the difference in the “quality of the framework”. But the difference between 20th and 400th place is not about the “quality of the framework”, but about what is implemented in the chosen framework. I saw the argument that rewriting from Keras to PyTorch resulted in an improvement from 0.840 to 0.860, and it looks impressive.<br> However, I think that having 0.827, you must first improve the score to 0.840, and only then rewrite the pipeline in another framework. Rewriting from one framework to another with low current results is simply a waste of time.</p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/inbox/203630/960a17d9c68fd8872727ee4c0a54c96d/011939.jpg" alt="enter image description here"><br> <em>This picture is an illustration for Krylov's fable “Quartet”. It is a fable in which not very talented animals try to improve the sound of their orchestra by rearranging its members instead of improving their skill at playing the instruments.</em></p> <h2>4. In the absence of experience, studying someone else's experience is more important than reinventing the wheel.</h2> <p>Before you start writing, learn to read. First, figure out what others are doing, and only then spend time optimizing and rewriting. This was my first segmentation competition. I knew almost nothing about architectures, loss functions and useful techniques. 
Instead of spending time studying the specifics of the competition, I spent time on not very meaningful experiments.</p> <h2>5. Do not refuse to join a team.</h2> <p>By the merge deadline I had a single model with a score of 0.817 and an ensemble with a score of 0.825.<br> I was invited to join the “[ods.ai] The Hateful Six” team (currently 108th, congratulations to them). At that time, this team had an average score of 0.834 and was open to dialogue. However, I refused for the following reason: I considered that I had a rather modest result and I did not want to feel like a passenger “on the train”. In hindsight, with the baggage of knowledge I had then (and have now), the right move would have been to join the team and find out that my colleagues had already checked most of the “amazing ideas”. And in the last days we could have burned money in the cloud on a team solution, and not on the misunderstanding that I finally got.</p> <h2>Summary</h2> <p>Kaggle takes a lot of energy. You need to spend it on the task, not on the model, the framework, or cloud computing. Perhaps it is worth participating in contests whose topic is close and interesting to you, rather than in every contest in general. It will be easier for you to immerse yourself in a task if you are interested in it at the domain level.<br> That's all. I think I said a lot of obvious things. I want to express my great gratitude to everyone who shared their experience in this competition (there will be something to study in the coming week). Good luck to everyone and see you in new contests.</p>
TGS Salt Identification Challenge
My part of 15th place solution with code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TGS Salt Identification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I calculate topology-aware loss using mean square errors between the VGG19 descriptors (feature maps of middle layers in a pretrained VGG19 model) of ground-truth masks and predicted probability maps. (<a href="https://github.com/adam9500370/Kaggle-TGS/blob/2fe071415ad81442b8943100e8f909bc22624457/train.py#L129-L142">https://github.com/adam9500370/Kaggle-TGS/blob/2fe071415ad81442b8943100e8f909bc22624457/train.py#L129-L142</a>)</p> <p>First of all, I would like to thank many kagglers, especially <a href="https://www.kaggle.com/hengck23">Heng</a>, <a href="https://www.kaggle.com/pestipeti">Peter</a> and my teammate <a href="https://www.kaggle.com/atom1231">atom1231</a>, for sharing awesome and detailed ideas!</p> <p>This is my first kaggle competition, I have learnt a lot from this competition.</p> <p>I am glad that I can join this competition, and very lucky to get a good result.</p> <p>My PyTorch code: <a href="https://github.com/adam9500370/Kaggle-TGS">https://github.com/adam9500370/Kaggle-TGS</a></p> <p>The scores of my best 10-fold model by training from scratch: 0.887 on private LB / 0.872 on public LB.</p> <h2>Model Design</h2> <p>A small U-shape model (~8MB) with</p> <ul> <li>Pyramid pooling module, (dilated) residual blocks and auxiliary training</li> <li>Spatial and channel squeeze &amp; excitation module</li> <li>Guided upsampling module</li> </ul> <h2>Training</h2> <ul> <li>Input size: 129x129 with reflect padding</li> <li>Batch size: 40</li> <li>Augmentation <ul><li>Random horizontally flip</li> <li>Random translate with reflect padding</li> <li>Random crop and resize</li></ul></li> <li>Optimizer: Adam</li> <li>Weight decay: 1e-4</li> <li>Initial learning rate: 1e-3</li> <li>Total loss function <ul><li>1.00 Cross-entropy loss</li> <li>1.00 Lovász-softmax loss</li> <li>0.05 Topology-aware loss</li></ul></li> <li>10-fold CV (random split)</li> </ul> <h2>Final prediction</h2> <ul> <li>Mean of the probability map from all 10-fold models (TTA with horizontally flip)</li> <li>Threshold of salt probability: 0.5</li> </ul> <h2>Progress</h2> <ul> <li>U-shape network with pyramid pooling module, (dilated) residual blocks and auxiliary training by cross-entropy and Lovász-softmax loss <ul><li>Step LR decay by 0.5 every 50 epochs for first 200 epochs</li> <li>37s/epoch on single 1080TI</li> <li>0.870 on private LB / 0.852 on public LB</li></ul></li> <li>+Topology-aware loss <ul><li>Cosine annealing with restarts LR for 400 epochs (8 cycles)</li> <li>50s/epoch on single 1080TI</li> <li>0.877 on private LB / 0.859 on public LB</li></ul></li> <li>+Spatial and channel squeeze &amp; excitation module <ul><li>Cosine annealing with restarts LR for 1400 epochs (14 cycles)</li> <li>54s/epoch on single 1080TI</li> <li>0.886 on private LB / 0.871 on public LB</li></ul></li> <li>+Guided upsampling module <ul><li>Cosine annealing with restarts LR for 200 epochs (1 cycle)</li> <li>76s/epoch on single 1080TI</li> <li>0.887 on private LB / 0.872 on public LB</li></ul></li> </ul>
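<p>Based on the description of the topology-aware loss above (MSE between VGG19 feature maps of the ground-truth mask and of the predicted probability map), a hedged PyTorch sketch could look as follows; the choice of cut-off layer is my assumption, not the author's setting:</p> <pre>
import torch.nn as nn
import torchvision

class TopologyAwareLoss(nn.Module):
    def __init__(self, cut=16):
        super().__init__()
        vgg = torchvision.models.vgg19(pretrained=True).features[:cut]
        for p in vgg.parameters():
            p.requires_grad = False
        self.features = vgg.eval()
        self.mse = nn.MSELoss()

    def forward(self, prob, target):
        # prob and target: (B, 1, H, W) tensors in [0, 1]; replicate to 3
        # channels so the ImageNet-pretrained VGG accepts them.
        return self.mse(self.features(prob.repeat(1, 3, 1, 1)),
                        self.features(target.repeat(1, 3, 1, 1)))
</pre>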
TGS Salt Identification Challenge
My part of 8th place (Private LB) solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TGS Salt Identification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Experimental way :) Adding BCE with a small weight allowed the model to reach a good gradient flow at the beginning of training. I also tried adding Jaccard Loss, but I was not able to reach the same performance. </p> <p>I was able to get about a 0.002-0.004 LB improvement using this pre-processing.</p> <p>thank you!</p> <h2><strong>1. Image pre-processing:</strong></h2> <p>Input: 224x224 resize. Initially, I was using padding to 128x128 resolution, but it appeared that resize works better in my pipeline. </p> <p>Preprocessing: because of a random shift in brightness (discovered from the jigsaw puzzle), all images were pre-processed in the following way: <strong>np.clip(img - np.median(img) + 127, 0, 255)</strong></p> <p>Augmentations: random gamma, brightness, shift, scale, rotate, horizontal flip, contrast</p> <h2><strong>2. Model design:</strong></h2> <ul> <li>Best performing backbones: SeNet154, SeResNext101, SeResNext50, DPN92 (from top to bottom)</li> <li>Decoder: U-Net like decoder with ScSe, CBAM and Hypercolumn</li> </ul> <h2><strong>3. Training:</strong></h2> <ul> <li><p>5-fold CV</p></li> <li><p>Batch size: 8</p></li> <li><p>Adam for 200 epochs.</p></li> <li><p>LR schedule: 0.0001 -- 100 epochs, 0.00001 -- 100 epochs (cycle #1)</p></li> <li><p>CycleLR for 40 more epochs with a 10-epoch cycle and the rmsprop optimizer (cycles #2, #3, #4, #5).</p></li> </ul> <p>On each training cycle, one checkpoint was made. </p> <ul> <li>Hard example mining was performed as well.</li> </ul> <h2><strong>4. Loss:</strong></h2> <p>Lovasz Hinge*0.9 + BCE*0.1</p> <h2><strong>5. Ensembling all together:</strong></h2> <p>The simple mean of the predictions (TTA with hflip) appeared to work the best.</p> <p>Training and inference of the SeNet154-based model take about 6 days on a single 1080ti GPU. The single model trained in this way without any post-processing gets 0.891 Private LB and 0.876 Public LB. </p> <h2><strong>6. Post-processing.</strong></h2> <ul> <li>In the last days we applied post-processing based on the jigsaw puzzle. It improved our Public score, but Private remained the same.</li> </ul> <p><strong>Framework:</strong> PyTorch</p> <h2><strong>What didn't work:</strong></h2> <ul> <li>SGD</li> <li>ResNet34 :) </li> <li>Depth usage</li> </ul> <h2><strong>What I didn't try:</strong></h2> <p>Deep Supervision, Mean Teacher, OCModule, Pseudo Labels and Data Distill.</p> <p>I did try adding Jaccard loss, but it didn't improve the performance. I wanted to try further model training with Lovasz Loss only, but decided to focus more on model architecture tuning. </p>
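<p>As an illustration of the pieces above, here is a small sketch of the brightness normalisation and the weighted loss. It assumes masks and logits of shape (N, 1, H, W) and that <code>lovasz_hinge</code> from Maxim Berman's public LovaszSoftmax implementation is available locally; it is not the author's exact code.</p> <pre><code>import numpy as np
import torch.nn as nn
from lovasz_losses import lovasz_hinge  # assumed: bermanmaxim/LovaszSoftmax

def normalize_brightness(img):
    # median-centering described in section 1
    return np.clip(img - np.median(img) + 127, 0, 255).astype(np.uint8)

bce = nn.BCEWithLogitsLoss()

def seg_loss(logits, masks):
    # Lovasz Hinge * 0.9 + BCE * 0.1, both computed on raw logits (section 4)
    return 0.9 * lovasz_hinge(logits.squeeze(1), masks.squeeze(1)) + 0.1 * bce(logits, masks)
</code></pre>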
TGS Salt Identification Challenge
Noob Solution (#136) with Keras code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TGS Salt Identification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I shared my best model code: <a href="https://github.com/jfpuget/Kaggle_TGS">https://github.com/jfpuget/Kaggle_TGS</a></p> <p>Given the number of points (about 40 millions) I just ran sklearn logistic regression with its default L2 regularization. I did ran a k fold lr at some point to check if there was any overfit, but there wasn't.</p> <p>Sure, but I benefited from all that was shared so far. Spending 10 days at the end of a competition yields way better results than spending 10 days at the start.</p> <p>I was a noob in deep learning when I started this competition. I learned so much that I may have improved now!</p> <p>First of all, many thanks to Heng, Peter, qubvel, Alexander Liao, Jack Shao, and many others who shared so much. Pardon me if I don't cite you. I learned a lot from you all!</p> <p>I won't pretend I used anything clever here, I mostly used Heng's ideas, implemented them in Keras. But there are a few things I did that may be of interest, see below. And I totally missed the jigsaw puzzle.</p> <p>First, I used Heng's general approach:</p> <ul> <li><p>a decoder based on Jack Shao's kernels, modified for 128x128 images</p></li> <li><p>unet</p></li> <li><p>use middle layer to feed an empty mask classifier</p></li> <li><p>use each upsample block output to feed a side output. I down sampled ground truth for their loss.</p></li> </ul> <p>I used my own decoder without pretrained weights for all the competition until last day where I used pretrained resnet24 and unet from @qubvel git repo. I did not had time to train more powerful encoders like se-resnext50 that was suggested to me. Resnet34 encoder was 0.005 better than mine. </p> <p>I tried to add my own twists to the above. All twists are designed to lead to more robust models. The high variability in model predictions over fold was worrying me.</p> <p><strong>Loss:</strong> Trained first with a variant of focal loss with gamma = 1, and alpha computed image per image. This proved to be more robust than a fixed alpha. I then completed with Lovasz Loss.</p> <p><strong>Folds</strong> Never submitted a single fold as it is easy to get fooled by a seemingly good result on public LB. I always averaged the sigmoid of all fold models predictions.</p> <p><strong>TTA</strong> Used some vertical dilatation up to 30% in addition to flip. This improved my LB by 0.003 or so. Images were first tta then upsampled to 202x202 and padded to 256x256 for resnet34, and padded to 128x128 for my encoder.</p> <p><strong>Vertical masks</strong> I removed them from the train data of each fold, but kept them in the validation data. Same for images with very small masks.</p> <p><strong>Output layer</strong> Heng's idea is to use <code>sigmoid(empty_class) * sigmoid(segmentation)</code> This looks fine but it creates numerical issues when using Lovasz loss where one takes the logit of it. I rather output <code>empty_class + segmentation</code>. This is much better from a numerical point of view. </p> <p><strong>Ensembling</strong> I used a logistic regression on the outputs of 4 models. 
My outputs are logits, and logistic regression does a nice job of recalibrating outputs so that they blend well. This gives a 0.005 boost over the best model, and 0.01 over the average LB score of the models. It is also better than averaging the models' outputs. I tried some more complex ensembling models, but ran out of time. I think there is room to do a bit better than logistic regression. After all, we get about 40 million (4000x101x101) points to train ensembling models.</p> <p>That's it I guess. My conclusion is that I will give Pytorch a try next time! Keras proved to be a bit cumbersome for adding SE blocks within an existing model.</p> <p>Edit: I shared my best model code <a href="https://github.com/jfpuget/Kaggle_TGS">https://github.com/jfpuget/Kaggle_TGS</a></p>
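<p>A minimal sketch of the pixel-level logistic-regression ensembling described above: every pixel becomes one training point with one feature per model. The array names and shapes below are hypothetical; the author's actual pipeline may differ in details.</p> <pre><code>import numpy as np
from sklearn.linear_model import LogisticRegression

# hypothetical shapes: 4 models, OOF pixel logits for 4000 train images of 101x101
# oof_logits:  (4, 4000, 101, 101), train_masks: (4000, 101, 101) with values in {0, 1}
# test_logits: (4, n_test, 101, 101)

def fit_pixel_blender(oof_logits, train_masks):
    X = oof_logits.reshape(oof_logits.shape[0], -1).T   # (~40M pixels, 4 features)
    y = train_masks.reshape(-1)
    blender = LogisticRegression()                       # default L2 regularization
    blender.fit(X, y)
    return blender

def blend_test(blender, test_logits):
    n_models, n_imgs, h, w = test_logits.shape
    probs = blender.predict_proba(test_logits.reshape(n_models, -1).T)[:, 1]
    return probs.reshape(n_imgs, h, w)
</code></pre> <p>The recalibration comes from the fact that logistic regression maps the weighted sum of logits back through a sigmoid, so models with different confidence scales are put on a common footing before averaging.</p>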
TGS Salt Identification Challenge
[126th place] simple solution overview with code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TGS Salt Identification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I was able to give this competition relatively little time and my solution is very simple. I trained an incv4-based unet on 10 folds, picked the best-performing 6 and averaged. I also submitted another solution which was an average of three models (resnet34, se_resnext50 and incv4) trained over 5 folds. </p> <p>My best performing model was an incv4-based unet trained on 202x202 images with 27px of reflect padding on each side. I used SE layers and hypercolumns. I predicted with flip TTA. I trained using SGD, BCE -&gt; lovasz loss and the 1cycle policy. I used <a href="https://github.com/albu/albumentations">albumentations</a> for minimal augmentation and <a href="https://github.com/fastai/fastai">fastai v1</a> for training.</p> <p>I shared my code <a href="https://github.com/radekosmulski/tgs_salt_solution">here</a>.</p> <p>Special thanks to <a href="/hengck23">@hengck23</a> and <a href="/pestipeti">@pestipeti</a> - you shared some great, very useful information. This competition would not have been even remotely as fun or informative as it was for many (including myself) if it was not for your posts. </p>
TGS Salt Identification Challenge
[Argus] 14th place solution with full PyTorch pipeline
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TGS Salt Identification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The 14th place solution by Argus team (<a href="https://www.kaggle.com/romul0212">Ruslan Baikulov</a>, <a href="https://www.kaggle.com/nikolasent">Nikolay Falaleev</a>).</p> <h3>Data preprocessing</h3> <p>The original 101x101 px images were padded to 148x148 px with biharmonic inpainting from the <code>skimage</code> package. This “padding” performed better for us than reflection or zero padding.</p> <p>Random crops to the input size of 128x128 px, flips in the left-right direction and random linear color augmentation (for brightness and contrast adjustment) were applied.</p> <h3>Model design</h3> <p>After a series of experiments, we ended up with a U-Net like architecture with an SE-ResNeXt50 encoder. Standard decoder blocks were enriched with custom-built FPN-style layers. In addition to the segmentation task, an additional classification branch (empty/contains-salt tile) was added to the basic network architecture.</p> <p><img src="https://nikolasent.github.io/assets/post14/arch2.png" alt="Model design"></p> <h3>Models Training</h3> <ul> <li>Loss: Lovasz hinge loss with elu + 1. See details <a href="https://github.com/lRomul/argus-tgs-salt/blob/9ab90c4e823db7ac65ef006d7ec121565d8117c2/src/lovasz.py#L110">here</a></li> <li>Optimizer: SGD with LR 0.01, momentum 0.9, weight_decay 0.0001</li> <li>Train stages: <ol><li>EarlyStopping with patience 100; ReduceLROnPlateau with patience=30, factor=0.64, min_lr=1e-8; Lovasz * 0.75 + BCE empty * 0.25.</li> <li>Cosine annealing learning rate 300 epochs, 50 per cycle; Lovasz * 0.5 + BCE empty * 0.5.</li></ol></li> </ul> <h3>Post-processing</h3> <p>The averaged results of two training runs were used in the final submission:</p> <ul> <li>SE-ResNeXt50 on 5 random folds.</li> <li>SE-ResNeXt50 on 6 mosaic-based folds (similar mosaic tiles placed in the same fold) without the second training stage.</li> </ul> <p>Mosaics-based post-processing. We used <a href="https://www.kaggle.com/vicensgaitan">Vicens Gaitan’s</a> <a href="https://www.kaggle.com/vicensgaitan/salt-jigsaw-puzzle">Kernel</a>, but not on the raw input dataset: we ran it on images after histogram matching to an average histogram, which helped us to assemble more tiles into mosaics. In addition to extrapolating tiles with vertical masks from the train subset onto neighbouring tiles, we performed automatic detection of small missed corners and inpainted them with a polygon with a smooth-curve boundary. Holes in masks were also filled with OpenCV.</p> <h3>Results</h3> <p><img src="https://nikolasent.github.io/assets/post14/postprocess.png" alt="Sample predictions"></p> <p><em>Example of the whole mosaic post-processing. Green/blue - salt/empty regions from the train dataset; red - predicted mask; yellow - inpainted by the post-processing (used in the final submission).</em></p> <p>The results are based on a step-by-step improvement of the pipeline, postprocessing, and fair cross-validation. Finally, results were achieved by carefully selected architectures without heavy ensembles of neural nets and second order models.
Reasonable cross-validation with the evaluation metric prevented us from overfitting on the public leaderboard.</p> <p>For more details on data pre- and post-processing, as well as conducted experiments with neural nets, check out a blog <a href="https://nikolasent.github.io/deeplearning/competitions/2018/10/24/Semantic-Segmentation-of-Seismic-Reflection-Images.html">post</a>.</p> <p>The solution code is available on <a href="https://github.com/lRomul/argus-tgs-salt">Github</a></p>
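<p>A short sketch of the biharmonic-inpaint padding, assuming grayscale 101x101 tiles; the exact padding geometry in the Argus pipeline may differ.</p> <pre><code>import numpy as np
from skimage.restoration import inpaint_biharmonic

def pad_with_inpaint(img, target=148):
    """Pad a 101x101 tile to target x target and fill the border by biharmonic inpainting."""
    h, w = img.shape
    top, left = (target - h) // 2, (target - w) // 2
    padded = np.zeros((target, target), dtype=np.float64)
    padded[top:top + h, left:left + w] = img
    unknown = np.ones((target, target), dtype=bool)      # True where pixels must be inpainted
    unknown[top:top + h, left:left + w] = False
    return inpaint_biharmonic(padded, unknown)
</code></pre>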
TGS Salt Identification Challenge
[Kent AI Lab | Ding Han | Renan] 5th place solution (code available)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TGS Salt Identification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I don't think so. We implemented our own version.</p> <p>Yep. You can also graph them out, I'm sure there's a way you can do this automatically in TensorboardX.</p> <p>I've only concatenated the lower level features in the hypercolumn. I did not use multi-scale loss, multi-stage loss (classification/segmentation) or multi-stage training between empty and nonempty images.</p> <h2><strong>Update:</strong></h2> <p>Code is now available at <a href="https://github.com/liaopeiyuan/ml-arsenal-public">https://github.com/liaopeiyuan/ml-arsenal-public</a></p> <h2><strong>A Few Words Ahead:</strong></h2> <p>There are three people I would like to thank: <a href="https://www.kaggle.com/hengck23">Heng</a> for his amazing start code and threads, <a href="https://www.kaggle.com/bestfitting">bestfitting</a> for his detailed winner solution for past competitions, and <a href="https://www.kaggle.com/vicensgaitan">Vicens Gaitan</a> for his amazing salt jigsaw kernel. You guys should be given a prize for such amazing contributions.</p> <h2><strong>TL; DR:</strong></h2> <p>Don't trust CV, trust stability; classification+segmentation; scSE+Unet+hypercolumn; lovasz+cosine annealing+snapshots</p> <h2><strong>Data Features:</strong></h2> <p>We realized that this <em>is not a naive segmentation problem.</em> Why? Cause for a typical segmentation problem, say, scene parsing, it is rare to have "all background" annotations. But here, we have 39% empty masks. So all tricks that may apply to semantic segmentation <strong>cannot be used directly.</strong> Unlike a lot of top kagglers who used Heng's Deeply Supervised Network, we've decided to go simple: Treat this as a classification + segmentation problem. We did try to implement Deep Supervision, but it didn't help much. Of course later, after we've assembled the mosaic back together, we noticed the boundary details.</p> <p><strong>Classification:</strong> We first tried to train a binary classifier that predicts whether an image contains salt boundaries. And the way to test our classifier's performance is using the method suggested by Heng: for every empty mask, submit "1 2"; otherwise submit " ". If the classifier is perfect, the LB should be 0. </p> <p>We tried a wide range of classifier architectures, from ResNets to Densenets, but none of them works better than using the segmentation, trained on all images, used only for binary classification. For our final two judging submissions, we uses the following two binary predictions:</p> <p>Majority voted Resnet34-Unet with scSE and hypercolumn (will be explained later)+SE-ResNeXt50 w. OC Module+ ResNeXt w.o. OC Module (LB 0.869, binary classification LB 0.010)</p> <p>Majority voted (inside fold) SE-ResNeXt50 w. OC Module+ ResNeXt w.o. 
OC Module (LB 0.865, binary classification LB 0.012)</p> <h2><strong>Segmentation:</strong></h2> <p>Our model is pretty much like the models used by top kagglers, aka spatial+channel wise excitations, hypercolumn concatentaions and lovasz softmax loss, except for a few noticeable differences:</p> <ol> <li>we used elu+1 instead of relu for lovasz loss; validation loss was as high as 1.xx when training loss was 0.3x, but turned out to be fine</li> <li><p>we altered the features used in the hypercolumn:</p> <pre><code>f = torch.cat(( F.upsample(e1,scale_factor= 2, mode='bilinear',align_corners=False), d1, F.upsample(d2,scale_factor= 2, mode='bilinear',align_corners=False), F.upsample(d3,scale_factor= 4, mode='bilinear',align_corners=False), F.upsample(d4,scale_factor= 8, mode='bilinear',align_corners=False), F.upsample(d5,scale_factor=16, mode='bilinear',align_corners=False), ),1) </code></pre></li> <li><p>we have slightly different scSE implementations</p></li> <li>we implemented Object Context Module and added it to the SE-ResNeXt50</li> </ol> <p>Since we were getting good results on shallower networks, we didn't bother to go deeper. Seems like we are the only team still using ResNet34. The model size for 34 is around 130 MB, and for OCnet it's a little bit over 250 MB. Now looking back, we should've tried SEnet154 and Se-RexNext101, etc.</p> <h2><strong>Training:</strong></h2> <p>Thanks to peter, we implemented the cosine annealing lr with tuned hyper parameters, updated per batch instead of per epoch:</p> <pre><code>scheduler = lambda x: (0.01/2)*(np.cos(PI*(np.mod(x-1,300*1000/30)/(300*1000/30)))+1) </code></pre> <p>We also took advantage of the snapshot ensemble method, which gave us more models with the same amount of time. The most important detail, however, would be that after training on 128 for 6 cycles, we fine-tuned on 256*256 for another 7 cycles. OCnet training is more tricky, and in total, we had 200 models, including 10 folds and different snapshots.</p> <h2><strong>Model selection:</strong></h2> <p>Since we had too many models, we were afraid that some might cause overfitting. Thus, we did something similar to bestfitting's approach in Planets competition: we created simulated public/private dataset from the training set in random 34:66 split, test our models on different splits and calculated the simulated shakeup. After simple feature engineering of different characteristics of the simulated shakeups, we selected the most stable 100 from the 200 models.</p> <h2><strong>Post-processing:</strong></h2> <p>We weren't fancy on model ensemble, so we simply did an arithmetic mean on sigmoids (geometric mean didn't change much). For the threshold, we again used bestfitting's approach from Planets competition: the best threshold is the one that makes the train and test distribution the same. We used KL divergence and a bin number of 20. If we were too close this would've stabilized our results, but according to the private LBs, this approach didn't help much.</p> <h2><strong>Jigsaw (didn't help):</strong></h2> <p>We designed around 30 rules that gave a 0.010 public LB boost. The idea is like a meta-convolutional kernel. For each element in the kernel, we implemented a function that returns a binary value give certain characteristics of the patch. 
The kernel (not necessarily square) then is applied to each image, and if anything hits, the code proceeds to do mask generation/elimination/alternation.</p> <h2><strong>Split of work:</strong></h2> <p>Kent AI Lab: we are a New-England based, student-operated organization that focuses on multidisciplinary AI research, competitive data science, and high-performance computing. <a href="https://www.kaggle.com/somnisight"></a><a href="/somnisight">@somnisight</a>, <a href="https://www.kaggle.com/jiajunmao"></a><a href="/jiajunmao">@jiajunmao</a>, <a href="https://www.kaggle.com/bofanxu"></a><a href="/bofanxu">@bofanxu</a> and I are founders of the lab. This is the third time we are working together, and Tenniel, Aaron and I are excited to gain the Competition Master Status as high-schoolers. Though during the competition, the progress is regularly discussed between us, I'm performing most of the experiments. I'm responsible for post-processing, shakeup simulation, and the design of Resnet34-Unet.</p> <p>Renan is responsible for the DeepLabV3+ model, deep supervision, binary classifier and majority voting.</p> <p>Ding Han is responsible for the design of SE-ResNext50 with Object Context Module as well as threshold optimization.</p>
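<p>The threshold-selection trick mentioned above (pick the threshold that makes the train and test mask-coverage distributions match, measured with KL divergence over 20 bins) can be sketched as follows; the candidate grid and smoothing constant are assumptions.</p> <pre><code>import numpy as np
from scipy.stats import entropy

def coverage_hist(masks, bins=20):
    cov = masks.reshape(masks.shape[0], -1).mean(axis=1)   # salt fraction per image (binary masks)
    hist, _ = np.histogram(cov, bins=bins, range=(0, 1))
    return hist / hist.sum() + 1e-8                        # smooth to avoid empty bins

def pick_threshold(train_masks, test_probs, bins=20):
    """Return the sigmoid threshold whose binarised test masks best match the train coverage distribution."""
    ref = coverage_hist(train_masks, bins)
    candidates = np.arange(0.30, 0.71, 0.01)
    kls = [entropy(coverage_hist(np.greater(test_probs, t).astype(np.uint8), bins), ref)
           for t in candidates]
    return candidates[int(np.argmin(kls))]
</code></pre>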
TGS Salt Identification Challenge
[Vishnu]: 21st place solution along with code using Fast.ai
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TGS Salt Identification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I made some changes to model.py inside fastai code. Updated Github with that code. Hope that helps.</p> <p>I guess by default it uses cosine annealing.</p> <p>Thanks.</p> <p>Thanks.</p> <p>Thanks.</p> <p>Thanks.</p> <p>I am not sure why it is not working. Try updating fastai or manually check if you can find senet.py inside models folder.</p> <pre><code>ds1,ds2,ds3,ds4 = self.ds1(d1),self.ds2(d2),self.ds3(d3),self.ds4(d4) </code></pre> <p>They are conv blocks applied to the decoder features to produce target at different scales, say 16*16*1, 32*32*1. Then you apply loss function on these different scales. </p> <p>Adam was initially working better than SGD. I tried SGD based on some of the comments in the discussion forum. It turned out to work better with a particular set of hyperparameters like lr, momentum, weight decay. It did not work initially as I was using the wrong hyper parameters. </p> <p>A special thanks to the amazing Kaggle community who are open to share ideas and help till the last minute of the competition. A special thanks to <a href="https://www.kaggle.com/hengck23">Heng</a> for his amazing ideas.The competition environment was different before Heng joined and after Heng joined the competition. He gave too many ideas to work on keeping us busy for the last 3 months. </p> <h2>Solution Code :</h2> <p><a href="https://github.com/svishnu88/TGS-SaltNet">Github link</a></p> <h2>Solution:</h2> <p>I started with a simple architecture which <a href="https://www.kaggle.com/jhoward">Jeremy Howard</a> showed in his <a href="http://course.fast.ai/lessons/lesson14.html">14th lesson</a> of fastai course and experimented with different ideas. ResNet34 worked the best for me, it was faster than the deeper architectures which allowed me train faster and test out new ideas. My final solution mainly consists of the below.</p> <ol> <li>I used <a href="https://github.com/Jongchan/attention-module">BAM: Bottleneck attention module</a> for my best model which performed slightly better(0.003) than scSE. </li> <li>Used hyper columns.</li> <li>Resize worked better than padding. I used 224 * 224 for my final model. Did initial experiments with 64*64 or 128*128 .</li> <li>Used SGD with momentum of 0.9 and wd of 1e-4.</li> <li>Used Cyclic learning rate. </li> <li>The final model is a 10 fold, where each model is run for 100 iterations. Moving from 5 fold to 10 fold gave me +0.002 boost. </li> </ol> <h2>Idea that did not work:</h2> <ol> <li>Object context. Was running into memory issues when trying OC.</li> <li>Deeper models like SeNet154, ResNext50, ResNet50 gave similar or slightly worse results than ResNet34.</li> <li>Was able to get Boundary based loss work an hour before the competition ended. Running it for 10 fold could have given a slightly better result.</li> <li>Use of depth in the intermediate layer.</li> <li>SWA did not give a meaningful improvement. May I tried it too early.</li> <li>Pseudo Labeling : Tried it in the last week. Could not get it to work, guessing some form of bug in my code.</li> </ol> <p>Thanks <a href="/sheriytm">@sheriytm</a>,<a href="/soonhwankwon">@soonhwankwon</a></p>
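<p>Since the deep-supervision heads (<code>ds1..ds4</code>) are only described in words above, here is a hedged PyTorch sketch of what such side outputs and their multi-scale loss can look like; the channel counts and the 0.5 weight are illustrative, not taken from the author's code.</p> <pre><code>import torch.nn as nn
import torch.nn.functional as F

class DeepSupervisionHeads(nn.Module):
    """1x1 conv heads on decoder features d1..d4, each predicting a coarse mask."""
    def __init__(self, channels=(64, 128, 256, 512)):     # assumed decoder channel sizes
        super().__init__()
        self.heads = nn.ModuleList([nn.Conv2d(c, 1, kernel_size=1) for c in channels])

    def forward(self, decoder_feats):
        return [head(f) for head, f in zip(self.heads, decoder_feats)]

def deep_supervision_loss(side_logits, mask, weight=0.5):
    """BCE against the ground-truth mask downsampled to each side output's resolution."""
    loss = 0.0
    for logits in side_logits:
        target = F.interpolate(mask, size=logits.shape[2:], mode="nearest")
        loss = loss + F.binary_cross_entropy_with_logits(logits, target)
    return weight * loss / len(side_logits)
</code></pre>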
TGS Salt Identification Challenge
fast.ai student's 264th place solution with code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TGS Salt Identification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>@Caleb, my pleasure, and appreciate your taking the time to check it out!</p> <p>@pete, awesome to be studying with you in the current course, and sounds good!</p> <p>Thanks for reading @Alex!</p> <p>Thanks a lot @William! I'm looking forward to our first lesson today -- am excited to see what v1.0 of the library will be like. And like @Vishnu, I'm hoping I can budget a lot of time toward practicing all the fundamentals : )</p> <p>I'm a current fast.ai student and the U-net architecture and training regime presented in the <a href="https://github.com/fastai/fastai/blob/master/courses/dl2/carvana-unet.ipynb">Carvana notebook</a> presented in <a href="http://course.fast.ai/lessons/lesson14.html">Lesson 14</a> of the 2018 edition of fast.ai's <a href="http://course.fast.ai/part2.html">Deep Learning Part 2</a> course served as the foundation upon which I built my solution.</p> <h3>My Solution Code</h3> <ul> <li><a href="http://nbviewer.jupyter.org/github/jamesdellinger/kaggle_tgs_salt_identification_challenge/blob/master/kernel_tgs_salt_sub6_unet-ish_resnet34_%28best%29.ipynb?flush_cache=true">Notebook on Github</a></li> </ul> <h3>Network architecture</h3> <ul> <li>U-net with a pre-trained ResNet34 bottleneck.</li> </ul> <h3>Training regime</h3> <ul> <li>Adam optimizer with weight decay.</li> <li>Progressive resizing of train/val image sizes from 64x64, to 128x128, and then finally to 256x256.</li> <li>At each image size, first train top layer group with the learning rate (lr=1e-2, in my case) found using the learning rate finder learn.lr_find().</li> <li>Then unfreeze batchnorm layers and weights at all the lower layer groups, and train them using discriminative fine-tuning (previous fast.ai students will know this technique as "differential learning rates"). Gradual unfreezing and discriminative fine-tuning are both mentioned in: <a href="https://arxiv.org/pdf/1801.06146.pdf">https://arxiv.org/pdf/1801.06146.pdf</a>.</li> <li>One-cycle training with momentum, as introduced by Leslie Smith in: <a href="https://arxiv.org/pdf/1803.09820.pdf">https://arxiv.org/pdf/1803.09820.pdf</a>. Sylvain Gugger gives a great intro in <a href="https://sgugger.github.io/the-1cycle-policy.html">https://sgugger.github.io/the-1cycle-policy.html</a>. Since I used fast.ai 0.7.x, I was unable to make use of the <a href="http://docs.fast.ai/training#Extending-training-with-callbacks">latest</a> one-cycle API found in fast.ai 1.x. I nonetheless was able to approximate it (everything but the cosine annealing) by using use_clr() with properly chosen hyperparameters. I found that use_clr() performed a bit better than use_clr_beta() for my architecture.</li> </ul> <h3>Important modifications</h3> <ul> <li>Images are more similar to top-down satellite imagery than the side-facing images that comprise most of imagenet, so use higher discriminative learning rates lrs = [lr/9,lr/3,lr], as opposed to the lrs = [lr/100,lr/10,lr] used in the Carvana notebook.</li> <li>The fast.ai 0.7.x default augmentations for satellite imagery didn't perform best for me when used during training. 
Thankfully, fellow fast.ai student <a href="https://www.kaggle.com/vishnus">Vishnu</a> <a href="https://www.kaggle.com/c/tgs-salt-identification-challenge/discussion/66765#394475">reported</a> that just using simple random horizontal flips did help improve performance both during training and TTA. This turned out to be true for me as well, and Vishnu's TestFilesDataset() <a href="https://github.com/svishnu88/TGS-SaltIdentification-Open-Solution-fastai/blob/master/Unet-Resnet34-Open%20Solution-FastAI.ipynb">class</a> was invaluable in my being able to also use these augmentations at TTA <strong>(+0.01 Private LB)</strong>.</li> <li>Unfortunately, fast.ai's learn.TTA doesn't work for this because afaik there's no way to force it to apply the same augmentation to <em>all</em> images in the test set, which is what I needed to do here to see an improvement. And even if I hadn't applied TTA to all test images, I would have still needed to know which images got flipped, so that I could flip back their predicted masks before averaging with the non-TTA predicted mask for the image. I don't believe the learn.TTA API currently supports this behavior, nor does it store the indices of test images that were randomly chosen to have TTA applied. </li> </ul> <h3>Other techniques that helped</h3> <ul> <li>For each training cycle, first run the cycle using binary cross-entropy with logits as the loss function, and then run the cycle again using Lovasz Hinge loss: Lovasz Hinge Loss: <a href="https://arxiv.org/pdf/1705.08790.pdf">https://arxiv.org/pdf/1705.08790.pdf</a>. I used the implementation from the author's [github].(<a href="https://github.com/bermanmaxim/LovaszSoftmax/blob/master/pytorch/lovasz_losses.py">https://github.com/bermanmaxim/LovaszSoftmax/blob/master/pytorch/lovasz_losses.py</a>).</li> <li>Adding Concurrent Spatial and Channel Squeeze &amp; Excitation blocks to the U-net decoder <strong>(+0.032 Private LB)</strong>: <a href="https://arxiv.org/pdf/1803.02579.pdf">https://arxiv.org/pdf/1803.02579.pdf</a>. I used the <a href="https://www.kaggle.com/c/tgs-salt-identification-challenge/discussion/66178">implementation</a> generously provided by <a href="https://www.kaggle.com/bkkaggle">Bilal</a>.</li> <li>Snapshot ensembling <strong>(+0.001 Private LB)</strong>: <a href="https://arxiv.org/pdf/1704.00109.pdf">https://arxiv.org/pdf/1704.00109.pdf</a>. I wrote a custom class to save and return the best N models from a single training cycle. This was inspired by the SaveBestModel() <a href="http://forums.fast.ai/t/a-code-snippet-to-save-the-best-model-during-training/12066">class</a> written by <a href="https://forums.fast.ai/u/iskode/summary">Ismaël Koné</a>.</li> <li>Depth-statified n-fold training <strong>(+0.01 Private LB)</strong>. For me, 8-fold scored slightly higher than 5-fold <strong>(+0.001 Private LB)</strong>.</li> </ul> <h3>What didn't work for me</h3> <ul> <li>Hypercolumns: <a href="https://arxiv.org/pdf/1411.5752.pdf">https://arxiv.org/pdf/1411.5752.pdf</a>.</li> <li>Many folks(<a href="https://www.kaggle.com/c/tgs-salt-identification-challenge/discussion/64645#380301">Heng</a>, et al) reported anywhere from a public LB +0.01 to +0.005 improvement after concatenating hypercolumns to their decoder's output. 
My attempt led to a lower score, and because I found it to be computationally expensive on my Paperspace P5000 machine, I didn't bother seeing if I could troubleshoot.</li> </ul> <h3>Would try if I had more time</h3> <ul> <li>Experiment further with other pre-trained bottlenecks such as SEResNeXt50, which Vishnu initially <a href="https://www.kaggle.com/c/tgs-salt-identification-challenge/discussion/66765#394475">reported</a> as performing a bit better than ResNet34 for him. </li> <li>Incidentally, through copious trial and error, I actually got a fast.ai U-net successfully working with an SEResNeXt50 encoder the night before the competition ended. </li> <li><a href="http://nbviewer.jupyter.org/github/jamesdellinger/kaggle_tgs_salt_identification_challenge/blob/master/kernel_tgs_salt_sub6_unet-ish_seresnext50.ipynb">My initial performance with SEResNeXt50</a> was promising (seeemed to overfit less than ResNet34). But I didn't have enough time to ensure that my network architecture was ideal. I basically just ran with ver. 1.0 of whatever structure helped me to finally stop getting mis-matched channel/dimension errors.</li> <li>I kind of just ran with random horizontal flips as my sole training and test-time augmentation when I saw that doing so performed better than the default satellite augs in fast.ai. I wish I had taken time to explore one-by-one whether a more diverse set of augmentations used solely during training would have increased my CV score.</li> <li>Finally, would have been nice to <a href="https://www.kaggle.com/c/tgs-salt-identification-challenge/discussion/68435">explore deep supervision</a>. Since apparently, <a href="https://www.kaggle.com/c/tgs-salt-identification-challenge/discussion/68949#406152">images containing all salt were labeled as having no salt</a>.</li> </ul> <h3>Thanks</h3> <ul> <li>I am indebted to the comments and contributions of fast.ai students/alums <a href="https://www.kaggle.com/vishnus">Vishnu</a>, <a href="https://www.kaggle.com/maw501">Mark Worrall</a>, <a href="https://www.kaggle.com/hortonhearsafoo">William Horton</a>, and <a href="https://www.kaggle.com/radek1">radek</a>, along with several experienced Kagglers like <a href="https://www.kaggle.com/hengck23">Heng</a> and <a href="https://www.kaggle.com/alexanderliao">Alexander Liao</a>.</li> <li>Their kind and positive tone on the forums, coupled with their incredibly valuable contributions, made it possible for me get my solution up to its current level of performance. My only regret is that I was too shy to jump into the conversations on the threads, myself. I'll do a much better job in this area in my next competition.</li> <li>Finally, I am thankful to <a href="https://twitter.com/math_rachel">Rachel Thomas</a> and <a href="https://www.kaggle.com/jhoward">Jeremy Howard</a> for all they've done with fast.ai: the library, the courses, the community, the blog, everything.</li> <li>I am enrolled in the fast.ai <a href="https://www.usfca.edu/data-institute/certificates/deep-learning-part-one">course</a> that begins this coming Monday, Oct 22, 2018. After binging through all of last year's version of the course on YouTube over the previous five weeks, I am looking forward to taking the time to get a much firmer grounding in the fundamentals of deep learning on a week-by-week basis.</li> <li><p>Rachel and Jeremy's dedication to their goal of making neural nets uncool again always inspires me and gives me hope. 
This <a href="https://twitter.com/jeremyphoward/status/1050423527537442817">tweet</a> of Jeremy's sums it up well:</p> <p><code> Every week I get asked "what's fast.ai's business model?" Our business model is: we spend our personal money to do stuff that helps people use deep learning. That's it. We're totally independent. We don't even take grant money. </code></p></li> </ul>
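<p>The flip-TTA behaviour described above (flip every test image, predict, flip the predicted mask back, then average) is easy to express outside of <code>learn.TTA</code>; here is a minimal, framework-agnostic PyTorch sketch, not the author's notebook code.</p> <pre><code>import torch

@torch.no_grad()
def predict_with_flip_tta(model, images):
    """Average the prediction on the original batch and on a horizontally flipped copy."""
    model.eval()
    probs = torch.sigmoid(model(images))
    flipped = torch.flip(images, dims=[-1])                               # flip along the width axis
    probs_flipped = torch.flip(torch.sigmoid(model(flipped)), dims=[-1])  # un-flip the predicted mask
    return 0.5 * (probs + probs_flipped)
</code></pre>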
TGS Salt Identification Challenge
open source 60th solution(code)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TGS Salt Identification Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="https://github.com/liuchuanloong/kaggle-TGS-salt-identification">open source 60th solution(code)</a> for kaggle TGS salt identification challenge this my first kaggle competition, I worked hard in last month competition times, thanks for all people shared their ideas, in especial <a href="https://www.kaggle.com/shaojiaxin">Jack (Jiaxin) Shao</a>, <a href="https://www.kaggle.com/pestipeti">peter</a> and <a href="https://www.kaggle.com/hengck23">heng</a>, they give me a lot of encouragement, I have gained so much from this competition, and thanks for <a href="https://github.com/arc144/Kaggle-TGS-Salt-Identification">Eduardo Rocha de Andrade's code for our baseline model </a></p> <h2>our final solution</h2> <ul> <li>Input: 101 -&gt; resize to 202 -&gt; pad to 256 </li> <li>Encoder: ResNet34 pretrained on ImageNet</li> <li>Centerblock: FPA model for attention </li> <li>Decoder: conv3x3 + GAU </li> </ul> <h2>Training overview:</h2> <ul> <li>Optimizer: Lovasz Loss, SGD, Batch size: 32, Reduce LR on cosine annealing, 100 epochs each, LR starting from 0.01 <br> <ol><li>single model ResNet34 got 0.856 public LB (0.878 private LB) </li> <li>5-fold Average Ensemble ResNet34 got 0.859 public LB (0.881 private LB) </li> <li>all threshold we used 0.45 for best score </li> <li>Transfer above best model for 5-fold Average Ensemble ResNet34 got 0.864 public LB (0.883 private LB) </li></ol></li> </ul> <h2>Augmentations</h2> <ul> <li><p>we used <a href="https://www.kaggle.com/hengck23/competitions">heng's</a> augmentations <a href="https://drive.google.com/drive/folders/18_gAnL1GMD7Ogyz4T3Y0l_UD31qagsc-?usp=sharing">code</a>, we first used on keras model but did not work , in pytorch worked perfect</p></li> <li><p>do_horizontal_flip2 </p></li> <li>do_random_shift_scale_crop_pad2 0.2 </li> <li>do_horizontal_shear2 (-0.07, +0.07) </li> <li>do_shift_scale_rotate2 rotate 15 </li> <li>do_elastic_transform2 (0, 0.15) </li> <li>do_brightness_shift (-0.1, +0.1) </li> <li>do_brightness_multiply (-0.08, +0.08) </li> <li>do_gamma (-0.08, +0.08) </li> </ul> <h2>preprocessing</h2> <ul> <li><a href="https://www.kaggle.com/c/tgs-salt-identification-challenge/discussion/61949#385778">depth channels</a> </li> <li>Stratified in this <a href="https://www.kaggle.com/shaojiaxin/u-net-with-simple-resnet-blocks-v2-new-loss">kernel</a> rather than simply cov_to_class, this give me 0.03 boost</li> </ul>
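<p>For the stratification point above, a common way to build coverage-stratified folds looks like the sketch below (along the lines of the referenced kernel); the number of coverage bins is an assumption.</p> <pre><code>import numpy as np
from sklearn.model_selection import StratifiedKFold

def coverage_class(mask, n_classes=10):
    # bin each binary training mask by its salt coverage (fraction of salt pixels)
    return int(np.ceil(mask.mean() * n_classes))

def make_folds(masks, n_splits=5, seed=42):
    classes = np.array([coverage_class(m) for m in masks])
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    return list(skf.split(np.arange(len(masks)), classes))
</code></pre>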
TReNDS Neuroimaging
10th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers for this exciting competition and also congratulations to all the winners.</p> <p>Here is a brief description of my solution.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2615556%2Fd058e1058828f2adcb7b3c3ba2abeab2%2Fsol%20(6).png?generation=1593597555606133&amp;alt=media" alt=""></p> <ul> <li>My final submission (public: 0.15665, private: 0.15741) is an average of several variations of the prediction above.</li> <li>The blending part of the pipeline above uses not only OOF predictions but also predictions on the training set (Gaussian noise is added to reduce overfitting). This improved the public LB score 0.1577 -&gt; 0.1569.</li> <li>I used only a ridge regressor as the 2nd-level model. The parameter alpha and the scales of the features were optimized with Optuna.</li> </ul>
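<p>A hedged sketch of the blending idea in the second bullet (a 2nd-level ridge fitted on OOF predictions plus Gaussian-noised predictions made on the training set). The noise scale and alpha below are placeholders; in the actual solution alpha and the feature scales were tuned with Optuna.</p> <pre><code>import numpy as np
from sklearn.linear_model import Ridge

def fit_blender(oof_preds, train_preds, y, noise_std=0.05, alpha=1.0, seed=0):
    """oof_preds / train_preds: (n_samples, n_models) 1st-level predictions, y: targets."""
    rng = np.random.default_rng(seed)
    noisy_train = train_preds + rng.normal(0.0, noise_std, size=train_preds.shape)
    X = np.vstack([oof_preds, noisy_train])        # stack OOF rows and noisy train-set rows
    blender = Ridge(alpha=alpha)
    blender.fit(X, np.concatenate([y, y]))
    return blender
</code></pre>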
TReNDS Neuroimaging
11th place solution (14th with pure SVR)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks again to the organizers for a great competition and for all my competitors for a great challenge. As my first kaggle competition, I learned a lot and trying to keep up with the top scores was a thrill (while also very stressful)!</p> <p>I finally was able to write up the (almost) whole code base to my approach. Here's how it worked and what made my approach sometimes unique:</p> <ol> <li><a href="https://www.kaggle.com/miykael/trends-exploration-of-the-targets">TReNDS: Exploration of the targets</a>: Having a closer look at the targets I decided for the following transformations. <ul><li><strong>First</strong>, targets were scaled using a <code>RobustScaler</code>.</li> <li><strong>Second</strong>, targets 2 to 5 were transformed to a more normal distribution using a power of 1.5</li> <li><strong>Third</strong>, I took the decision to restrict all predictions to the unique values in the target set. Especially for <code>age</code> this was helping a lot.</li></ul></li> <li><a href="https://www.kaggle.com/miykael/trends-feature-exploration-engineering">TReNDS: Feature exploration &amp; engineering</a>: Concerning the feature datasets, I removed about 1% of extrem outliers (i.e. subjects with often very strange feature values) and I've created two additional feature datasets: <ul><li><strong>Intra subject</strong> features: By computing the correlations between all 53 MRI maps within a subject, I've created 1'378 new features hopefully representing within subject characteristics.</li> <li><strong>Inter subject</strong> features: By computing the correlation between each of the 53 MRI maps and the corresponding averaged population map, I've created 53 new features hopefully representing between subject characteristics.</li></ul></li> <li><a href="https://www.kaggle.com/miykael/trends-data-scaling-and-modeling">TReNDS: Data scaling and modelling</a>: Following the lead from the kaggle discussions/notebooks I investigated a feature dataset unique scaling approach. However, instead of scaling the FNC dataset just with a factor of 500, I fine tuned the exact scaling factor for each of my 4 feature datasets. Additionally, this fine tuning was done separately for each of the 5 targets, and for each of the models I've explored (i.e. Ridge and SVR with an RBF kernel).</li> </ol> <p>These three notebooks brought me to a leaderboard position below 20. Only in the last 5 days I've discovered the <a href="https://www.kaggle.com/david1013/trends-multi-layer-model">TReNDS Multi-Layer Model</a> notebook showcasing a stacking approach. By adapting this code and optimizing it to my individually scaled dataset, I was able to push my score the ladder up to my final position.</p> <p>As an additional goody, I've also tried to profit from the particular relationship between <code>domain2_var1</code> and <code>domain2_var2</code> (as described in my first notebook) and was hoping that a multi-target prediction approach could help. To do so, I've implemented a dense neural network with two outputs and multiple loss functions. Unfortunately, this approach wasn't fruitful. 
Nonetheless, the corresponding notebook can be found here: <a href="https://www.kaggle.com/miykael/multi-loss-neural-network-for-domain2-targets">Multi-loss neural network for domain2 targets</a></p> <p>Thanks again for a great competition.</p> <p>Cheers, Michael</p>
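<p>The "restrict predictions to the unique values in the target set" step from the first notebook is simple to reproduce; a small sketch with hypothetical variable names:</p> <pre><code>import numpy as np

def snap_to_observed(preds, train_targets):
    """Map each prediction to the nearest value that actually occurs in the training targets."""
    candidates = np.sort(np.unique(train_targets))
    nearest = np.argmin(np.abs(candidates[None, :] - np.asarray(preds)[:, None]), axis=1)
    return candidates[nearest]

# e.g. submission["age"] = snap_to_observed(submission["age"].values, train["age"].values)
</code></pre>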
TReNDS Neuroimaging
12th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Looking forward to it. Here's the features: <a href="https://www.kaggle.com/evangeliney/trendsneuroimagingaal">https://www.kaggle.com/evangeliney/trendsneuroimagingaal</a></p> <p>I like your approach! It's very creative. Great notebooks too, they're so detailed. I still have the features from the AAL parcellation if it's something you want to play with.</p> <p>Thank you to the host for organizing this competition and thank you to Kaggle for the computing resources.</p> <p>Congratulations to all the winners and medal finishers. Thanks for sharing your solutions!</p> <h2>Parcellation</h2> <p>My initial approach was to use 3D CNNs but I couldn't get a good performance out of them so I switched to a parcellation-based approach.</p> <p>I wanted to use a parcellation with a moderate number of brain regions to keep the number of features manageable. I ended up using the <a href="https://nilearn.github.io/modules/generated/nilearn.datasets.fetch_atlas_aal.html">AAL</a> parcellation which has 116 regions.</p> <h2>Age prediction</h2> <p>For each of the 3D maps, I extracted the mean intensity in each region (116 regions x 53 maps = 6148 features).</p> <p>I used the loading, fnc and these maps-derived features to train a ridge model. I got a decent score for age prediction by just tuning the alpha parameter and two weight variables. The weight variables are used to scale down the fnc and maps-derived features.</p> <p>Then I extracted the variance for each region and added them to the feature list. This led to a 0.0007 LB boost.</p> <p>I tried using skew and kurtosis as features but this didn't improve the score. My final age prediction is a weighted average of 3 ridge models.</p> <h2>Domain variables predictions</h2> <p>Unfortunately, these maps-derived features didn't seem to help much in predicting the domain variables. I only trained a single model for each of the domain variables.</p> <h2>Site effects</h2> <p>I built a site classifier using logistic regression and ridge models. I later stumbled upon the <a href="https://github.com/Warvito/neurocombat_sklearn">Combat</a> harmonization method and I used it to adjust the features for site effects.</p> <p>That's interesting, I didn't know that it's popular. And thanks for mentioning sva, I learned something new.</p>
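<p>A sketch of extracting per-region means and variances with the AAL atlas via nilearn. This assumes each subject's 53 spatial maps have been converted to a single 4D NIfTI image (the competition ships them as .mat files, so a conversion step is needed first), and the masker/atlas settings here are illustrative rather than the author's exact code.</p> <pre><code>import numpy as np
from nilearn.datasets import fetch_atlas_aal
from nilearn.input_data import NiftiLabelsMasker

atlas = fetch_atlas_aal()                                                # 116 regions
mean_masker = NiftiLabelsMasker(labels_img=atlas.maps, strategy="mean")
var_masker = NiftiLabelsMasker(labels_img=atlas.maps, strategy="variance")

def subject_features(subject_4d_img):
    region_means = mean_masker.fit_transform(subject_4d_img)            # (53 maps, 116 regions)
    region_vars = var_masker.fit_transform(subject_4d_img)              # (53 maps, 116 regions)
    return np.concatenate([region_means.ravel(), region_vars.ravel()])  # 2 x 6148 features
</code></pre>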
TReNDS Neuroimaging
15th Place (3DResnet Part)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hey <a href="/authman">@authman</a> - thanks for your thoughtful post. I hope you don't think I've been ignoring it - I just want to make sure I take the time to write a full response... and have been a little busy lately. Maybe it might be easier to chat on the kagglenoobs slack sometime.</p> <p>Thanks to Kaggle and the host for putting on a really interesting competition. It was really great being on a super talented team with <a href="/cpmpml">@cpmpml</a> <a href="/philippsinger">@philippsinger</a> <a href="/ericfreeman">@ericfreeman</a> and <a href="/fakeplastictrees">@fakeplastictrees</a> - I learned a lot from them in such a short period of time and am thankful that they teamed up. Also congratulations to all the gold medal finishers, I'm really looking forward to reading your solutions!</p> <h2>3D (4D?) Resnet</h2> <p>My main contribution to our team was through building a 3D Resnet. I decided to use Keras, and most of the architecture was built off <a href="https://github.com/JihongJu/keras-resnet3d/blob/master/resnet3d/resnet3d.py">this code I found on github</a>; I just had to modify it some to work with tf2.0. The models themselves did not fare well on the LB, but when blended with a good tabular model they provided a good jump.</p> <p>We initially had a 0.1589 LB tabular model -&gt; blending 20/80 with the 3D Resnet improved to 0.1587 LB -&gt; adding mixup augmentation improved to 0.1578 LB.</p> <p>I also converted all the fMRI and tabular data to <a href="https://www.tensorflow.org/tutorials/load_data/tfrecord">tfrecords</a> format to optimize loading from disk.</p> <p>The main part of the model itself looked like this:</p> <pre><code>input = Input(shape=input_shape, name='fMRI')

# first conv
conv1 = conv_bn_relu3D(filters=64, kernel_size=(7, 7, 7),
                       strides=(2, 2, 2),
                       kernel_regularizer=l2(reg_factor))(input)
pool1 = MaxPooling3D(pool_size=(3, 3, 3), strides=(2, 2, 2),
                     padding="same")(conv1)

# repeat blocks
block = pool1
filters = 64
for i, r in enumerate(repetitions):
    block = residual_block3d(block_fn, filters=filters,
                             kernel_regularizer=l2(reg_factor),
                             repetitions=r,
                             is_first_layer=(i == 0))(block)
    filters *= 2

# last activation
block_output = bn_relu(block)

# average pool and classification
pool2 = AveragePooling3D()(block_output)
pool_dense = Dense(256*4*4, activation='sigmoid')(pool2)
flatten1 = Flatten()(pool_dense)

# Dense output
dense_out = Dense(units=num_outputs)(flatten1)
</code></pre> <h2>Augmentations</h2> <h3>4D Mixup</h3> <p>The biggest gain I made with this model was by adding a customized version of mixup for 4D. It was the same idea as mixup with images - I combined two random images and then made the target for this "new" brain the weighted average of the labels. I did this in the tensorflow dataset pipeline per batch.
While it did slow down training a good bit (200 seconds per epoch became 300 seconds per epoch), it improved the CV and LB score by ~0.001 after blending.</p> <p>The augmentation code was based off <a href="/cdeotte">@cdeotte</a>'s notebook for 2d images <a href="https://www.kaggle.com/cdeotte/cutmix-and-mixup-on-gpu-tpu">here</a>:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F644036%2Ff05fcc9a899b7cd0efbe56523c17cb24%2F4dmixup.png?generation=1593481515794349&amp;alt=media" alt=""></p> <pre><code>def mixup4D(image, label, prob=1.0):
    """ 4D Mixup of fMRI images size (BATCH, 53, 63, 52, 53) """
    imgs = []; labs = []
    for j in range(BATCH):
        P = tf.cast(tf.random.uniform([], 0, 1) &lt;= prob, tf.float32)
        k = tf.cast(tf.random.uniform([], 0, BATCH), tf.int32)
        a = tf.random.uniform([], 0, 1) * P
        # MAKE MIXUP IMAGE
        img1 = image[j,]
        img2 = image[k,]
        label1 = label[j,]
        label2 = label[k,]
        imgs.append((1 - a) * img1 + a * img2)
        # Average Labels
        labs.append((1 - a) * label1 + a * label2)
    image2 = tf.reshape(tf.stack(imgs), (BATCH, 53, 63, 52, 53))
    label2 = tf.reshape(tf.stack(labs), (BATCH, 5))
    return image2, label2
</code></pre> <h3>4D Cutmix</h3> <p>After mixup worked so well I took a lot more time to get 4D cutmix working... but that didn't show any CV or LB improvements. I'm assuming I could've gotten it to work better if I had perfected the cube shape, but experimentation took too long to fully flesh out. I'm guessing that with my implementation most of the time the cube consisted of area not within the mask region.</p> <h3>Mirror Flip</h3> <pre><code>def random_mirror_flip(image, label):
    imgs = []
    for j in range(AUG_BATCH):
        uniform_random = random_ops.random_uniform([], 0, 1.0)
        mirror_cond = math_ops.less(uniform_random, .5)
        result = control_flow_ops.cond(
            mirror_cond,
            lambda: flip_left_right(tf.reverse(image[j,], axis=[0, 2])),
            lambda: image[j,]
        )
        imgs.append(result)
    image2 = tf.reshape(tf.stack(imgs), (AUG_BATCH, 53, 63, 52, 53))
    return image2, label
</code></pre> <p>I created a custom augmentation for mirror flip. After reading some literature it seemed like this was a common technique, but it didn't improve my CV or LB score. I think that, since the left/right sides of the brain carry more meaning than they do in MRI images of other organs, the mirror flip just made the model lose its knowledge of the different brain regions.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F644036%2Fbb1f1e7d95f685db2a3cb7e17b6b52b8%2Fmirror_flips.png?generation=1593482089335977&amp;alt=media" alt=""></p> <p>That's all. It's a bummer to miss out on gold but I am grateful to have learned so much in this competition.</p> <p>Thanks. Individually they weren't too great. Here is one of the models submitted individually. Our final solution ensembled many different versions of these 3DResnets.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F644036%2F2436d57e69a47c349142caee8f29eb18%2FScreen%20Shot%202020-06-30%20at%208.14.21%20AM.png?generation=1593519430430118&amp;alt=media" alt=""></p>
TReNDS Neuroimaging
16th Place “Simple” Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, we (<a href="https://www.kaggle.com/kannahashimoto">@Kanna Hashimoto</a>, <a href="https://www.kaggle.com/toseihatori">@hatry</a>) would like to thank you all for organizing a very interesting themed competition. </p> <h1>Solution overview</h1> <ul> <li>Adversarial Validation for site1/site2 <ul><li>Our policy was that we should only model using features that do not affect the site classification. (It may have been meaningless as a result.)</li> <li>We did adversarial validation to classify site1 (all training data) vs. site2 (test data which is clearly revealed to be site2) using only the loading features. It resulted in an AUC of 0.9.</li> <li>We excluded the top 3 features that contributed to the classification of site1 and site2. As a result, the AUC decreased to 0.6 ~ 0.7.</li></ul></li> <li>Stacking <ul><li>The local CV of the 1st-level models is 0.163 ~ 0.1575.</li> <li>The 2nd-level model's local CV is 0.15674, LB is 0.15766, private is 0.15786.</li></ul></li> <li>We did not use 3d CNN models.</li> </ul> <h1>Features</h1> <ul> <li>loading, fnc features <ul><li>We created multiplication and subtraction variables.</li></ul></li> <li>Correlation between images <ul><li>We used the correlation coefficients between the 53 fMRI 3d images for each user ID. Removing the 0 pixels improved the local cv a bit.</li></ul></li> <li><p>histogram features, statistical features</p> <ul><li>Image histograms and statistics were also useful. Just computing histograms and statistics on the whole image improved the local cv, but features computed on the images after splitting them into smaller cubes were more effective.</li></ul></li> </ul> <h1>Models</h1> <ul> <li>1st-level models <ul><li>Ridge + Optuna + random seed averaging</li> <li>MLP + random seed averaging</li> <li>LightGBM + random seed averaging</li></ul></li> <li>2nd-level model <ul><li>Bayesian Ridge + Optuna</li></ul></li> </ul>
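<p>A hedged sketch of the adversarial-validation step described above (classify site1 training rows against the revealed site2 test rows on loading features only, then drop the most site-predictive features and re-check the AUC). In practice the features should be standardised before reading importances off the coefficients; this is not the team's exact code.</p> <pre><code>import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

def adversarial_validation(train_loading, site2_loading, n_drop=3):
    X = pd.concat([train_loading, site2_loading], axis=0)
    y = np.r_[np.zeros(len(train_loading)), np.ones(len(site2_loading))]
    clf = LogisticRegression(max_iter=1000)

    oof = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
    auc_before = roc_auc_score(y, oof)

    clf.fit(X, y)
    drop = X.columns[np.argsort(np.abs(clf.coef_[0]))[-n_drop:]]      # most site-predictive features
    oof_red = cross_val_predict(clf, X.drop(columns=drop), y, cv=5, method="predict_proba")[:, 1]
    auc_after = roc_auc_score(y, oof_red)
    return list(drop), auc_before, auc_after
</code></pre>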
TReNDS Neuroimaging
1st private (1st public) place + code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Our team: Nikita Churkin <a href="/churkinnikita">@churkinnikita</a>, Dmitry Simakov <a href="/simakov">@simakov</a>.</p> <p>First of all, thanks to the organizers, Kaggle team and all participants: it was important and very exciting challenge for our team. In addition, we want to thank everyone who wished us luck in kaggle discussions.</p> <p>The core of our approach is: I. Generate powerful features from 3D fMRI data. II. Train-test “matching”. III. Training of a diverse ensemble (different models and feature subsets). IV. Сombining all together.</p> <p>Let us describe the details.</p> <h2>I. Generate features from 3D fMRI data.</h2> <p>We tried to use 3D CNN models at first, but did not spend so much time on it. Small network (resnet10 without pretraining) with AdamW, L1 regularization, <a href="https://github.com/HolyBayes/pytorch_ard">ard</a> layers, snapshots and CutOut3D augmentation worked better as a regressor. 3D autoencoders (simple or UNet-like) + PCA (or ICA) also worked, but was slightly worse for us. </p> <p>The score overall was quite low (only age and domain2-var2 were higher than median prediction) and composing with others features was difficult (3D CNN model trains 10 epochs, whereas MLP on loadings and fnc 70 epochs). So we (mostly <a href="/simakov">@simakov</a>) decided that it was too hard problem for many of top20 participants and they must use something different (it is great that we are wrong, your solutions are cool and interesting!).</p> <p>Our research showed that (obviously) the most important problem was high dimensionality of 3D data. Most related papers use some dimensionality reduction methods to solve this problem. We already had data after such procedure (see data description). But we wanted to move further. If we understood right, data preparation pipeline consisted of two stages: intra-subject level and inter-subject level (it will be cool to hear more about it from <a href="/miykael">@miykael</a>). For the second one, we had to work with all subjects, but they did not fit in the memory. Nipype probably can work with such data, but it is too complicated for fast usage (and <a href="/miykael">@miykael</a> great notebooks don't help ). So, we used classics – sklearn! Sklearn has some algorithms for mini-batch dimensionality reduction: dictionary-learning and Incremental PCA with useful method <code>partial_fit</code>. In theory, we prefer to use as much components as possible (there is probably a lot of information in 3D scans), use as much features (channels) as possible (for better interactions among them), and have smallest batch-size (RAM limit). This algorithms have condition that bath-size &gt;= n_components.</p> <p>The most useful scheme was Incremental PCA with n_components 200, batch-size 200. We splited channels in groups by 10 and flattened inside them (6 groups in total). All training took 9 hours for 24 threads processor and 64gb RAM. As a result, we had 1200 PCA features and they improved our model by 2.3e-3. We also tried dictionary-learning for diversity with n-components 100, with batch-size 100 and n-iters 10 (training procedure took 2 days). 
We fitted models on train + test data without any additional preprocessing (different settings were generating lower scores or asked for much more RAM). </p> <p>We refused to use 3D CNN models in our submissions – it was too long and we already saw great score on the CV and Public LB.</p> <p>In addition, we computed some simple statistics, like mean, std, and quantiles inside each feature of 3D scans and similar stats for FNC matrix.</p> <h2>II. Train-test matching</h2> <h3>Preprocessing for constructed features.</h3> <p>The idea is actually simple: add a bias to different columns in test set to make it closer to train set. Linear models showed good performance in the competition so it was expected that adding biases would help a lot (at least for linear models). There are lots of way to figure out possible biases: we used minimization of Kolmogorov-Smirnov test’s statistic between train[col] and test[col]+ b.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F597945%2F4bafd16ce91cf310ef6e47fe0240c188%2Foffsets.png?generation=1593547322071002&amp;alt=media" alt=""></p> <p>You can see that resulting distributions are not perfectly matched but all gaps mostly disappeared: it was enough to boost our public/private score significantly for any single model or the whole ensemble. We used biases for site2 data (known points + classifier output) and separate biases for other test data.</p> <p>The same logic can be applied (as postprocessing) to submission.</p> <h3>Design of good site2 classifier</h3> <p>At the beginning, we thought that classifier was the most important part for surviving shakeup. As it was mentioned before, offsets for site2 (both known and with classifier) improved the public LB score by 1е-4. The same effect they had in the private LB. So, we can took the first place even without classifier and offsets for site2.</p> <p>Nevertheless, our approach:</p> <p>Raw loading, fnc and pca features were used for site2 classifier. We applied StandartScaler for train + test data (it boosts score by ~0.02 roc-auc). Regression ElasticNet was used for modeling. Then, we made pseudo-labes inside training fold (hard labels for only site2 with threshold 0.3) and retrained model with pseudo-labels (only +0.004 gain). Final metrics were: 0.973 roc-auc, 0.76 F1-score (subject to site2 class). Our model detected ~1400 new site2 observations, but 200-400 of them were False Positive errors.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F597945%2F3f8ad9c1dd6686b70ea3190bd73e078b%2Fscheme.png?generation=1593547171811243&amp;alt=media" alt=""></p> <p>If we apply offsets before training classifier, roc-auc becomes 0.67 and our model poorly distinguishes site1 form site2.</p> <h3>Postprocessing.</h3> <p>Postprocessing used the same logic as a preprocessing: we try to make the distribution of site2 predictions (revealed + from classifier) closer to distribution of site1 predictions. We calculate KS statistic for site2 and not site2 predictions and find best shift. But the effect of postprocessing was small: only 5e-5 for both public and private. </p> <h2>III. Training of a diverse ensemble (different models and feature subsets).</h2> <p>Main and only “model” for domain1_var2 was median prediction.</p> <p>And for other targets we used ML models to create "score" for every label and every feature subset. 
Scores are just OOF predictions on the data of only one source (like, for example, FNC features or dictionary learning features). Random Forests and GBM performed poorly in making scores, learning on raw features or stacking.</p> <p>But we wanted to use something tree-based in our ensemble (to add diversity) and recalled about <a href="https://arxiv.org/abs/1109.0887">RGF </a>.</p> <p>RGF showed nice performance in construction of scores and all levels of stacking (especially for age prediction).</p> <p>Making of scores allowed us to effectively use more sophisticated and/or slow algorithms (like MLP or RGF) on top of, in some way, all data at once. Scores were build by blending some fitted linear models (like Ridge or OMP) and RGF.</p> <p>Our stacking approach involved different models on different datasets (including scores) for each target. Final stacking for complicated labels such domain2-var1 and domain1-var1 included only blend of basic linear models + NuSVM, KernelRidge and Gaussian Process trained on different datasets. Final ensemble of domain2-var2 contained more models but nothing truly different from complicated case (just a little more of dataset-model packages).</p> <p>Because detailed listing of all target-dataset-model is boring we are going to explain detailed stacking scheme only for age.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F597945%2Feae94538eefadc9324d5d2952ec94d0e%2Fstack.png?generation=1593547091734113&amp;alt=media" alt=""></p> <h2>TLDR:</h2> <p>1) Incremental PCA for 3d images. 2) Offsets for test features (like we did in ION). 3) Large (4-5 levels) stack. Our path can be summarised in that plot:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F597945%2Fdb5df8ddc55fcc420de046a016722a95%2FProgress2.png?generation=1593547263683229&amp;alt=media" alt=""></p> <p>UPDATE: Our code: <a href="https://github.com/DESimakov/TReNDS/">https://github.com/DESimakov/TReNDS/</a></p> <p>We saw particular this <a href="https://github.com/miykael/nipype-beginner-s-guide">one</a>, because links from one papper led to removed repo where you was contributer, then we opened your repo and found this :)</p>
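A minimal sketch of the train-test matching from section II above: for each feature, search for the shift b that minimizes the Kolmogorov-Smirnov statistic between train[col] and test[col] + b. The grid search and its range are assumptions; the authors only state that the KS statistic was minimized, with separate shifts for site 2 and the remaining test rows.

```python
import numpy as np
from scipy.stats import ks_2samp

def find_offset(train_col, test_col, n_grid=201):
    # try shifts in a range around the observed mean difference
    center = train_col.mean() - test_col.mean()
    span = 2 * train_col.std()
    grid = np.linspace(center - span, center + span, n_grid)
    stats = [ks_2samp(train_col, test_col + b).statistic for b in grid]
    return grid[int(np.argmin(stats))]

# usage (illustrative): one offset per column, computed separately for the
# known/predicted site-2 rows and for the rest of the test set
# b = find_offset(train[col].values, test.loc[site2_mask, col].values)
# test.loc[site2_mask, col] += b
```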
TReNDS Neuroimaging
20th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi Everyone,</p> <p>Ending final validation, I share my solution.</p> <p>First, Channel-wise 3D convolutions are applied to fMRI spatial maps. All channels share the weight of convolution to prevent overfitting. Output features are thrown into Edge Update GNN with FNC correlation.</p> <p>The outputs form GNN are averaged and concatenated with sMRI loading. Finally, conventional MLP is applied and prediction for age and other target variables are obtained.</p> <p>Code: <a href="https://github.com/toshi-k/kaggle-trends-assessment-prediction">https://github.com/toshi-k/kaggle-trends-assessment-prediction</a> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F172060%2F7b73d84ca14bf27a964417366d92e5bb%2Fconcept.png?generation=1593698635618709&amp;alt=media" alt=""></p> <p>Hi Tawara,</p> <p>I forked the custom loss function from your public notebook. Thank you for sharing the excellent NN baseline ! <a href="https://github.com/toshi-k/kaggle-trends-assessment-prediction/blob/master/source/02_edge_gnn/lib/loss.py">https://github.com/toshi-k/kaggle-trends-assessment-prediction/blob/master/source/02_edge_gnn/lib/loss.py</a></p>
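A minimal sketch of the "channel-wise 3D convolutions with shared weights" idea: each of the 53 spatial maps goes through the same small 3D CNN by folding the component axis into the batch axis, producing one feature vector per map that can feed the edge-update GNN. The layer sizes here are guesses, not the author's architecture.

```python
import torch
import torch.nn as nn

class SharedMapEncoder(nn.Module):
    def __init__(self, out_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm3d(8), nn.ReLU(inplace=True),
            nn.Conv3d(8, out_dim, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm3d(out_dim), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),
        )

    def forward(self, x):                      # x: (B, 53, X, Y, Z)
        b, c = x.shape[:2]
        x = x.reshape(b * c, 1, *x.shape[2:])  # fold the component axis into the batch
        feats = self.net(x).flatten(1)         # (B*53, out_dim), same weights for all maps
        return feats.view(b, c, -1)            # (B, 53, out_dim) -> GNN node features

x = torch.randn(2, 53, 53, 63, 52)
print(SharedMapEncoder()(x).shape)             # torch.Size([2, 53, 32])
```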
TReNDS Neuroimaging
24th place - 3D fMRI Feature Engineering Partial Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks <a href="/miykael">@miykael</a> </p> <p>I'm really interested in this field now. I actually read the whole nilearn docs and your github repositories. openneuro.org and datalad looks amazing. I wish I could know them prior to this competition but I haven't even heard about neuroimaging one month ago.</p> <p>I have a question for you. I also read <a href="https://surfer.nmr.mgh.harvard.edu/fswiki/CorticalParcellation_Yeo2011">Cortical Parcellation Yeo 2011</a>. Is this how the 7 networks are derived in our dataset? Can more networks yield more information?</p> <p><a href="/miykael">@miykael</a> Thanks for answering my question. I already saw those algorithms and I thought about extracting ROIs for every different target by using targets in those algorithms. I abandon the idea for some reason back then. Do you think that would be useful?</p> <p><a href="/broach">@broach</a> Congrats on your silver medal Brian. That was a really smart idea. My amazing teammate <a href="/authman">@authman</a> have also tried something very similar. He tried to rank every 58869 voxel for every 53 component for every subject for every target. That was the craziest thing I saw so far and sadly, he got a similar result as yours.</p> <p>I wish I could worked with domain experts like you guys in this competition. I wish you guys good luck in your next competitions</p> <p>This was the best competition I joined by far in my opinion. I really enjoyed working on this topic and I might even want to work on neuroimaging as a job. I was little bit disappointed because most of the teams didn't even bother looking at <code>fMRI_train</code> and <code>fMRI_test</code>. Those teams lost large amount of information by skipping that part of the dataset. Our solution heavily relies on 3D fMRI data and I'm updating my data analysis kernel with all feature engineering code described here. It has more detailed explanation in this kernel here. (It takes 8 hours to create those features so kernel is still comitting)</p> <p><a href="https://www.kaggle.com/gunesevitan/trends-neuroimaging-data-analysis-3d-features">https://www.kaggle.com/gunesevitan/trends-neuroimaging-data-analysis-3d-features</a></p> <h2>3D Feature Engineering</h2> <p>Features are based on the differences of fMRIs between different target groups. The biggest differences in those fMRIs are the spread and intensity of active/inactive areas. I plotted the youngest and oldest subject's fMRIs side by side for 53 components, and tried to extract some features. Some of those features worked, most of them didn't. </p> <p><img src="https://i.ibb.co/44mHnBn/fmri.jpg" alt=""></p> <p>Data in <code>fMRI_train</code> and <code>fMRI_test</code> are represented in 4 dimensional space: 3 spatial dimensions and one component dimension. The 4th dimension is time in most cases but it is 53 different components in this dataset. 
A 3D brain mask is applied to the 4D samples in order to convert them into a restructured 2D representation.</p> <p><img src="https://nilearn.github.io/_images/masking1.jpg" alt=""></p> <p><code> fmri_mask = nl.image.load_img('../input/trends-assessment-prediction/fMRI_mask.nii') sample = h5py.File(f'../input/trends-assessment-prediction/fMRI_train/10001.mat', 'r')['SM_feature'][()] sample = np.moveaxis(sample, [0, 1, 2, 3], [0, 3, 2, 1]) masked_sample = sample[:, fmri_mask.get_data() == 1].astype(np.float32) </code> This code loads the fMRI mask <code>(53, 63, 52)</code> and a random sample <code>(53, 52, 63, 53)</code>, reorients the axes of the sample to <code>(53, 53, 63, 52)</code> and applies the mask to it. The final shape of the sample is <code>(53, 58869)</code>, which is <code>(n_components, n_voxels)</code>. All of our features are extracted in this space. The same fMRIs look like this when they are masked.</p> <p><img src="https://i.ibb.co/3BmNgRH/fmri-masked.jpg" alt=""> </p> <p>The features created in this space are: * Skew and kurtosis of each of the 53 components * Skew of <code>np.diff(component)</code> * Active and inactive regions are created for the 53 components with the lines below, and their skew and shapes are extracted * <code>active = component[component &gt; component_mean + component_std]</code> * <code>inactive = component[component &lt; component_mean - component_std]</code> * Active mean / inactive mean and active std / inactive std are created for the 53 components * Component skew - active skew and component skew - inactive skew are created for the 53 components * More active and inactive regions are created for the 53 components with the lines below, and their shapes are extracted * <code>more_active = component[component &gt; component_mean + (2 * component_std)]</code> * <code>more_inactive = component[component &lt; component_mean - (2 * component_std)]</code> * Final features: each component (signal) is divided into 15 equal parts, which correspond to bounding boxes in 3D space. The active region shape and mean of each box are extracted with the lines below. I intended to capture active region spreads and intensities at fixed locations with those features.</p> <p><code> box_count = 15 for b, box in enumerate(np.array_split(component, box_count)): features[f'{icn_order[i]}_box{b}_mean'] = box.mean() box_active = box[box &gt; component_mean + component_std] features[f'{icn_order[i]}_box{b}_active_spread'] = box_active.shape[0] </code></p> <p>At the end I created 2279 new features and they helped my not so special models to reach 0.1581. <code> Number of Bounding Box Features: 1590 Number of Statistical Features: 689 </code></p> <p>I also tried smoothing with a Gaussian filter before feature engineering, but it either had no effect or lost too much information, as seen below. (The first image has no smoothing, the second has smoothing with factor 5 and the third has smoothing with factor 10.)</p> <p><img src="https://i.ibb.co/CJqjVbf/smoothing.jpg" alt=""></p> <h2>Models</h2> <p>My models were very simple. I used 2 ridge regressions and 2 RAPIDS SVRs. Two of those models included FNC features and two of them didn't. I used 10-fold KFold with shuffle as cross-validation and trained each model with 5 different shuffle seeds. I also tried dropping 5% of the FNC features and bounding box features in every iteration for diversity. Finally, I added ±0.1 random noise to <code>age</code> to increase cardinality, since it was rounded. 
Models are shared in this kernel here.</p> <p><a href="https://www.kaggle.com/gunesevitan/trends-neuroimaging-linear-model-ensemble">https://www.kaggle.com/gunesevitan/trends-neuroimaging-linear-model-ensemble</a></p>
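A compact sketch of the per-component statistics listed in the feature-engineering section above (skew, kurtosis and active/inactive voxel summaries). The exact feature set and naming here are illustrative; there are no guards for empty or zero-mean regions in this sketch.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def component_stats(masked_sample):            # masked_sample: (53, 58869)
    feats = {}
    for i, comp in enumerate(masked_sample):
        m, s = comp.mean(), comp.std()
        active = comp[comp > m + s]            # strongly activated voxels
        inactive = comp[comp < m - s]          # strongly deactivated voxels
        feats[f"c{i}_skew"] = skew(comp)
        feats[f"c{i}_kurtosis"] = kurtosis(comp)
        feats[f"c{i}_diff_skew"] = skew(np.diff(comp))
        feats[f"c{i}_active_spread"] = active.shape[0]
        feats[f"c{i}_inactive_spread"] = inactive.shape[0]
        feats[f"c{i}_active_mean_ratio"] = active.mean() / inactive.mean()
    return feats
```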
TReNDS Neuroimaging
2nd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I used the OOF predictions with Nelder-Mead to find the best weights, and then just multiplied the predictions by them.</p> <p>First of all, thanks to the competition organizers for an interesting and leak-free competition! And congrats to the winners!</p> <h1>Overview</h1> <p>My solution is a 2-stage stacking, with weight optimization applied on top. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F317344%2F4117c544c79b1e8f34a3b8287e07f0a7%2F2nd%20place%20pipeline.png?generation=1593488007825961&amp;alt=media" alt=""></p> <h1>OOF correlation heatmap</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F317344%2Fa4f1dddd12d18ef0d623d34af62cea07%2FOOF%20corr%20hm.png?generation=1593484113552540&amp;alt=media" alt=""></p> <p>These names stand for the following: <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F317344%2F625b974272f82c8b1f52da3036664979%2Foofs.png?generation=1593484633888789&amp;alt=media" alt=""></p> <h1>Findings</h1> <ul> <li>SVM and linear models work well on the tabular data, but tree-based models don't work as well as KNN</li> <li>3D ResNet18 is the best CNN here for me. Compared to that, ResNet50, ResNeXt50 and ResNeXt101 don't work well.</li> <li>Splitting the 3D fMRI into 3~6 pieces gave me a boost, but 8 pieces were useless.</li> <li>I'm loved by 2nd place</li> </ul> <h1>Appendix</h1> <p>脳 means brain in Japanese and is pronounced / nou /, so <strong>脳 brainer</strong> stands for <strong>No Brainer</strong>. Actually I joined this competition 2 weeks ago.</p> <blockquote> <p>Did you try mixup augmentation ?</p> </blockquote> <p>I would have, but didn't. Did it work well?</p>

```python
# ==============================================================================
# minimize
# ==============================================================================
import numpy as np
from scipy.optimize import minimize
from sklearn.model_selection import KFold
from tqdm import tqdm
# `oof`, `train`, `COLS_TARGET`, `COLS_PRED`, `LOOP`, `NUM_FOLDS` and
# `utils.validate` (the competition metric on the OOF frame) are defined elsewhere.

weights = np.ones(len(COLS_TARGET))

def get_score(weights: np.array, train_idx):
    oof_ = oof.copy()
    for i, c in enumerate(COLS_PRED):
        oof_.loc[train_idx, c] *= weights[i]
    return utils.validate(oof_, False)

weights_list = []
for i in tqdm(range(LOOP)):
    kf = KFold(n_splits=NUM_FOLDS, shuffle=True, random_state=i)
    for fold, (train_idx, valid_idx) in enumerate(kf.split(train)):
        res = minimize(get_score, weights, args=(train_idx,),
                       method="Nelder-Mead", tol=1e-6)
        print("score:", res.fun)
        print("weights:", res.x)
        weights_list.append(res.x)

print(np.array(weights_list).mean(0))
"""
[1.00114876 1.00650625 1.01753426 1.01534408 1.01883692]
"""
```
TReNDS Neuroimaging
30th place solution - and some thoughts
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>30th place solution</h1> <p>First of all thanks to the organizers to putting together this challenging and interesting competition!</p> <p>I've seen many good and well written and well developed solutions on the challenge ... mine is neither. But I just wanted chip in quite late, as I might have used some feature engineering that hasn't been used as widely. Unfortunately I don't have the time right now to dig into it in detail, but will just give you a rough description about what I did.</p> <h2>Reflections</h2> <p>A personal note on how I perceived the challenge. I was really exciting to see this challenge, this should lie in my area of expertise, so why not give it a try. So I did, wrote a for my terms successful EDA notebook, got into the data and somehow got to 13th place. But, as people noted, there was not much going on in terms of discussion etc., so I might have gotten a bit complacent with my position and did not develop much further. Also I might have gotten a bit annoyed, because I couldn't get my errors on <code>domain1_var1</code> and the <code>domain2</code> vars done as much as I liked.</p> <p>Then life happened, vacation, deadlines, an online hackathon. Not much space for the challenge, and suddenly there are 10 days left, I have a deadline coming up, and find myself around place 50th on the leaderboard. So some panic sets in and also some craziness, so I rewrote the analysis, and preprocessing from scratch, run all the different models. Stack them up and well suddenly it's place 34, and 30 on the private leaderboard - my best competition so far.</p> <h2>Preprocessing and Features</h2> <h3>Targets</h3> <ol> <li>I decided to perform stratified cross-validation on my data, so I used <code>KNN(n_clusters=15)</code> on the target variables, setting the missing values to 0, so that this will be taken into account for the stratification approach.</li> <li>Then I imputed the missing data using <code>sklearn.impute.IterativeImputer</code> to get values there. This seemed to be a good balance for me between discarding the data or just filling in a constant. However, I didn't really look into the imputations afterwards...</li> </ol> <h3>Preprocessing and Feature Engineering</h3> <ul> <li><code>loading.csv</code> - Discarded <code>IC_20</code> - Seemed to be very different between train and test (discussions and kernels)</li> <li><code>fnc.csv</code> - Reordering the data for use with <code>nilearn.connectome.vec_to_sym_matrix</code></li> <li>Extracting data from the 4D feature maps. I have a kernel out-there showing how this can be done using different <code>Masker</code> objects using <code>nilearn.input_data</code>. I extracted the sets <code>schaeffer</code> (400 brain regions), <code>msdl</code>, and <code>basc</code> (197 regions) here.</li> <li>Graph-based features: Several publications also focus on using graph-theory based descriptions of functional connectivity as features for machine learning classifiers. So I used the "Brain Connectivity Toolbox", <code>bct.py</code> to derive features. 
These were just a ton of global and local descriptions, also at different thresholdings of the connectivity matrices (derived from <code>fnc</code>). Around 1700 features in total (with sometimes really bad numerical issues), which I somehow hacked away in the preprocessing before my stacking data. This will be named <code>fnc_graph</code>.</li> <li>Combined data, I also created the set <code>loadfnc</code>, combining <code>fnc</code> and <code>loading</code> data, and <code>loadmsdl</code> combining <code>msdl</code> and <code>loading</code>. Finally, I also used a <code>loadnoise</code> set, where I added some random intercept and Gaussian noise to the <code>loading</code> data (differently for each subset). If that helped at all, I couldn't unfortunately test.</li> </ul> <h2>Stacking, stacking, stacking</h2> <p>I stacked tons of models (32 for each feature) using different regression approaches, and sometimes different preprocessing. <code>ConnectivityMeasure</code> (shortened to <code>CM</code>). is a class from <code>nilearn.connectome</code>, that can be used to transform a <code>n x t</code> matrix in to a <code>n x n</code> connectivity matrix, using differnt kinds of connectivity. The nice thing is, it also fits into sklearn-pipelines as a vectorized version of the matrix is possible.</p> <p>| Data Set | Preprocessing SVR, LGBM | Preprocessing Regression | | -------- | ----------------------- | ------------------------ | | <code>basc</code> | CM(tangent), PCA(whiten=True), RobustScaler() | CM(correlation), PCA(whiten=True) | | <code>msdl</code> | CM(tangent), PCA(whiten=True), RobustScaler() | CM(correlation), PCA(whiten=True) | | <code>schaeffer</code> | CM(tangent), PCA(whiten=True), RobustScaler() | CM(correlation), PCA(whiten=True) | | <code>fnc</code> | PCA(whiten=True) | abs(fnc) &lt; 0.15 = 0, PCA(whiten=True) | | <code>fnc_pca</code> | None | abs(fnc) &lt; 0.15 = 0, None | | <code>loading</code> | RobustScaler() | RobustScaler() | | <code>fnc_graph</code> |Numerical fixes, PCA(whiten=True), RobustScaler() | Numerical fixes, PCA(whiten=True), RobustScaler() | | <code>loadmsdl</code> | PCA(whiten=True), RobustScaler() | PCA(whiten=True), RobustScaler() | | <code>loadfnc</code> | RobustScaler() | RoubstScaler() | | <code>loadnoise</code> | RobustScaler() | RobustScaler()|</p> <h3>SVR</h3> <p>I used sklearns <code>SVR</code> using both a linear and a rbf kernel on the datasets <code>basc</code>, <code>msdl</code>, <code>schaeffer</code>, <code>fnc_pca</code>, <code>loading</code>, and <code>fnc_graph</code>. To figure out the best parameters I applied <code>skopt.BayesianSearchCV</code> with 35 iterations (objective mean absolute error). Parameters optimized where <code>C</code>, <code>epsilon</code> and <code>n_components</code> for PCAs.</p> <p>So here are 2 x 6 = 12 models.</p> <h3>Regression</h3> <p>In this competition I somehow came to like the <code>LassoLars</code> regression of sklearn. So that's what I am using here. The feature selection of it seemed to help actually. Running models on <code>basc</code>, <code>msdl</code>, <code>schaeffer</code>, <code>fnc</code>, <code>loading</code>, <code>fnc_graph</code>, <code>loadmsdl</code>, <code>loadfnc</code>, and <code>loadnoise</code>. This time optimizing mean squared error using <code>BayesianSearchCV</code> for <code>alpha</code> and <code>n_components</code>.</p> <p>So 9 models.</p> <h3>LightGBM</h3> <p>Same datasets as for <code>Regression</code>. 
Optimizing tree parameters and PCA, best model defined by mean absolute error.</p> <p>Another 9 models.</p> <h3>2D CNN</h3> <p>I also tried to get some more spatial information into the model as well so I set up a small 2D CNN having:</p> <ul> <li>Conv2D Layer, with ReLU activation</li> <li>Maxpooling (2, 2)</li> <li>Flatten</li> <li>Dropout()</li> <li>Dense(1)</li> </ul> <p>Where the number of <code>filters</code>, <code>kernel_size</code>, <code>Dropout</code>, <code>learning_rate</code>, and the <code>loss</code> (mae, mse), where found through <code>BayesianSearchCV</code>.</p> <h2>Training and Prediction</h2> <p>I used the same approach for all models and the final stacking model:</p> <ol> <li>Optimize hyperparameters on 5-Folds.</li> <li>Retrain model on CV-Data</li> <li>Evaluate on hold-out set</li> <li>Retrain on all data</li> <li>Predict on test set.</li> </ol> <p>The final stacking model was again a <code>LassoLars</code> regression, on the outputs of the 31 models. I actually preprocessed the predictions, by slapping a <code>RobustScaler</code> in just for good measure.</p> <h2>Thoughts</h2> <p>I learned quite a lot from the competition, but have to say that I am not really satisfied with what I did (my best competition so far...), and see a lot of room for improvement.</p> <h3>Work smarter</h3> <p>I think the most annoying part for me is, that I just stacked tons of models. In the end not even thinking much about <em>why</em> I am doing it. I just wanted to get that 0.001 Leaderboard boost, to get a little edge. But, if I had invested my time more into careful tuning, preprocessing, and careful model selection, I think I would have gotten more out in the last weekend of the competition, than I did here. In the end, I was mostly waiting for models to finish running and to start the next set of long calculations.</p> <h3>Evaluate, evaluate, evaluate</h3> <p>So far, my intuition on evaluating locally and avoiding overfitting got me quite far (I got my first silver medal basically because of an incredibly heavy shake up of the leaderboard, pushing me a couple of 100 places or so to the top). Here I think I was actually quite lucky - see the forum posts where people discuss about the lack of a shake up.</p> <h3>Get a team</h3> <p>Next time I am in this situation, I think I will team up. Also I apologize to the people who contacted me and I didn't get back to. Here it was mostly bad timing, but I think there is some much to gain in terms of insights, when you can discuss your solutions :)</p> <p>And of course much more. </p>
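A rough reconstruction of one row of the preprocessing table above (the SVR branch on the time-series sets): tangent-space ConnectivityMeasure, whitened PCA, RobustScaler and an SVR chained in a sklearn Pipeline. The hyper-parameters are placeholders, not the tuned values found with the Bayesian search.

```python
from nilearn.connectome import ConnectivityMeasure
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.preprocessing import RobustScaler
from sklearn.svm import SVR

pipe = Pipeline([
    # vectorize=True returns one flat connectivity vector per subject,
    # so the step slots into a regular sklearn pipeline
    ("cm", ConnectivityMeasure(kind="tangent", vectorize=True)),
    ("pca", PCA(n_components=100, whiten=True)),
    ("scale", RobustScaler()),
    ("svr", SVR(kernel="rbf", C=10.0, epsilon=0.1)),
])

# X is a list of per-subject (n_samples, n_regions) signal matrices, e.g. the
# msdl / basc / schaefer extractions, where the "samples" are the 53 maps:
# pipe.fit(X_train, y_train); preds = pipe.predict(X_test)
```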
TReNDS Neuroimaging
35th Place Solution- Magic PP - Site 2 correction
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Updated Scores:</p> <p>Submission assuming all site 1: 0.15902 (Private) 0.15888(Public) Submission assuming all site 2: 0.15881 (Private) 0.15860(Public) Submission with probabilities: 0.15875 (Private) 0.15840(Public)</p> <p>Note: The additional boost in submission assuming all site 2 is due to known site 2 samples.</p> <p>Conclusion: 1. Not all the known site 2 samples were in public test. (Boost in Private: 0.00021 | Boost in Public: 0.00028) 2. There were more unknown site 2 samples in public test than private test. (The boost decreased from 0.0002 to 0.0006)</p> <p><strong>Part 1</strong> First of all, Congratulations to my teammate <a href="/prateekagnihotri">@prateekagnihotri</a> for his first silver medal and a lot of thanks for coming up with excellent stacking models. Congratulations to all the teams that did good on private lb. Also, credits should be given to <a href="/david1013">@david1013</a> for the high scoring public kernel that was used in final blend. The best part was we did all these in the last 10 days, that too with just tabular data. We wanted to explore fMRI, but due to limited time, we couldn't go there. 1. <strong>0.0003 public lb loost - Preprocessing for Known site 2 samples</strong> We tried to perform a gaussian fit on IC and fnc features for both site1 and site2.The std was nearly same, so, it looked like we can approximate the site 1 features by just adding mean shift to site 2 features. <code> Mean Shift = Mean(Site 1 samples) - Mean(Site 2 samples) Generated Site 1 sample = Site 2 sample + Mean Shift </code> And then the prediction for known site 2 samples was done using the generated site 1 sample.</p> <p>2.<strong>Calculating No of site 2 samples</strong> We simply solved this equation for all the features that were showing significant shift and plot the histogram with x values obtained from all features: <code> Mean(Unknown Samples) = x * Mean(Site1 samples) + (1-x) * Mean(Site2 samples) Here, x is the percentage of site 1 samples in the unknown dataset. </code> It was clear that 20-40% of the dataset belongs to site 2. </p> <p>3.<strong>0.0002 public lb loost - Postprocessing for Unknown site 2 samples</strong> Our site classifier was ensemble of: - LightGBM on all features - Stacked Ensemble on all features with generated samples. The additional site 2 samples were generated from known site 1 using mean shifts. - Bayesian Model on top 30 features. </p> <p>The individual probabilities were on different scale, specially for model 1, due to class imbalance. So, Ensemble was done by ranking the samples according to their probabilities and then taking the weighted mean of all the ranks. Top 500 ranks were assigned probability 1. Next 500 were assigned 0.9. Next 0.75. Similarly at the bottom, samples were assigned 0, 0.1, 0.25. All the rest samples were assigned probability 0.5</p> <p>On private set, the last one actually worked better than blend but we didn't select that. 
</p> <p>With the probabilities, the final predictions were calculated as: <code>Prediction = p1 * site1_pred + p2 * site2_pred</code>, where site1_pred was the actual prediction of the model and site2_pred was the prediction made assuming the sample comes from site 2; it was done in a manner similar to (1).</p> <p>4. <strong>Models</strong> Blend ensemble of 1. Public LB with PP 2. 3-level stacking (43&gt;7&gt;1) 3. 3-level stacking (29&gt;5&gt;1)</p> <p>Models used in stacks 2 &amp; 3: Ridge, Bayesian Ridge, Bagging Regressor, SVR, NuSVR, Kernel Ridge, Neural Networks, Lasso, E-Net, GLM</p> <p>All three scored 0.1585 on the public LB; the blend ensemble gives <strong>0.1584</strong>.</p> <p>I am new to writing detailed solutions, so pardon me for any mistakes!!</p> <p><em>More details about the stacking model and some more tricks that worked will be posted soon.</em></p> <p><strong>Full detailed solution with code will be uploaded soon</strong></p>
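A minimal sketch of the mean-shift correction from point 1 above: known (or suspected) site 2 rows are shifted towards the site 1 feature distribution before prediction. Column selection and the indexing details are assumptions.

```python
import pandas as pd

def shift_site2_to_site1(features: pd.DataFrame, site1_idx, site2_idx, cols):
    shifted = features.copy()
    for col in cols:
        # Mean Shift = Mean(site 1 samples) - Mean(site 2 samples)
        mean_shift = features.loc[site1_idx, col].mean() - features.loc[site2_idx, col].mean()
        # Generated "site 1" sample = site 2 sample + mean shift
        shifted.loc[site2_idx, col] = features.loc[site2_idx, col] + mean_shift
    return shifted
```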
TReNDS Neuroimaging
36th Place Solution (with code)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Yes, you are right, but unfortunately we come up with this on last day of competition with only 2 submissions left. We didn't have much submissions left to experiment.</p> <p>Hello everyone, first of all i would like to thank my great teammates <a href="/joatom">@joatom</a>, @mks2192and <a href="/kpriyanshu256">@kpriyanshu256</a> for such an interesting competition journey. And congratulations to all the winners!</p> <p>It's been a great competition, and my team has spend a lot of time in this competition and finally glad to share that all the hard work paid off.</p> <h2>Our Approach</h2> <p>Our final solution is an ensemble of following models or submissions: * <a href="https://www.kaggle.com/joatom/trends-tabular-nn-0-159">simple tabular nn</a> as mentioned by <a href="/joatom">@joatom</a> in the comments below. * GCN-Tabular Learner by <a href="/kpriyanshu256">@kpriyanshu256</a> * <a href="https://www.kaggle.com/rohitsingh9990/trends-pycaret-training-inference">TReNDS - PyCaret (Training + Inference)</a> by <a href="/rohitsingh9990">@rohitsingh9990</a> * <a href="https://www.kaggle.com/aerdem4/rapids-svm-on-trends-neuroimaging">RAPIDS SVM on TReNDS Neuroimaging</a> by <a href="/aerdem4">@aerdem4</a> * and a bunch of other models trained on tabular data only.</p> <p>Both <a href="https://www.kaggle.com/aerdem4/rapids-svm-on-trends-neuroimaging">RAPIDS SVM</a>and <a href="https://www.kaggle.com/rohitsingh9990/trends-pycaret-training-inference">TReNDS - PyCaret</a> notebooks are already publicly available, there is nothing much to explain.</p> <h3>Brief Overview of <code>simple tabular nn</code>:</h3> <ul> <li>A detailed explanation of this approach is posted by <a href="/joatom">@joatom</a> in the comments below. <blockquote> <p>Note: This is the most important approach which alone when ensembled with public RAPIDS SVR kernel can easily give us a silver medal.</p></blockquote></li> </ul><p></p> <h3>Brief overview of our <code>TReNDS GCN-Tabular Learner</code>:</h3> <ul> <li>The scenario of using a GCN appears if we treat the correlation values between the fnc entities (eg. SCN(53) ) as edge weights. </li> <li>A little EDA showed that there are ((53*52)/2) columns (excluding the Id) i.e. a completely connected graph and there are in total 53 unique entities. There are also 53 spatial maps for each patient. </li> <li>We assume <code>ICN_numbers.csv</code> gives a mapping of each entity to the corresponding map. Hence, we create a graph of 53 nodes for each patient. To create features for the nodes, we rely on domain specific knowledge ( <a href="https://www.kaggle.com/kpriyanshu256/trends-image-features-53-100">https://www.kaggle.com/kpriyanshu256/trends-image-features-53-100</a> ). 
</li> <li>We extracted some features from each map using nilearn ( <a href="https://nilearn.github.io/modules/generated/nilearn.datasets.fetch_atlas_schaefer_2018.html#nilearn.datasets.fetch_atlas_schaefer_2018">https://nilearn.github.io/modules/generated/nilearn.datasets.fetch_atlas_schaefer_2018.html#nilearn.datasets.fetch_atlas_schaefer_2018</a> and <a href="https://nilearn.github.io/modules/generated/nilearn.datasets.fetch_atlas_talairach.html#nilearn.datasets.fetch_atlas_talairach">https://nilearn.github.io/modules/generated/nilearn.datasets.fetch_atlas_talairach.html#nilearn.datasets.fetch_atlas_talairach</a> ). </li> <li>We have used this for the GCN. The features from the GCN are collected and combined with the <code>fnc.csv</code> and <code>loading.csv</code> features. The combined features are passed through dense layers to get the output. </li> <li>Notebook : <a href="https://www.kaggle.com/kpriyanshu256/trends-gcn-tab-oof/">https://www.kaggle.com/kpriyanshu256/trends-gcn-tab-oof/</a> (edited) kaggle.comkaggle.com</li> </ul> <h3>Post-Process</h3> <p>At last after ensembling, we did a little post-processing that we came up on last day of this competition, which gives us a boost of approx 0.002</p> <p><code> Before transformation, the age in the training set is rounded to nearest year for privacy reasons. However, age is not rounded to year (higher precision) in the test set. Thus, heavily overfitting to the training set age will very likely have a negative impact on your submissions. </code> This is the information available on the data page, and we were wondering how we can use this sort of information to get best out of it. so, what we did is to try multiplying <code>age</code> we get from our ensembled models by a factor of <code>1.01</code></p> <p>Unfortunately we try this on last day of competition and we didn't have any submissions left to improve it further, i wish we might have tried it earlier.</p> <p>Link to our final submission kernel <a href="https://www.kaggle.com/rohitsingh9990/36th-place-trends-ensemble?scriptVersionId=37747366">https://www.kaggle.com/rohitsingh9990/36th-place-trends-ensemble?scriptVersionId=37747366</a></p>
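A small sketch of how the (53*52)/2 fnc correlations can be arranged into a dense 53x53 edge-weight matrix for the per-patient graph described above. It assumes the fnc columns have already been reordered so that the upper-triangle order matches the ICN node order from <code>ICN_numbers.csv</code>.

```python
import numpy as np

def fnc_row_to_adjacency(fnc_row, n_nodes=53):
    adj = np.zeros((n_nodes, n_nodes), dtype=np.float32)
    iu = np.triu_indices(n_nodes, k=1)   # 1378 = 53*52/2 upper-triangle slots
    adj[iu] = fnc_row                    # fill upper triangle with correlations
    adj = adj + adj.T                    # symmetric -> fully connected weighted graph
    np.fill_diagonal(adj, 1.0)           # self-loops with weight 1
    return adj
```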
TReNDS Neuroimaging
3rd Place Solution (+ github code)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks!</p> <p>First of all, I want to thank hosts for organizing this interesting competition. This competition was a lot of fun because we got to try many techniques! And I'm so happy with my first solo gold!</p> <h1>Solution overview</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1227363%2F881ec03b0fff4b96bbb71a2521d174f2%2Fsolution%20(2" alt="">.png?generation=1593524751624771&amp;alt=media)</p> <h1>Solution Details</h1> <p>I think that the point of this competition is stacking and diversity of models. So, I built many models as shown below.</p> <h3>1. Simple models by table feature.</h3> <p>First, I trained simple models by fnc and loding feature. - SVM (rbf kernel and linear kernel) - NuSVM - KNN - Ridge - BaggingRegressor(Ridge)</p> <h3>2. NN models by fMRI data.</h3> <h3>2-1. 3D voxel</h3> <p>I applied sample wise and component wise normalization.</p> <ul> <li>ResNet18 replaced by 3D modules.</li> <li>Simple 3dConvNet (smaller parameter than ResNet18) <ul><li>trained by all label</li> <li>trained by age, domain1_var1 and domain1_var2</li></ul></li> <li>(2 + 1)D CNN (suggested in <a href="https://openaccess.thecvf.com/content_cvpr_2018/papers/Tran_A_Closer_Look_CVPR_2018_paper.pdf">A Closer Look at Spatiotemporal Convolutions for Action Recognition</a>) <ul><li>better than ResNet18</li></ul></li> </ul> <h3>2-2. masked data</h3> <p><a href="https://nilearn.github.io/modules/generated/nilearn.input_data.NiftiLabelsMasker.html#nilearn.input_data.NiftiLabelsMasker">Masker of Nilearn</a> can extract signals from brain parcellation. Then, I used <a href="https://nilearn.github.io/modules/generated/nilearn.datasets.fetch_atlas_schaefer_2018.html#nilearn.datasets.fetch_atlas_schaefer_2018">schaefer_2018</a> parcellation and extracted 400 rois from fMRI. (reduce shape [53, 52, 64, 53] to [53, 400]) Finally, I built some models by this [53, 400] signals.</p> <ul> <li>CNN models (input channel is 400) <ul><li>1D ResNet18</li> <li>1D densenet121</li> <li>1D ResNest 14d</li></ul></li> <li>Transformer <ul><li>No PositionalEncoding</li> <li>Self Attention of components by components (feature axis is 400)</li></ul></li> <li>GIN (Graph Isomorphism Network) <ul><li>Impremented by <a href="https://www.dgl.ai/">Deep Graph Library</a></li> <li>I used the network architecture like in [this paper] (<a href="https://arxiv.org/pdf/2001.03690.pdf"></a><a href="https://arxiv.org/pdf/2001.03690.pdf">https://arxiv.org/pdf/2001.03690.pdf</a>).</li></ul></li> </ul> <h3>3. LGBM and XGBoost with table features, NN features and voxel statistical features.</h3> <ul> <li>I computed voxel statistical feature (mean, max, kurt, skew) of each components. <ul><li>I used NiftiMasker for extracting non-zero area.</li></ul></li> <li>LGBM(XGBoost) model with hidden feature of (2 + 1)D CNN</li> <li>LGBM(XGBoost) model with hidden feature of Simple 3dConvNet</li> <li>LGBM(XGBoost) model with hidden feature of GIN</li> </ul> <p>I selected 1024 feature of each models by feature importance and reduced the number of leaves to avoid over-fitting (n_leaves = 2).</p> <h3>4. 
Stacking</h3> <p>I looked at the distribution of predictions in the training and test data and removed predictions where the discrepancy was too large. I also made interaction features between domains (sum, abs(diff), multiply). After this preprocessing, I used Linear SVM and LGBM for stacking and finally blended the predictions.</p> <h1>Some Insights</h1> <ul> <li>A 3DCNN with table features had a better CV, but this model was useless when stacking.</li> <li>Removing noisy predictions played an important role in the shake-up.</li> <li>The MONAI framework is too slow. <a href="https://github.com/PhoenixDL/rising">Rising</a> is faster and I used it. But I don't know if it helped to boost the score...</li> </ul> <h1>What didn't work</h1> <ul> <li>Pseudo labels</li> <li>Ridge stacking, XGBoost stacking, second-level stacking</li> <li>GNN with fnc attention</li> <li>Selecting fewer than 1024 features for LGBM and XGBoost</li> </ul> <p>[Updated] I uploaded the code to <a href="https://github.com/shimacos37/kaggle-trends-3rd-place-solution">https://github.com/shimacos37/kaggle-trends-3rd-place-solution</a> with a Dockerfile. This code uses several tools. 1. I used <a href="https://github.com/PyTorchLightning/pytorch-lightning">pytorch-lightning</a> to wrap the training and evaluation code and <a href="https://github.com/facebookresearch/hydra">hydra</a> to manage parameters. 2. I used <a href="https://www.wandb.com/">Weights &amp; Biases</a> to manage experiments and GCS to save results. Weights &amp; Biases works better for me than the others (tensorboard, mlflow, etc ...)</p> <p>I thought that this competition was similar to the Otto competition. So I hypothesized that stacking might work, tried it, and luckily it worked. What this shows is that accumulating knowledge of past competitions can help us formulate appropriate hypotheses in the early stages of a competition. Of course, sometimes the hypotheses don't work, but I believe the time spent on trial and error will also help in the future!</p> <p><a href="/iprapas">@iprapas</a> I performed the following operations with the predictions of each model.</p>

```python
from itertools import combinations

import numpy as np
import pandas as pd


def make_domain_feature(df: pd.DataFrame, mode: str):
    domain_cols = ["domain1_var1", "domain1_var2", "domain2_var1", "domain2_var2"]
    feat_dict = {}
    for col1, col2 in combinations(domain_cols, 2):
        if f"{col1}_pred" in df.columns and f"{col2}_pred" in df.columns:
            feat_dict[f"{col1}_{col2}_sum"] = df[f"{col1}_pred"] + df[f"{col2}_pred"]
            feat_dict[f"{col1}_{col2}_diff"] = np.abs(df[f"{col1}_pred"] - df[f"{col2}_pred"])
            feat_dict[f"{col1}_{col2}_multiply"] = df[f"{col1}_pred"] * df[f"{col2}_pred"]
    return feat_dict
```
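A rough sketch of the ROI extraction described in section 2-2 above: each subject's (53, x, y, z) spatial-map tensor is reduced to a (53, 400) signal matrix with the Schaefer 2018 atlas. The reference image and the conversion to a NIfTI volume are assumptions (e.g. the provided <code>fMRI_mask.nii</code> could serve as the reference for affine/shape).

```python
import numpy as np
from nilearn import datasets, image, input_data

atlas = datasets.fetch_atlas_schaefer_2018(n_rois=400)
masker = input_data.NiftiLabelsMasker(labels_img=atlas.maps, standardize=False)

def extract_rois(subject_maps, reference_img):
    # subject_maps: (53, x, y, z) array; reference_img provides affine/shape
    img_4d = image.new_img_like(reference_img, np.moveaxis(subject_maps, 0, -1))
    return masker.fit_transform(img_4d)   # -> (53, 400) signals, one row per map
```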
TReNDS Neuroimaging
47th Place - Site 2 Correction
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This competition is different than real life. In real life you could collect training data from both site 1 and site 2. So you would know precisely which site is used and you would know the target distributions. Then you could analyze and identify the bias accurately.</p> <p>In this comp, we can only assume that site 1 target mean equals site 2 target mean (supported by the evidence that modifying targets improves LB). And we can only approximately classify scans as site 1 and site 2.</p> <p>Thanks for the great plot. I tried both preprocess and post process. They both improve LB by 0.0005. And both methods make the site 2 targets' mean equal to site 1 targets' mean. (I was not aware of this quadratic shift relationship, that's cool).</p> <p>I chose to use post process because it was easier to experiment. Trying a different post process took 1 second as opposed to trying a different preprocess and running all my models for hours. Also using post process allowed me to easily add public notebook submission CSVs to my ensemble.</p> <p>The best public notebook <a href="https://www.kaggle.com/david1013/trends-multi-layer-model" target="_blank">here</a> scores LB 0.1590. If you analyze the predictions, you will see that it underestimates target <code>age</code> and underestimates target <code>domain2_var1</code> for site 2 brain scans. If you adjust these 2 targets, you increase it's LB score to LB 0.1585. That's a 0.0005 LB Boost !</p> <h1>Classify Brain Scans as Site 2</h1> <p>Using an ensemble of logistic regression with L2 regularization and RAPIDS cuML support vector classification, you can build a site 2 classifier with AUC = 0.80. Below is the classifier's probability distribution. There are 3507 test samples with <code>prob = 0</code> of being site 2. There are 800 test samples with <code>0 &lt; prob &lt;= 0.5</code> and 508 test samples with <code>0.5 &lt; prob &lt; 1</code> and 1062 with <code>prob = 1</code>. (The 1062 includes the known 510).</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2F23bdd9c9160467874ee0c021ed518740%2FScreen%20Shot%202020-06-29%20at%206.57.37%20PM.png?generation=1593482276909658&amp;alt=media" alt=""></p> <h1>Determine Target Bias of Site 2</h1> <p>Next if you plot the targets versus site 2 probability, we see that the targets <code>age</code> and <code>domain2_var1</code> decrease as the probability of site 2 brain scans increase. If we <strong>assume</strong> that site 1 and site 2 have the same <code>age</code> and <code>domain2_var1</code> mean then this is a statistically significant model bias based on site feature drift. </p> <p>(For samples of size 1000, the sample mean's standard deviation of target is <code>0.3 = OOF std / sqrt(1000)</code> therefore our observation is not random chance. 
)</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2Fa39fa26954aff4aa77bdf7f9fa867f36%2FScreen%20Shot%202020-06-29%20at%206.54.42%20PM.png?generation=1593482098625573&amp;alt=media" alt=""></p> <h1>Post Process Correction - Boost LB 0.00050!</h1> <p>From the plot we observe that this biased model needs to add <code>1.9</code> to the test target <code>age</code> for site 2 brain scans with <code>prob = 1</code> and needs to add <code>0.67</code> to the test target <code>domain2_var1</code> for site 2 brain scans. Furthermore we can adjust <code>0.5 &lt; prob &lt; 1</code>, <code>0 &lt; prob &lt;= 0.5</code>, and <code>prob = 0</code> based on the plot. </p> <h3>1590 Public Notebook With PP</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2F83004b4000cf0d3c5a899cc13a4c5f00%2FScreen%20Shot%202020-06-29%20at%207.13.37%20PM.png?generation=1593483311555960&amp;alt=media" alt=""></p> <h3>1590 Public Notebook Without PP</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2F89139b7bc495ae25f565a1c8c281fbb3%2FScreen%20Shot%202020-06-29%20at%207.18.46%20PM.png?generation=1593483542944485&amp;alt=media" alt=""></p> <h1>Private Leaderboard</h1> <p>I was hoping that private leaderboard would have more site 2 data. Then this trick would increase private LB more than public LB and I would have climbed rank on private leaderboard :-) Instead this PP only increased private LB by +0.00025 and I dropped rank on private LB :-(</p> <p><strong>UPDATE</strong>: Two simple experiments suggest that private test has 510 site 2 brain scans and public test has 1020 site 2 brain scans. The experiments are described <a href="https://www.kaggle.com/c/trends-assessment-prediction/discussion/162907" target="_blank">here</a></p> <h1>Final Models</h1> <p>My final model was an ensemble of these 4 great notebooks:<br> <a href="https://www.kaggle.com/david1013/trends-multi-layer-model" target="_blank">trends-multi-layer-model</a> by NQ <a href="https://www.kaggle.com/david1013" target="_blank">@david1013</a><br> <a href="https://www.kaggle.com/andypenrose/baggingregressor-rapids-ensemble" target="_blank">BaggingRegressor + RAPIDS Ensemble</a> by Andy <a href="https://www.kaggle.com/andypenrose" target="_blank">@andypenrose</a><br> <a href="https://www.kaggle.com/ttahara/trends-simple-nn-baseline" target="_blank">TReNDS:Simple NN Baseline</a> by Tawara <a href="https://www.kaggle.com/ttahara" target="_blank">@ttahara</a><br> <a href="https://www.kaggle.com/hemavivekanandan/trends-eda-dnn-for-predicting-age" target="_blank">TRends: EDA,DNN for Predicting Age</a> by Hema <a href="https://www.kaggle.com/hemavivekanandan" target="_blank">@hemavivekanandan</a> </p> <p>Ensemble achieved LB 0.1588 with weights <code>0.60 * model_1 + 0.20 * model_2 + 0.15 * model_3 + 0.05 * model_4</code> and then PP increased this to LB 0.1583.</p> <p>Nice. I'm reading solutions and it seems a bunch of teams in the top 50 used this trick to their advantage.</p> <p>Thanks Yasin. Yes, i was hoping that the private test would have more site 2 and that this trick would help me climb private leaderboard. (For example +0.00050 on public and +0.00100 on private! but instead it was +0.00025 on private so i went down a dozen ranks instead of up a dozen.)</p> <p>Our PP gains are similar. Mine is 0.0005 and yours is 0.0004 (neither is 6x the other). 
My private LB gain is 0.00025, so it worked on private too but not as much as public.</p> <p>If it worked the same or better on private then we would have stayed in the same leaderboard position or climbed LB. Bummer that it didn't work in our favor.</p> <p>Two simple experiments indicate that there are probably 1020 site 2 brain scans in public test and 510 site 2 brain scans in private test. Furthermore the given 510 site 2 brain scans are all in public test. The experiments to determine this are described in the discussion linked <a href="https://www.kaggle.com/c/trends-assessment-prediction/discussion/162907">here</a></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2Fc9186c5d4609f2bcf62f97fd4e9a0f8c%2FScreen%20Shot%202020-06-30%20at%204.49.05%20AM.png?generation=1593518703221747&amp;alt=media" alt=""></p>
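A sketch of the age post-processing described above, applied per site-2 probability bin. The +1.9 offset for prob = 1 comes from the post; the offsets for the other bins are illustrative and would be read off the same plot.

```python
import numpy as np

def correct_age(age_pred, site2_prob, offsets=(1.9, 1.0, 0.4)):
    age_pred = np.asarray(age_pred, dtype=float).copy()
    age_pred[site2_prob == 1.0] += offsets[0]                        # certain site 2
    age_pred[(site2_prob > 0.5) & (site2_prob < 1.0)] += offsets[1]  # likely site 2
    age_pred[(site2_prob > 0.0) & (site2_prob <= 0.5)] += offsets[2] # possible site 2
    return age_pred
```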
TReNDS Neuroimaging
54th Private & 44th Public Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to all the kagglers who participated in this competition!</p> <p>This one feels quite pleasant for me and my teammate since it has brought us to the Competitions Expert tier with a good ranking! :)</p> <p>We've started out with RAPIDS-based blending of classical ML models inspired by <a href="https://www.kaggle.com/tunguz/rapids-ensemble-for-trends-neuroimaging">this notebook</a>. We were trying to pick coefficients via <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.lsq_linear.html">SciPy linear least-squares bounded problem solver</a>, but we have faced the overfitting bound soon enough to give up this technique.</p> <p>The game-changer for us was <a href="https://www.kaggle.com/david1013/trends-multi-layer-model">Multi-Layer model</a> generously shared by @david1013. We started with blending 60 output .csv's with different seeds. An important thing is that our validation has shown that this ensemble underestimates most of the values, especially <strong>age</strong>. So we tried out an element-wise maximum of 60 predictions, and it was much better w.r.t. both the CV and public LB.</p> <p>However, <strong>age</strong> value remained lower than expected. Running out of time, we tried to pick a good multiplier for it as a part of post-processing. The best we have tried was <strong>1.005</strong>, so we've stopped on this predictor.</p> <p>Thanks!</p>
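A tiny sketch of the described blend, assuming the usual Id/Predicted submission format and hypothetical file names: an element-wise maximum over the per-seed submissions, followed by the multiplicative age correction.

```python
import numpy as np
import pandas as pd

# hypothetical file names for the 60 seed submissions
preds = [pd.read_csv(f"multi_layer_seed_{s}.csv") for s in range(60)]

blend = preds[0].copy()
# element-wise maximum across the 60 predictions for every Id/target row
blend["Predicted"] = np.max(np.stack([p["Predicted"].values for p in preds]), axis=0)
# nudge the under-estimated age up by the chosen multiplier
blend.loc[blend["Id"].str.endswith("_age"), "Predicted"] *= 1.005
blend.to_csv("submission.csv", index=False)
```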
TReNDS Neuroimaging
5th place solution, 3D fMRI part
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats on 1st place!!</p> <ul> <li>I didn't use pretrained weights. (I tried <a href="https://colab.research.google.com/drive/10-JE1OOqM2LziPk7BgAOJcpal0F1_aEB?usp=sharing">this one</a>, but it didn't work)</li> <li>No augmentation</li> <li>batch size: 16~32</li> <li>10 epochs / fold</li> <li>lr: 2e-3 -&gt; 5e-4 (cosine annealing)</li> <li>optimizer: RAdam</li> </ul> <p>I added the SE module myself, based on: <a href="https://github.com/moskomule/senet.pytorch/blob/master/senet/se_resnet.py">https://github.com/moskomule/senet.pytorch/blob/master/senet/se_resnet.py</a></p>

```python
import torch.nn as nn

# conv3x3x3 comes from the 3D-ResNets repository referenced in the main post

class SELayer(nn.Module):
    def __init__(self, channel, reduction=16):
        super(SELayer, self).__init__()
        self.avg_pool = nn.AdaptiveAvgPool3d(1)
        self.fc = nn.Sequential(
            nn.Linear(channel, channel // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channel // reduction, channel, bias=False),
            nn.Sigmoid()
        )

    def forward(self, x):
        b, c, _, _, _ = x.size()
        y = self.avg_pool(x).view(b, c)
        y = self.fc(y).view(b, c, 1, 1, 1)
        return x * y.expand_as(x)


class BasicBlock(nn.Module):
    expansion = 1

    def __init__(self, in_planes, planes, stride=1, downsample=None, reduction=16):
        super().__init__()
        self.conv1 = conv3x3x3(in_planes, planes, stride)
        self.bn1 = nn.BatchNorm3d(planes)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = conv3x3x3(planes, planes)
        self.bn2 = nn.BatchNorm3d(planes)
        self.downsample = downsample
        self.stride = stride
        self.se = SELayer(planes, reduction=16)

    def forward(self, x):
        residual = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)
        # se
        out = self.se(out)

        if self.downsample is not None:
            residual = self.downsample(x)

        out += residual
        out = self.relu(out)

        return out
```

<p>Thanks!</p> <p>Thanks! I didn't use a pretrained model, because the number of channels is different. I also tried extracting 512-dim features from a locally trained model and training SVM and Ridge on them together with the fnc and loading data, but it didn't work. Training the 3D CNN with the 3D fMRI plus fnc and loading data was not good either.</p> <p>In the case of a 2D image task, the higher the resolution of the input image, the better the performance of the model is likely to be. Even if the original image size is low, resizing to a higher resolution may improve scores. So, I resized the images. As a result, I got better models. Also, the models' predictions were quite different even when the backbone was the same. This helped when stacking.</p> <p>First of all, many thanks to Kaggle and the hosts for hosting such an interesting competition, and congratulations to all the winners. And a special thanks to all my teammates for teaming up with me!! I arrived at this solution with a lot of advice from my teammates @masatomatsui @kurupical @takoihiraokazu .</p> <p>(Let me apologize for my poor English. 
)</p> <h1>Summary of models.</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1057275%2Fa3c45d38674d125c37bdb111ec64bf51%2F3D_fMRI_summary%20(1" alt="">.png?generation=1593476631685161&amp;alt=media)</p> <p>ResNet, SEResNet based on: <a href="https://github.com/kenshohara/3D-ResNets-PyTorch/tree/540a0ea1abaee379fa3651d4d5afbd2d667a1f49">https://github.com/kenshohara/3D-ResNets-PyTorch/tree/540a0ea1abaee379fa3651d4d5afbd2d667a1f49</a> DenseNet: <a href="https://github.com/Project-MONAI/MONAI">https://github.com/Project-MONAI/MONAI</a> Other tips: <a href="https://www.kaggle.com/c/trends-assessment-prediction/discussion/147797">https://www.kaggle.com/c/trends-assessment-prediction/discussion/147797</a> (Thanks, @shentao !!)</p> <h1>About Scores</h1> <p>All of 3D models are not so good about CV/LB score. Most of the models are around 0.171 ~ 0.175 CV scores, even the best model can only gets CV 0.17074 (SEResNet18 with resize, float 32). However, ensembling these models greatly improved the scores. In fact, weighted average of single 3D CNN model(ResNet18, float 32, no normalization) (LB 0.1675) and kernel prediction of <a href="https://www.kaggle.com/tunguz/rapids-ensemble-for-trends-neuroimaging">“RAPIDS Ensemble for TReNDS Neuroimaging”</a> (LB 0.1595) at 0.2 : 0.8 gots LB 0.1582. In the end, stacking our various models greatly improved the LB score. </p> <h1>About 3D Data</h1> <p>In the early part of the competition, I saved 3D data in float 16 format (light blue model). This is because it made the learning process so much faster and allowed me to train a lot of models. Training one model (5 folds, 10 epochs each) took about 5 ~ 10 hours to complete. In case of float 32, it took 2 ~ 3 times longer. However, it would be more accurate to train in float 32 format, so important red parts were trained with float 32 format.</p> <h1>About Resize</h1> <p>To create a variety of model prediction, I resized 3D data from (50, 63, 53) to (75, 94, 79). This contributed to improved CV scores and ensembles.</p> <h1>About Site Normalization</h1> <p>There were two types of data in the test dataset: site 2 and not revealed(site 1 or site 2). Teammates figured out that, by moving the distribution of site 2 closer to that of site 1 (this is “shift” in the figure), models using loading and fnc gots better LB scores (for example, best kernel 0.1590 -&gt; 0.1586). To achieve something similar in the 3D CNN models, I normalized 3D data by mean, std of each site. Site 1 data by mean, std of site 1, site 2 by those of site 2. One of my teammates made a model to predict whether the data is site 1 or 2 (Accuracy over 0.94), and I used his predictions to normalize the not reveal data in the same way. This “Site Normalization” did not boost CV, but improved LB scores.</p> <blockquote> <p>DenseNet, float 16, no resize No normalization: CV 0.1722 -&gt; LB 0.1684 Site normalization: CV 0.1742 -&gt; LB 0.1680</p> </blockquote>
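A minimal sketch of the "site normalization" described above: each 3D volume is standardised with the mean/std of its own known or predicted site. The statistics dictionary is an assumption about how those per-site values are stored.

```python
def site_normalize(volume, site, site_stats):
    # site_stats = {1: (mean1, std1), 2: (mean2, std2)}, computed over each site's scans;
    # unknown test scans use the site predicted by the site classifier
    mean, std = site_stats[site]
    return (volume - mean) / (std + 1e-8)
```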
TReNDS Neuroimaging
6th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>Amazing team for professional growth, victories and interesting communication:</strong> Sergey Bryansky <a href="/sggpls">@sggpls</a>, Andrei Lupu <a href="/andreils">@andreils</a>, Möbius <a href="/arashnic">@arashnic</a>, Yury Bolkonsky <a href="/bolkonsky">@bolkonsky</a> and Kirill Shipitsyn <a href="/kirillshipitsyn">@kirillshipitsyn</a> 👍 😃 </p> <p><strong>Thanks to all participants and organizers for such an interesting competition!</strong></p> <p><strong>FE:</strong> we converted the 4D images to 3D and trained a 3D CNN autoencoder for each spatial map ((1, 1008*53) for each case), using PCA with different kernels to compress the data to (1, 2048) for each.</p> <p>We used these features + the original features + site classifier features for training our pipeline. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1192776%2F16850a2fdcc31314682948022d477deb%2Fscheme.PNG?generation=1593498901941299&amp;alt=media" alt=""></p> <p>We also used a target binning technique and feature selection, which improved our final score <strong>(0.15663 public LB 6th place, 0.15703 private LB 6th place).</strong> 🔥 </p> <p>After PCA all the data looks tabular, so we had no special input branch for image data like a ResNet. The image data was processed through the autoencoder and then treated as tabular.</p>
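<p>A rough sketch of the kernel-PCA compression step mentioned above (illustrative only; the autoencoder bottleneck features, the component count per representation and the kernel choices are assumptions, not the team's exact code):</p> <pre><code>from sklearn.decomposition import PCA, KernelPCA

# encoded: (n_subjects, n_features) matrix of 3D CNN autoencoder activations,
# flattened per subject (hypothetical variable name)
def compress(encoded, n_components=2048, kernel="rbf"):
    if kernel == "linear":
        reducer = PCA(n_components=n_components)
    else:
        reducer = KernelPCA(n_components=n_components, kernel=kernel)
    return reducer.fit_transform(encoded)

# Different kernels ("linear", "rbf", "poly", ...) yield differently flavored
# tabular features that can then be stacked with the loading/fnc features.
</code></pre>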
TReNDS Neuroimaging
7th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I have also tried autoencoder-type models. Their encoded input didn't work for me.</p> <p>Why do you think OOF could leak, btw?</p> <p>Indeed, you are using information which will be in your validation set at the next stage of stacking. So the stage 2 model can use this information for reverse engineering. Therefore I think it is a common stacking problem, and probably it is more about the complexity of the stage 2 model rather than the individual performance of the stage 1 models.</p> <p>I thought it was the whole point of this competition: building a model which is robust across all the different sites.</p> <p>out of fold</p> <p>I have a 3D conv net originated from @shentao 's great baseline. I want to thank him for that. I made some small architectural changes on resnet10 and started to boost from median predictions. It converged much faster that way, so I could make the augmentation less severe. It achieved 0.1695 CV without using any other features.</p> <p>I used its OOF in my models. Then I have 4 almost equally performing models on top of these OOF predictions and loading + fnc features:</p> <ul> <li>LGB: Before feeding the features, I oversample the loading features 50x and then use a very low feature sampling fraction like 0.015.</li> <li>MLP: Different dropout rates for 3D CNN OOF, fnc and loading features.</li> <li>SVM: Same as <a href="https://www.kaggle.com/aerdem4/rapids-svm-on-trends-neuroimaging">https://www.kaggle.com/aerdem4/rapids-svm-on-trends-neuroimaging</a></li> <li>Ridge: An overfitting Ridge contributed the most, compared to a robust BayesianRidge or HuberRegressor.</li> </ul> <p>I closed the CV-LB gap by using Adversarial Validation predictions as sample weights. At the end, I had 0.1562 CV and 0.1566 LB. I was expecting a shake-up, thinking that there would be even more site2 in the private LB (Kaggle usually does it that way), but I guess it wasn't the case and this trick lost its importance.</p> <pre><code>def forward(self, x):
    x = self.conv1(x)
    x = self.bn1(x)
    x = self.relu(x)
    x = self.maxpool(x)
    x = self.layer1(x)
    x = self.layer2(x)
    x = self.layer3(x)
    x = self.layer4(x)

    x = F.adaptive_avg_pool3d(x, (1, 1, 1))
    emb_3d = x.view((-1, self.fea_dim))
    out = self.fc(emb_3d)
    medians = [50.42774666, 51.84730591, 60.05253491, 47.81120495, 52.57203216]
    out = out + Variable(torch.Tensor(medians)).cuda()
    return out
</code></pre>
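<p>A minimal sketch of the adversarial-validation sample-weighting idea mentioned above (illustrative; the feature matrices and model settings are assumptions, not the author's exact setup): train a classifier to separate train from test rows, then weight training samples by how "test-like" they look.</p> <pre><code>import numpy as np
import lightgbm as lgb
from sklearn.model_selection import cross_val_predict

# X_train, X_test: tabular features (loading + fnc), assumed to already exist
X_adv = np.vstack([X_train, X_test])
y_adv = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_test))])

clf = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05)
# out-of-fold probability that each training row "looks like" a test row
p_test_like = cross_val_predict(clf, X_adv, y_adv, cv=5,
                                method="predict_proba")[:len(X_train), 1]

sample_weight = p_test_like / p_test_like.mean()
# ...then pass sample_weight to the downstream regressors, e.g.
# lgb.LGBMRegressor().fit(X_train, y, sample_weight=sample_weight)
</code></pre>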
TReNDS Neuroimaging
8th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks! My teammate ( <a href="/berserker408">@berserker408</a> ) did it. Your post-processing is always nice :)</p> <p>Thanks!</p> <ul> <li>Did you pretrain the 3D-CNN on this competition's data and use it as a feature extractor? Yes, we used a 3D-CNN trained with this competition's 3D map data.</li> <li>Did you develop a model for each target but using multi-task learning? Yes, I also tried a single-target criterion, but multi-task learning was better in my experiments. I wonder why it didn't work for you...</li> </ul> <p>We ( <a href="/yasufuminakama">@yasufuminakama</a>, <a href="/berserker408">@berserker408</a> ) would like to thank Kaggle &amp; the host for the interesting competition and all the participants for giving us a lot of ideas. And congrats to the winners!</p> <h1>Solution overview</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1695531%2Fb512c7d9fbdd8d728b42b10cd34ba8f8%2FTReNDS_8th_place_solution.png?generation=1593492256129741&amp;alt=media" alt=""></p> <h1>3D map features</h1> <p>We extracted 512 3D map features using 3D-Resnet10. As I posted in the External Data Thread, we used the one shared by <a href="/shentao">@shentao</a> ( see <a href="https://www.kaggle.com/c/trends-assessment-prediction/discussion/147797">https://www.kaggle.com/c/trends-assessment-prediction/discussion/147797</a> ). For more details, you can check his discussion. Thanks a lot to him. As written in <a href="https://www.kaggle.com/c/trends-assessment-prediction/data">https://www.kaggle.com/c/trends-assessment-prediction/data</a>:</p> <blockquote> <p>The scores (see train_scores.csv) are not the original age and raw assessment values. They have been transformed and de-identified to help protect subject identity and minimize the risk of unethical usage of the data. Nonetheless, they are directly derived from the original assessment values and, thus, associations with the provided features is equally likely. Before transformation, the age in the training set is rounded to nearest year for privacy reasons. However, age is not rounded to year (higher precision) in the test set. Thus, heavily overfitting to the training set age will very likely have a negative impact on your submissions.</p> </blockquote> <p>We needed to avoid overfitting to the training data, and the 3D map features helped a lot with that.</p> <h1>Models</h1> <p>We filled missing values (fillna) using SVM+Ridge and used that data.</p> <h2>NN</h2> <p>NN base structure is below.</p>
<pre><code>class CFG:
    hidden_size1=64
    hidden_size2=128
    hidden_size3=6
    dropout=0.5


class TabularNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.mlp1 = nn.Sequential(
            nn.Linear(26, CFG.hidden_size1),
            nn.BatchNorm1d(CFG.hidden_size1),
            nn.Dropout(CFG.dropout),
            nn.PReLU(),
            nn.Linear(CFG.hidden_size1, CFG.hidden_size1//2),
            nn.BatchNorm1d(CFG.hidden_size1//2),
            nn.Dropout(CFG.dropout),
        )
        self.mlp2 = nn.Sequential(
            nn.Linear(1378, CFG.hidden_size2),
            nn.BatchNorm1d(CFG.hidden_size2),
            nn.Dropout(CFG.dropout),
            nn.PReLU(),
            nn.Linear(CFG.hidden_size2, CFG.hidden_size2//2),
            nn.BatchNorm1d(CFG.hidden_size2//2),
            nn.Dropout(CFG.dropout),
        )
        self.mlp3 = nn.Sequential(
            nn.Linear(512, CFG.hidden_size3),
            nn.BatchNorm1d(CFG.hidden_size3),
            nn.Dropout(CFG.dropout),
            nn.PReLU(),
            nn.Linear(CFG.hidden_size3, CFG.hidden_size3//2+1),
            nn.BatchNorm1d(CFG.hidden_size3//2+1),
            nn.Dropout(CFG.dropout),
        )
        self.head = nn.Sequential(
            nn.Linear(CFG.hidden_size1//2+CFG.hidden_size2//2+CFG.hidden_size3//2+1, CFG.hidden_size1//2+CFG.hidden_size2//2+CFG.hidden_size3//2+1),
            nn.BatchNorm1d(CFG.hidden_size1//2+CFG.hidden_size2//2+CFG.hidden_size3//2+1),
            nn.Dropout(0.1),
            nn.Linear(CFG.hidden_size1//2+CFG.hidden_size2//2+CFG.hidden_size3//2+1, 5),
        )

    def forward(self, cont_x):
        loading_x = cont_x[:, :26]
        fnc_x = cont_x[:, 26:1404]
        img_x = cont_x[:, 1404:]
        loading_x = self.mlp1(loading_x)
        fnc_x = self.mlp2(fnc_x)
        img_x = self.mlp3(img_x)
        x = torch.cat((loading_x, fnc_x, img_x), 1)
        x = self.head(x)
        return x
</code></pre> <p>The criterion was optimized per target using optuna like below. This improved results.</p> <pre><code>def weighted_nae(inp, targ, W=torch.FloatTensor([0.3, 0.175, 0.175, 0.175, 0.175])):
    return torch.mean(torch.matmul(torch.abs(inp - targ), W / torch.mean(targ, axis=0)))

W_DICT = {'age': torch.FloatTensor([1.0, 0.28405462973430023, 0.15685327697616772, 0.06391563565928776, 0.014662813285223892]),
          'domain1_var1': torch.FloatTensor([0.6638609260536665, 1.0, 0.14338389219696276, 0.1384448250515486, 0.1682642506073942]),
          'domain1_var2': torch.FloatTensor([0.5528250413216551, 0.2767068350188637, 1.0, 0.7764813978084125, 0.1654236853816111]),
          'domain2_var1': torch.FloatTensor([0.7856514193916039, 0.15941727005223633, 0.43970552609312813, 1.0, 0.16377864561256422]),
          'domain2_var2': torch.FloatTensor([0.502415239502587, 0.2892349530842958, 0.2382098399995868, 0.15870234342595627, 1.0])}
</code></pre> <p>The averaged NN scored CV: 0.15786, LB: 0.15781 without post-processing. We prepared more averaged NN predictions and used them for the final blend.</p> <h2>other models</h2> <p>We referred to four public kernels and retrained them with our 3D map features. <a href="https://www.kaggle.com/aerdem4/rapids-svm-on-trends-neuroimaging">https://www.kaggle.com/aerdem4/rapids-svm-on-trends-neuroimaging</a> ( <a href="/aerdem4">@aerdem4</a> ) <a href="https://www.kaggle.com/tunguz/rapids-ensemble-for-trends-neuroimaging">https://www.kaggle.com/tunguz/rapids-ensemble-for-trends-neuroimaging</a> ( <a href="/tunguz">@tunguz</a> ) <a href="https://www.kaggle.com/david1013/trends-multi-layer-model">https://www.kaggle.com/david1013/trends-multi-layer-model</a> ( <a href="/david1013">@david1013</a> ) <a href="https://www.kaggle.com/andypenrose/baggingregressor-rapids-ensemble">https://www.kaggle.com/andypenrose/baggingregressor-rapids-ensemble</a> ( <a href="/andypenrose">@andypenrose</a> ) Thanks to these four kernels' authors.</p> <h2>site2 detect model</h2> <p>This part is similar to the public kernel except for adding the 3D map features.
The valid AUC can reach 0.98. Thanks to Bojan. <a href="https://www.kaggle.com/tunguz/adversarial-trends-site">https://www.kaggle.com/tunguz/adversarial-trends-site</a> ( <a href="/tunguz">@tunguz</a> )</p> <h1>post-processing</h1> <p>This trick is quite common in regression competitions. And it improved our score by ~0.0003. <code> sub.loc[(sub.target=='age')&amp;(sub.site==1),'Predicted'] *= 1.01 sub.loc[(sub.target=='age')&amp;(sub.site==2),'Predicted'] *= 1.02 sub.loc[(sub.target=='domain1_var1'),'Predicted'] *= 1.01 sub.loc[(sub.target=='domain2_var1')&amp;(sub.site==2),'Predicted'] *= 1.01 </code></p> <h1>What did not work</h1> <ul> <li>pseudo labeling</li> </ul> <h1>Final result</h1> <p>Public score: 0.15683 (Public 10th) , Private score: 0.15729 (Private 8th)</p> <p>Now I tried to get weights using oof with nelder-mead, weights are {'age': 1.00039063, 'domain1_var1': 1.00390625, 'domain1_var2': 1.00771484, 'domain2_var1': 1.00693359, 'domain2_var2': 1.00927734}. CV goes up 0.0001, Private LB goes up 0.0001 as well. Your weights are a little higher than our nelder-mead result, maybe it is the difference between 0.0001 &amp; 0.0003 improvement. (And note that the train scores are not the original age and raw assessment values) And only with oof nelder-mead approach, we can't search weight for site2 correction(training data do not contain site2 data), to search weight for site2 correction, LB probe worked.</p> <p>Thanks! Yes, very similar. Your team's Autoencoder is interesting approach. Congrats to your team's 6th place!</p> <p>LB probe. When test and train data do not have the same distribution, LB probe will work.</p>
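<p>The post-processing multipliers above were found mostly by LB probing, and the author also mentions searching them on OOF predictions with Nelder-Mead. A rough sketch of that OOF search (illustrative only; the variable names are assumptions):</p> <pre><code>import numpy as np
from scipy.optimize import minimize

# oof_pred, y_true: dicts of per-target OOF predictions and true values (assumed)
targets = ['age', 'domain1_var1', 'domain1_var2', 'domain2_var1', 'domain2_var2']
metric_w = {'age': 0.3, 'domain1_var1': 0.175, 'domain1_var2': 0.175,
            'domain2_var1': 0.175, 'domain2_var2': 0.175}

def nae(pred, true):
    return np.sum(np.abs(true - pred)) / np.sum(true)

def score(multipliers):
    # weighted normalized absolute error after scaling each target's predictions
    return sum(metric_w[t] * nae(oof_pred[t] * m, y_true[t])
               for t, m in zip(targets, multipliers))

res = minimize(score, x0=np.ones(len(targets)), method='Nelder-Mead')
print(dict(zip(targets, res.x)))  # per-target correction factors
</code></pre>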
TReNDS Neuroimaging
9th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I joined the competition pretty late, so I started with a brute force attack. </p> <p>My solution is basically a big 3-level stacked ensemble of tabular models. My final prediction uses 24 baseline models:</p> <ul> <li>6x Ridge</li> <li>1x Lasso</li> <li>3x RAPIDS SVR</li> <li>2x XGboost</li> <li>2x KNN</li> <li>9x MLP</li> <li>1x 2D CNN</li> </ul> <p>Each model is trained on a different dataset, including the original features and additional engineered features extracted from the 3D brain scans.</p> <p>For the second-level stacked predictions I used 4 models:</p> <ul> <li>BaggingRegressor( base_estimator=Ridge )</li> <li>KNeighborsRegressor</li> <li>NuSVR</li> <li>RandomForestRegressor</li> </ul> <p>My final prediction is a weighted average of those 4 models. My final local CV is 0.1559 and LB: 0.15724 (Public) / 0.15742 (Private). I believe the gap between local and LB scores is due to some site2 differences.</p> <p>Each fMRI scan has 4 dimensions (53, 52, 63, 53), and that makes things hard when it comes to extracting features. Basically the first axis represents 53 different scans (not time related), so the dimension of each brain scan is only (52, 63, 53).</p> <p>To extract features from the fMRI data I used many approaches, but the ones that performed best were metrics comparing images from two different scans. So most of my features are 2-way interactions of all 53 scans for each patient. That way I created thousands of features to describe each fMRI. </p> <p>To make the MLP work better I had to use BatchNorm in all layers and Dropout of around 0.7~0.8 to generalize well. That was a key point in my models, since most of the features have a very small signal or are just random noise.</p> <p>My 2D CNN was trained using projections of each 3D image onto the 3 axes: x, y, z. So each fMRI scan is reduced to a 2D representation. That model scored 0.17x locally.</p> <ul> <li>Autoencoder embeddings didn't work for me.</li> <li>3D CNNs scored very badly for me, and I regret not using them in my stacking since top solutions used them.</li> <li>Site2 post-processing helped just a little bit on the public LB.</li> <li>Feature selection overfit the train set, so I didn't use it.</li> <li>I'm surprised that simple KFold CV worked pretty well for stacking on this small dataset (5877 rows).</li> <li>RAPIDS t-SNE features didn't work.</li> </ul>
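<p>A minimal sketch of the 2-way scan-interaction features described above (illustrative only; the write-up does not specify which comparison metrics were used beyond "metrics comparing images from two different scans", so a simple correlation is assumed here):</p> <pre><code>import numpy as np
from itertools import combinations

def pairwise_scan_features(subject_map):
    """subject_map: (53, X, Y, Z) array - the 53 spatial maps of one subject.
    Returns one feature per pair of maps (53*52/2 = 1378 values); other metrics
    (e.g. absolute differences, overlaps) could be added the same way."""
    flat = subject_map.reshape(subject_map.shape[0], -1)
    feats = []
    for i, j in combinations(range(flat.shape[0]), 2):
        feats.append(np.corrcoef(flat[i], flat[j])[0, 1])
    return np.array(feats)
</code></pre>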
TReNDS Neuroimaging
No Conv Net (Private: 4th, Public: 2nd)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We got several different base models. Some scale this feature down if it is loaded in Combination with fnc or IC. If the matrices are the only input them we do not scale.</p> <p>apply atlas: </p> <p><code> mat = np.array(h5py.File(f'{path}{participant_id}.mat', mode='r').get('SM_feature')) mat = schaefer_masker.fit_transform(nb.Nifti1Image(mat.transpose([3,2,1,0]), affine=schaefer_masker.mask_img.affine)) </code></p> <p>Loading the atlas: </p> <p>``` from nilearn import plotting, image, input_data, datasets</p> <h2>nrois = choose a schäfer atlas: 200, 300, 400</h2> <h2>brain_mask = nb.load(path + 'fMRI_mask.nii') ## the mask for the glass brain provided in the train data</h2> <p>schaefer_data = datasets.fetch_atlas_schaefer_2018(n_rois=nrois, resolution_mm=2, resume=True, verbose=1) plotting.plot_roi(schaefer_data.maps, title='Schaefer 2018 ' + str(nrois) + " rois") schaefer_masker = input_data.NiftiLabelsMasker(schaefer_data.maps, mask_img=brain_mask, smoothing_fwhm=None, standardize=False, detrend=False) ```</p> <p>Thank you :D</p> <h2>Some thoughts</h2> <p>We will try to make this dicussion post as comprehensive as possible and answer every question in the comment section. We had to shorten some parts as this would get way to long. </p> <p>It is very interesting that 3d CNNs seem to perform better on Private LB compared to our approach. We thought they would overfit more. It seems like Private LB distribution is closer to Train distribution than to Public LB distribution. </p> <h2>Credits</h2> <p>First of all thank you to the organizers for this great competition and hughe shoutouts to <a href="/churkinnikita">@churkinnikita</a> and <a href="/simakov">@simakov</a> for their well deserved win. It was a very memorable moment for us when they overtook our Public 1st place. </p> <p>Thanks to <a href="/aerdem4">@aerdem4</a> for the amazing kernels and Kudos to everyone who participated in this competition.</p> <h2>About us</h2> <p>Marius and I are lab buddies. We wrote our medical doctor thesis together in the neurophysiology department. That's why we have worked with neuroimaging data before (Fortunately this is not Zillow)</p> <h2>Approach</h2> <p>We wanted to create a comprehensive method for predicting fmri images to be able to directly evaluate, which parts contributed the most to the models prediction and not finetune for Site1 and Site2 but build 1 model for both of them.</p> <p>In the brain it's all about connectivity. Normally fmri images are 4 dimensional with 1 axis being time. But here these were ICA components - different brain networks. Marius had the idea to consider this axis like a time axis and use the same tools to derive features from time used in neuroimaging but for the ICA components. The keyword is Parcellations. Parcellations are an atlas of the brain, they subclassify the brain in smaller networks using functionality and/or anatomical features. Is is then possible to calculate the mean of fmri data points for every component in the atlas. Thereby we reduce the shape of 53*63*52*53 to Number of Parcellation components (a few hundred depending on the atlas) x Number of ICA components. 
In the following we will refer to this array as PICA. This reduces inter subject and Site variation.</p> <p>We used several atlases to diversify our model to better generalise (we used up to 10 different):</p> <h3>Basc:</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2685252%2F6c4fcf78897c8f7d99bbd8780ff0e85f%2Fbasc.PNG?generation=1593499876369065&amp;alt=media" alt=""></p> <h3>Schaefer:</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2685252%2F877ec3c26039269f93033eca98b0139c%2Fschfer300.PNG?generation=1593500180123531&amp;alt=media" alt=""></p> <p>We even calculated our own Parcellations using unsupervised learning approaches such as KMeans and Ward. We will link you some papers here as describing this would be beyond the scope of this post: </p> <p>Bertrand Thirion, Gael Varoquaux, Elvis Dohmatob, Jean-Baptiste Poline. Which fMRI clustering gives good brain parcellations ? Frontiers in Neuroscience, 2014.</p> <p>Vincent Michel, Alexandre Gramfort, Gael Varoquaux, Evelyn Eger, Christine Keribin, Bertrand Thirion. A supervised clustering approach for fMRI-based inference of brain states.. Pattern Recognition, Elsevier, 2011.</p> <h3>Kmeans</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2685252%2F98a2bbf5e44f65de6a5a2de89cbbea0d%2Fjbmkkmeans.PNG?generation=1593499927440943&amp;alt=media" alt=""></p> <h3>Ward</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2685252%2Fff709e43fa29f7a401ffb83df3c6936d%2Fjbmkward.PNG?generation=1593499951002666&amp;alt=media" alt=""></p> <p>We have seen this idea of atlases emerge in the discussion. But participants did not know what to further do with them. We had several approaches on this. We will only list the most contributing here, as this would be beyond the scope of this post: </p> <h3>Calculate connectivity of different components</h3> <p>Normally connectivity is derived by looking at the interaction of 2 components derived from parcellation over time. Here we used the ICA components as our “time” feature. We looked at the interaction of 2 components over ICA components using Matrices. </p> <p>We will visualise how this looks like using this basic yeo Parcellations with only 17 components. We used matrices up to 444 components as our RAM could not handle more.</p> <h3>Yeo</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2685252%2Ff388f5f2ec9fed7d49ee10a484be1964%2Fyeoparcellation.PNG?generation=1593500079292710&amp;alt=media" alt=""></p> <h3>1. Correlation Matrix</h3> <p>The x and y axis are the different yeo components and the matrix is the correlation of these 2 components over the 53 ICA Networks.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2685252%2F6400fd5cc5ea394eccb2ee5ad1a67cec%2Fcorrelation.JPG?generation=1593500547921516&amp;alt=media" alt=""></p> <h3>2. Covariance Matrix</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2685252%2F86c806d53a82c1773842b3fd7501c874%2Fcovariance.JPG?generation=1593500586579723&amp;alt=media" alt=""></p> <p>From now on only links as this gets too long :D</p> <h3>3. Partial Correlation Matrix</h3> <p><a href="https://en.wikipedia.org/wiki/Partial_correlation">https://en.wikipedia.org/wiki/Partial_correlation</a></p> <h3>4. Tangent Matrix</h3> <p>G. Varoquaux et al. 
“Detection of brain functional-connectivity difference in post-stroke patients using group-level covariance modeling, MICCAI 2010.</p> <h3>5. Precision Matrix</h3> <p><a href="https://en.wikipedia.org/wiki/Precision_(statistics)">https://en.wikipedia.org/wiki/Precision_(statistics)</a></p> <p>… you get the idea</p> <h3>What to do with these Matrices</h3> <p>We then used these matrices as input for our model (SVR, Ridge, Lasso, Elastic Net) and stacked their predictions using SVR and Ridge with a simple blend in the end. </p> <p>We also fed the raw parcellation data to some networks.</p> <p>And we used the public notebooks for IC and FNC features</p> <h2>Results</h2> <p>with Lofo: CV: age: 0.13378; d1v1: 0.14976; d1v2: 0.15016; d2v1: 0.1795; d2v2 0.175050 overall: 0.154682; Private LB: 0.15714; Public LB: 0.15615</p> <p>without Lofo: CV: age: 0.13477; d1v1: 0.15027; d1v2: 0.15077; d2v1: 0.18045; d2v2 0.17545 overall: 0.15539; Private LB: 0.15689; Public LB: 0.15626</p> <h2>Some more Tricks</h2> <ol> <li>Leave one Network Out</li> <li>Only use a single Network</li> <li>Calculate our Parcellations using Site 2 </li> <li>...</li> </ol> <h2>What we think contributed the most to not overfit to Site1 / Site2:</h2> <ol> <li>using Brain Atlases derived from Site1 + Site2</li> <li>using Brain Atlases derived from completely different experiments (e.g. Basc)</li> <li>The shear amount of features used in our models</li> <li>The amount of base models (around 70)</li> <li>The amount of parcellations used</li> </ol> <h2>What did not work</h2> <ol> <li>Calculating our own Parcellations solely on Site 1 -&gt; overfit to Site 1. </li> <li>We tried a lot with 3d CNNs but were not able to improve CV using them. - We were especially interested in remapping the Parcellated brain back to 3d. But this did not give any improvement. We are excited to see how other dealt with this. </li> <li>Autoencoders in many combinations</li> <li>....</li> </ol> <h2>Discussion:</h2> <p>We have not seen this approach anywhere in the literature. We now can perfectly calculate which components were contributing to the models prediction. Which brain regions are important and what interaction between different brain regions are relevant for ageing and the other assessment values. We hope that the organizers will allow us to further develop this idea and discuss the results with our medical supervisors. </p> <h1>Cheers!</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2685252%2Fba6aa2e139b3492443b0e175f30fc755%2Fpotato.JPG?generation=1593500378556395&amp;alt=media" alt=""></p> <p>PS. One of the most memorable moments of this competition for us will be the Japanese dominance in the LB</p> <p>Hopefully there will be a "real" fmri image one day with Time features :D</p> <p>You can do with nilearn.</p>
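<p>A rough sketch of how the connectivity matrices described above could be computed with nilearn, treating the 53 ICA components as the "time" axis (illustrative only; the exact preprocessing and vectorization the authors used may differ, and the <code>pica</code> variable is assumed):</p> <pre><code>import numpy as np
from nilearn.connectome import ConnectivityMeasure

# pica: list of arrays, one per subject, each of shape (53, n_parcels),
# i.e. the PICA array of ICA components x parcellation components (assumed to exist)
features = {}
for kind in ['correlation', 'covariance', 'partial correlation', 'tangent', 'precision']:
    conn = ConnectivityMeasure(kind=kind, vectorize=True, discard_diagonal=True)
    # one row of n_parcels*(n_parcels-1)/2 values per subject
    features[kind] = conn.fit_transform(pica)

X = np.hstack(list(features.values()))  # input for SVR / Ridge / Lasso / Elastic Net
</code></pre>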
TReNDS Neuroimaging
Second silver in one week
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Due to the medal distribution delay in Jigsaw, my 2 silvers came in 1 week.</p> <p>Many thanks to everyone and congrats to the winners! Can't wait to see your solutions 😁 </p>
TReNDS Neuroimaging
[Public: 0.15881, Private:0.15864] GNN Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I have the same problem on the left GNN.</p> <p>I replicate the same number of <code>loadings</code> as <code>ICN</code> (described as "copy" in the above figure), hence there are 53 copies. These copies are fed into <strong><em>non-shared</em></strong> <em>Linear</em> layers. The intent is that each copied <code>loadings</code> feature is transformed into <strong>different</strong> features suited to the <code>FNC</code> matrix.</p> <p>In short, I mapped <code>loadings</code> to the nodes of the <code>FNC</code> graph. <br> I missed a medal unfortunately, but it's all my fault.</p> <p>I believe the GNN techniques I learned will help me in the future. Thanks.</p> <p>Congrats to the winners and to all the participants who got medals! And thanks to the host and the Kaggle Team for holding this tough but interesting competition.</p> <p>Maybe, as stated in <a href="https://www.kaggle.com/c/trends-assessment-prediction/discussion/162739">this post</a>, my final submission was not in time, so I missed a silver medal 😅 </p> <p>But I think my solution may be useful for someone, so I would like to share a summary here.</p> <h2>Overview</h2> <ul> <li>Graph Neural Network approach (single model, 5-fold averaging)</li> <li>develop a model for each target (<code>age</code>, <code>domain1_var1</code>, <code>domain1_var2</code>, <code>domain2_var1</code>, <code>domain2_var2</code>)</li> <li><a href="https://www.kaggle.com/c/trends-assessment-prediction/discussion/147128">convert fnc data into symmetric matrices</a> (code is <a href="https://www.kaggle.com/ttahara/technique-for-converting-fnc-to-matrix">here</a>) and use them as adjacency matrices' weights</li> <li>extract (53, 512)-dim features with a pretrained resnet10-3D from <a href="https://github.com/Tencent/MedicalNet">MedicalNet</a> (just using the feature extractor, <strong>not training on competition data</strong>)</li> </ul> <p>This approach works well for <code>age</code> but not for the others, so I show the architecture especially for <code>age</code>.</p> <h2>Model Architecture (for age)</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F473234%2Ff34744551fb5dcc574d3e14f037120d3%2Fsolution_pipline.png?generation=1593505309611553&amp;alt=media" alt="architecture"></p> <h2>Important things (I think)</h2> <h3><strong>Independent</strong> edge parameters</h3> <p>In general GNNs, edge parameters are shared in each layer. But I think relations between components cannot be represented by shared parameters.</p> <p>To reduce the number of parameters, I implemented this in a node-wise manner in accordance with the following paper:</p> <p><a href="https://dl.acm.org/doi/10.1145/3308558.3313444">Dressing as a Whole: Outfit Compatibility Learning Based on Node-wise Graph Neural Networks</a></p> <h3>Regularization by dropping <strong>edges</strong></h3> <p>This is a simple but powerful method for regularizing GNNs.
I think my model couldn't have achieved this score without DropEdge.</p> <p>If you want to know details, see this:</p> <p><a href="https://iclr.cc/virtual_2020/poster_Hkx1qkrKPr.html">DropEdge: Towards Deep Graph Convolutional Networks on Node Classification</a></p> <p><br> I learned many things about GNNs through this competition.</p> <p>Thank you the host and Kaggle team again for giving me the opportunity😃 </p>
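<p>A minimal sketch of the DropEdge idea referenced above (randomly removing edges from the weighted adjacency matrix during training; illustrative only, not the author's implementation):</p> <pre><code>import torch

def drop_edge(adj, drop_rate=0.2, training=True):
    """adj: (batch, n_nodes, n_nodes) weighted adjacency built from the FNC matrix.
    Randomly zeroes a fraction of edges during training, keeping the matrix symmetric."""
    if not training or drop_rate == 0.0:
        return adj
    keep = torch.rand_like(adj).ge(drop_rate).float()
    keep = torch.triu(keep, diagonal=1)                # sample the upper triangle once
    keep = keep + keep.transpose(-1, -2)               # mirror to keep symmetry
    keep = keep + torch.eye(adj.size(-1), device=adj.device)  # always keep self-loops
    return adj * keep
</code></pre>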
TalkingData AdTracking Fraud Detection Challenge
11th place features
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TalkingData AdTracking Fraud Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>sorry, day 8 is included in my training data.</p> <p>I used the validation data in retraining.</p> <p>yes, I refit.</p> <p>I mean the validation data is used like fold-out data.</p> <p>Congrats winners!</p> <p>I would like to share my features briefly before I forget it. </p> <h3>Features</h3> <p>I used BigQuery on the almost all feature engineering parts. <br> <a href="https://gist.github.com/tkm2261/1b3c3c37753e55ed2914577c0f96d222">https://gist.github.com/tkm2261/1b3c3c37753e55ed2914577c0f96d222</a></p> <p>It takes only about 20 minutes. very fast!</p> <p>There may not be surprising features if you have seen the shared kernels. I tried to find good features by brute force way.</p> <h3>Machine</h3> <p>CPU: 16 cores, MEM: 100GB on GCP</p> <h3>Data</h3> <ul> <li>training data: day 7 and day 8 (13,188,695,398 rows)</li> <li>validation data: day 9 (53,016,937 rows)</li> <li>test data: day 10 (18,790,469 rows)</li> </ul> <p>I used validation data to determin the boosting round. Then, I used train+valid data for training.</p> <h3>Training</h3> <p>Simply, I used LightGBM and ensembled with different seeds.</p> <p>The single best model is 0.9823 on the public LB.</p> <p>The parameter is here: </p> <pre><code>{'colsample_bytree': 0.6, 'learning_rate': 0.1, 'max_bin': 1023, 'max_depth': -1, 'metric': ['binary_logloss', 'auc'], 'min_child_weight': 30, 'min_split_gain': 0.0001, 'num_leaves': 127, 'objective': 'binary', 'reg_alpha': 0, 'scale_pos_weight': 1, 'seed': 1142, 'subsample': 0.9, 'subsample_freq': 1, 'verbose': -1} </code></pre> <p>source code is here: <a href="https://github.com/tkm2261/kaggle_talkingdata/blob/master/protos/train_lgb.py">https://github.com/tkm2261/kaggle_talkingdata/blob/master/protos/train_lgb.py</a></p> <p>I tried to earn my tuition for MS CS program starting next fall. But, there was no free lunch!</p> <p>I handle the imbalance by min_sample_weight, num_leaves, and other parameters. Since the eval metric is AUC, the scale is not important.</p> <p>the latter is correct. plz see my sql gist for more detail.</p> <p>maybe right. if you want to learn more, please see my code: <a href="https://github.com/tkm2261/kaggle_talkingdata/blob/master/protos/train_lgb.py">https://github.com/tkm2261/kaggle_talkingdata/blob/master/protos/train_lgb.py</a></p> <ol> <li>I left top about 50 features ordered by the feature importance of lightgbm when I made new features. I sometimes dropped features in python code.</li> <li>For parameter tuning, I used day 8 data as train data and day 9 data as validation data. Since training only day 8 data is not relatively heavy, I could try about 20 parameters sets. But, I did not devote much effort to the parameter tuning.</li> </ol> <p>Thanks for questioning! <br> 1. You are right. I copied these numbers from the log when I deleted duplicated rows by mistake. And, deleting duplicated rows did not improve my score. <br> 2. At first, I used only day 8 data as train data. Then, I added day 7 data that I called "prev data". So, in my query, prev and train are merged.</p>
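<p>The training workflow described above (use the validation day like hold-out data to determine the boosting round, then refit on train+validation) can be sketched with the LightGBM Python API like this (illustrative; the feature variables and parameters are placeholders, not the author's exact code):</p> <pre><code>import lightgbm as lgb

dtrain = lgb.Dataset(X_day7_8, y_day7_8)   # days 7-8 (assumed to exist)
dvalid = lgb.Dataset(X_day9, y_day9)       # day 9, used like hold-out data

model = lgb.train(params, dtrain, num_boost_round=5000,
                  valid_sets=[dvalid],
                  callbacks=[lgb.early_stopping(stopping_rounds=50)])
best_iter = model.best_iteration

# refit on train + validation with the number of rounds found above
dfull = lgb.Dataset(X_day7_8_9, y_day7_8_9)
final_model = lgb.train(params, dfull, num_boost_round=best_iter)
</code></pre>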
TalkingData AdTracking Fraud Detection Challenge
18th Place Brief Solution Overview
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TalkingData AdTracking Fraud Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>@kazanova has, IMHO correctly, pointed out that the 1st and 4th place finishes are perhaps the best ones in any competition. I'd argue that finishing one place out of gold might be the hardest one. A slightly different choice of final blend, one or two more tricks that we didn't use, could have made all the difference. It calls for a lot of soul-searching, and makes you engage in a lot of counterfactual thinking. Nonetheless, my team and I are overall pretty happy with what we had accomplished. I want to thank my teammates - @psilogram, @andrenaef, and @dicksonchin93. They are all incredibly talented, smart, fun, and kind people. I'm sure you'll be seeing a lot more of them in the upcoming competitions. </p> <p>Here is a brief overview of our solution(s):</p> <p>Silogram's model was a single LGB bagged 10 times with different training data samples. First, data was split into four 24-hour datasets (day7 - say10) matching the hours in test_supplement. All features were then generated within each day without considering data from other days. Features consisted of the typical frequency stats by group and seconds to previous/next clicks that have been mentioned by others, as well as target stats from the previous day. The final model had 24 features. For validation, the model was trained on day 8 and validated on day 9 (looking only at hours that matched the test set hours). To save time, training was done on a sample of 2% negative observations plus all of the positive observations. For the submission, the model was trained on a sample of data from day8 and day9, with the number of iterations increased 1.9 times over the best validation iteration count. The LB score was 0.9814 and the PB score was 0.9820.</p> <p>André's model was a single LGBM based on ~90 features trained on 5% of the data, plus all positives. Beside common features, he was also using log odds from a statistics timeframe, and applying SVD to those to fill in the blanks.</p> <p>Chin (Ee Kin) model:</p> <p>Build primarily models on LGBM and XGB. Fixed Validation split of day 9 4am to 3pm mimicking time period of test set, trained both types of models with validation and after that used best iteration from cv to train on full data set, got the max increase of 0.0006 for LGBM with this method and XGB 0.0022. For my LGBM full training dataset is used without sampling, for XGB the focus is to create variation to our models for ensembling so resorted to sampling methods, tried multiple configurations but for my best XGB model is with a ratio of 1:5 positive to negative examples, with oversampling positive samples without replacements at 10%. Utilised @Nano Mathias Bayesian optimization script to obtain lgbm and xgb parameters for training with slight manual tweaking based on intuition. Besides common features, was using two step next clicks and previous clicks, some statistical info based on time and normalized 0-1 for some features which only contributed a little the final model but its used, around 40+ features. Didn’t use ip as a feature for training directly based on several popular discussions made on it. 
Things that didn’t work for me are target encoding and pseudo-labelled target encoding; I burnt two weeks trying different versions...</p> <p>Bojan's model(s):</p> <p>I also used more or less the same features as everyone else. I used three different validation schemes: 4am to 3pm on the 9th day, all of the 9th day, and random splits. I built several LGBM and XGB models, and even used one fm_ftrl model based on @anttip's now famous kernel. Training the model on different subsets of training data gave a nice boost for blending, as did using different boosting algos. I wish I had had the time to build catboost and RGF models. Early in the competition I was also considering building a solid NN model, but since most people were not able to blend it well with the rest of their submissions, it seemed that it would not help much, so I never went down that route. It also didn't help that I have access to a machine with lots of RAM and a machine with lots of GPU power, but not one that has both. :-) </p> <p>Our final submission was a blend of various models. We experimented with various weights for our submodels, but it seems that something close to equal weights would have been about as good as almost all the other options that we tried. </p>
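<p>Silogram's "2% of negatives plus all positives" sampling mentioned above can be sketched roughly as follows (illustrative; the column name and sampling details are assumptions):</p> <pre><code>import pandas as pd

def downsample(df, target_col="is_attributed", neg_frac=0.02, seed=0):
    """Keep all positive rows and a small random fraction of negative rows."""
    pos = df[df[target_col] == 1]
    neg = df[df[target_col] == 0].sample(frac=neg_frac, random_state=seed)
    return pd.concat([pos, neg]).sort_index()

# train_sample = downsample(train_df)  # then train LightGBM on this sample;
# since the metric is AUC, the shifted class balance is not itself a problem.
</code></pre>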
TalkingData AdTracking Fraud Detection Challenge
191st Place Solution Or more like lessons learnt....
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TalkingData AdTracking Fraud Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This was a crazy competition, with the last-minute release of the 0.9811 kernel. I stayed up the night to retune my models and see if I could beat that score. I edged it out only slightly, but I guess the model was good enough to jump up in the private LB. Unfortunately, my XGBoost was still running at the cut-off time, so I wasn't able to bring my blends up to date. Since the beginning I was using the full dataset, as I did have access to a better machine with lots of RAM and swap space. Almost until the last week, I was experimenting with the following models:</p> <ol> <li><p><a href="https://www.kaggle.com/scirpus">@Scirpus</a>' favourite FFM: I spent a lot of time studying the Avazu solution and trying to replicate it. In addition I also stumbled on the DeepFM paper (<a href="https://arxiv.org/abs/1703.04247">https://arxiv.org/abs/1703.04247</a>); they have the code on GitHub (<a href="https://github.com/ChenglongChen/tensorflow-DeepFM">https://github.com/ChenglongChen/tensorflow-DeepFM</a>). I wasn't able to tune the model beyond 0.96-0.97. I think a lot of my experimentation time was wasted chasing it, and I don't think I am any closer to building better intuition around it even now. I will continue to spend time studying it. </p></li> <li><p>Vowpal Wabbit: This was another time sink in getting it to work. This was the most frustrating part: absolutely none of my models went beyond 0.47-0.48. I tried doing Bayesian optimisation on it as well, but something was missing... maybe I did something wrong. </p></li> <li><p>Neural Network: I started off with the approach given by <a href="https://www.kaggle.com/alexanderkireev">@Alexander Kireev</a>. I also did a Bayesian optimisation to tune the layers and other NN parameters. These models got me into the 0.979 range, and adding more features from various kernels (next click, previous click and the subsequent combinations) started to bump me up the LB, but not by much. </p></li> <li><p>GBT models (XGBoost, CatBoost, LightGBM): These were relatively faster to train, so the last week was spent tuning them (Bayesian optimisation) and climbing the list. Don't use CatBoost on large datasets: either the Pool or the fit method blew up in memory and crashed without reaching any conclusion. But XGBoost and LightGBM remained strong till the end. I was able to do more submissions in the last week than I did before that. My best-scoring submission did not do better on the public LB, so I didn't pick it for scoring; had I picked it, I would have gotten 0.9821211 on the private LB. That's a big lesson learned... don't make hasty judgements. </p></li> </ol> <p>My final ensemble was models 3 and 4 plus the public wordbatch kernel. I will share my notebook in the upcoming days. </p> <p>I did explore <a href="https://www.featuretools.com/">feature tools</a> initially, but then relied more on the starting features in various kernels and then built more on top of them.
</p> <p>Special Thanks to <a href="https://www.kaggle.com/yuliagm">@yulia</a> <a href="https://www.kaggle.com/nanomathias">@Nanomathias</a> <a href="https://www.kaggle.com/asraful70">@Md Asraful Kabir</a> @Wenjie Bai <a href="https://www.kaggle.com/ogrellier">@olivier</a> and CPMP</p> <p>Congratulations to the prize winners and all those who worked hard towards this competition. </p> <p>The learning journey continues. Note to myself: make better notes, experiment more. </p>
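<p>The Bayesian optimisation mentioned above for the NN and GBT parameters could look roughly like this with optuna (one possible tool, not necessarily the script the author used; the sampled data and the search ranges are placeholders):</p> <pre><code>import optuna
import lightgbm as lgb
from sklearn.model_selection import cross_val_score

def objective(trial):
    params = {
        "num_leaves": trial.suggest_int("num_leaves", 31, 255),
        "learning_rate": trial.suggest_float("learning_rate", 0.01, 0.2, log=True),
        "min_child_weight": trial.suggest_int("min_child_weight", 1, 100),
        "subsample": trial.suggest_float("subsample", 0.5, 1.0),
        "colsample_bytree": trial.suggest_float("colsample_bytree", 0.5, 1.0),
    }
    model = lgb.LGBMClassifier(n_estimators=300, **params)
    # X_sample, y_sample: a manageable subsample of the click data (assumed)
    return cross_val_score(model, X_sample, y_sample, cv=3, scoring="roc_auc").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params)
</code></pre>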
TalkingData AdTracking Fraud Detection Challenge
1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TalkingData AdTracking Fraud Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>LSA features were computed on the original data :) </p> <p>Thank you ChinaBoy. All the features were computed on the full data. Negative down-sampling was done only on model training. </p> <p>Thank you, zr. Both of the two papers are good, but actually the second link was not what I wanted to put. I replaced it with the correct one. Sorry about this. </p> <p>Thanks to negative down-sampling, the many-features was not a so big problem to us, so we didn't do fine-grained feature selection. For example, when I came up with the LDA features, I added 100 new features to my model. Luckily, it worked very well, so I added all of the 100 features. These 100 features were used for the final submission model without any further testing. </p> <p>I haven't cared about which one of them contribute the most though ideally I might have had to remove useless features. </p> <p>We'd like to thank TalkingData and Kaggle for organizing this exciting competition. This competition gave us a fantastic opportunity to learn how to deal with very large table data. <br> <br> Here is our solution. <br> <br> <strong>Strategy:</strong> <br> Our solution heavily depends on negative down-sampling [1, 2], which means we use all positive examples (i.e., is_attributed == 1) and down-sampled negative examples on model training. We down-sampled negative examples such that their size becomes equal to the number of positive ones. It discards about 99.8% of negative examples, but we didn't see much performance deterioration when we tested with our initial features. Moreover, we could get better performance when creating a submission by bagging five predictors trained on five sampled datasets created from different random seeds. This technique allowed us to use hundreds of features while keeping LGB training time less than 30 minutes. <br> <br> <strong>Features:</strong> <br> First, we started from features from Kernels. On Thanks pranav84 and other Kagglers for sharing their awesome insights! We did feature engineerings using all the data examples instead of the down-sampled ones. <br> - five raw categorical features (ip, os, app, channel, device) <br> - time categorical features (day, hour) <br> - some count features <br> Then, we created a bunch of features in a brute-force way. For each combination of five raw categorical features (ip, os, app, channel, and device), we created the following click series-based feature sets (i.e., each feature set consists of 31 (=(2^5) - 1) features): <br> - click count within next one/six hours <br> - forward/backward click time delta <br> - average attributed ratio of past clicks <br> We didn't do feature selection. We just added all of them to our model. At that point, our LGB model's score was 0.9808. <br> <br> Next, we tried categorical feature embedding by using LDA/NMF/LSA. Here is the pseudo code to compute LDA topics of IPs related to app. 
(LDA is latent Dirichlet allocation)<br></p> <pre>apps_of_ip = {} for sample in data_samples: apps_of_ip.setdefault(sample['ip'], []).append(str(sample['app'])) ips = list(apps_of_ip.keys()) apps_as_sentence = [' '.join(apps_of_ip[ip]) for ip in ips] apps_as_matrix = CountTokenizer().fit_transform(apps_as_sentence) topics_of_ips = LDA(n_components=5).fit_transform(apps_as_matrix) </pre> <p>We computed this feature for all the 20 (=5*(5-1)) combinations of 5 raw features and set the topic size to 5. This ended up with 100 new features. We also computed similar features using NMF and PCA, in total 300 new features. 0.9821 with a single LGB. <br> <br> After that, we removed all raw categorical features except app since we supposed embedding features cover information available from them. Surprisingly, this minor change made our public LB score jump up from 0.9821 to 0.9828. Actually, we don't know what causes this significant score improvement. <br> Besides features mentioned here, we created higher dimensional LDA features and features that try to address the duplicate sample problem. These features somewhat improve our public LB score. <br> <br> <strong>Models:</strong> <br> We used day 7 &amp; 8 for training and day 9 for validation, and chose the best number of iterations of LGB. Then, we trained a model on day 7 &amp; 8 &amp; 9 with the obtained number of iterations for creating submission. After we finished feature engineering, flowlight's five-bagged LGB model reached 0.98333 on public LB (and 0.98420 on private LB), which was trained on 646 features. As far as we remember, the memory usage for training this model was less than 100GB (&lt;64GB will be possible with minor code modification). <br> I implemented a simple three layer NN model as some kernels do. It scored worse than LGB models by 0.0013 points with 0.005 down-sampling rate at first. Then, I realized it should be trained with more negative data samples, probably. However, we didn't afford to use so many examples because of the massive amount of our features (and my bad implementation, disk space, no GPU, close deadline, etc...). The final three-bagged NN model scored 0.98258 on public LB. <br> We made our final submission with a rank-based weighted averaging. It is composed of seven bagged LGB models and a single bagged NN It scored 0.98343 on public LB. <br> <br> <br> [1] <a href="http://quinonero.net/Publications/predicting-clicks-facebook.pdf">Practical Lessons from Predicting Clicks on Ads at Facebook</a> <br> [2] <a href="https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/41159.pdf">Ad Click Prediction: a View from the Trenches</a> <br> <br> EDIT: Added some extra explanations based on frequently asked questions in comments.</p> <p>Thank you. We did the former. We changed our training dataset for each training to get more diversity.</p> <p>Basically we did the following two steps. <br> 1. download past submissions from Kaggle <br> 2. Run the following code <br></p> <pre>submissions = [pd.read_csv(path) for path in downloaded_submission_paths] submissions_as_rank = [submission.rank() for submission in submissions] ensemble = np.mean(submissions_as_rank, axis=0) </pre> <p>no. Predicting more positives wasn't a problem for AUC, luckily.</p>
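<p>A runnable sklearn version of the LDA categorical-embedding pseudo code above might look like this (a sketch only: CountVectorizer and LatentDirichletAllocation are assumed stand-ins for the CountTokenizer/LDA of the pseudo code, and the dataframe/column names are placeholders):</p> <pre><code>import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def lda_embedding(df, key="ip", value="app", n_topics=5):
    """Topic features describing each `key` by the bag of `value`s it clicked on."""
    sentences = (df.groupby(key)[value]
                   .apply(lambda s: " ".join(s.astype(str))))
    counts = CountVectorizer(token_pattern=r"\S+").fit_transform(sentences)
    topics = LatentDirichletAllocation(n_components=n_topics).fit_transform(counts)
    cols = ["%s_%s_lda%d" % (key, value, i) for i in range(n_topics)]
    return pd.DataFrame(topics, index=sentences.index, columns=cols)

# usage: merge the topic columns back onto the click table
# train_df = train_df.join(lda_embedding(train_df, "ip", "app"), on="ip")
</code></pre>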
TalkingData AdTracking Fraud Detection Challenge
22nd Place Overview
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TalkingData AdTracking Fraud Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks... sorry about the wait - these are some pretty lightweight tips, most of which I could probably have safely shared long before the end :)</p> <p>Thanks to Kaggle &amp; TalkingData for another entertaining diversion :) Congrats to all winners and thanks to all who shared Kernels, tips and solution write-ups, they’re great to read.</p> <p>This is a really interesting data set and as I related in my <a href="https://www.kaggle.com/jtrotman/eda-talkingdata-temporal-click-count-plots/code#310213">story here</a>, click fraud is a really insidious problem that drives people to do really <a href="https://www.kaggle.com/c/talkingdata-adtracking-fraud-detection/discussion/54765">odd things</a> (thanks <a href="/yifanxie">@yifanxie</a> - amazing to actually see it!)</p> <h2>Overview</h2> <p>I used broadly similar features to everyone else, next click times and count features, and stuck to the route recommended by early leaders – using lightgbm and training on the full training set – it surprised me that my 32Gb machine could. After reading the great <a href="https://www.kaggle.com/c/talkingdata-adtracking-fraud-detection/discussion/55325">shared tips</a>, switching to creating a NumPy array upfront then populating that by loading feature columns was very efficient, with 33 features it barely even needed any swap space.</p> <p>I validated on the 2nd half of the last day of training for simplicity, just one validation set, then I’d save predictions in a dataframe and do groupby(‘hour’) and check AUC that way. I did pay more attention to the hours that matched the test set hours, but I thought other hours might lead to useful insights (but nothing noteworthy to report).</p> <p>My general process for one iteration was quite lightweight:</p> <ol> <li>an hour of feature engineering – save columns individually</li> <li>a 1.5-2 hour validation run</li> <li>a 3 hour full model build</li> </ol> <p>After setting this up, the only real manual effort is step 1, the validation run just determines # of trees (which was quite stable over feature sets, but sometimes repeated if the results were poor). The full model build is a one click process that builds a model, makes test set predictions with varying numbers of trees (to test assumptions about increased data set size &amp; scaling # of trees), then uploads the submissions via the highly recommended <a href="https://github.com/Kaggle/kaggle-api">Kaggle API</a>.</p> <p>I managed 13 iterations, never more than one in a day, so it was a bit of a slow waiting game, and a long way from optimal. Getting results in fewer iterations is down to some intuition and luck in selecting what works. 
There are a lot of features I tried that didn’t work, so my remaining features are mostly fairly baseline/simple things… (<a href="https://www.kaggle.com/c/talkingdata-adtracking-fraud-detection/discussion/56283">Matrix factorization of IP vs app log click counts</a> is the one thing I really wish I’d tried, it will summarize an IP’s affinity for apps and interact very well with the app and IP count features… Well done to <a href="/cpmpml">@cpmpml</a> &amp; other top teams :)</p> <h2>Entropy features</h2> <p>Whereas most people used nunique, I thought that might be noisy. With a click count histogram like e.g. counting os’s for an app = [12345, 5, 1, 1] it’s technically 4 unique os’s, but very low entropy (it’s more like one os).</p> <p>The original feature encodings made this easier: they were all 0..n, so it was simple to do it all in NumPy using <code>np.add.at()</code> to fill a 2D array with counts, then summarize like this:</p> <pre><code>from scipy.stats import entropy # e.g. a=’ip’ ; b=’app’ cc = np.zeros((int(df[a].max()+1), int(df[b].max()+1)), dtype=np.int32) np.add.at(cc, (df[a].values, df[b].values), 1) (cc&amp;gt;0).sum(0) # this is nunique for a (cc&amp;gt;0).sum(1) # this is nunique for b map(entropy, cc) # entropy of a over b map(entropy, cc.T) # entropy of b over a </code></pre> <p>All two way interactions were possible with 32Gb of RAM, and took only minutes to compute this way. I also combined device &amp; os into one new categorical (only 6561 unique values), and app &amp; channel (1518 unique values), which enabled some 3-way interactions like entropy of IP over (device, os).</p> <h2>Sessions</h2> <p>Four new features:</p> <ul> <li>sort by ip, dev, os</li> <li>define max gap e.g. 60 seconds</li> <li>every time there is a change in ip/device/os, or a gap in the clicks, assign categorical session ID</li> <li>for sessions, summarize: <ul><li>duration</li> <li>count</li> <li>click rate (duration/count)</li> <li>app entropy</li></ul></li> </ul> <p>I intended to add entropy over gaps, e.g. lots of similar gaps of 5-6 secs = low entropy = bots. Thought maybe click rate was enough.</p> <h2>Naive Count Features</h2> <p>Normally, count features group on one or more columns and count the actual occurrences. Inspired by Naive Bayes classifiers: another view on the data is to assume the fields are independent, and do:</p> <ul> <li>univariate count</li> <li>divide counts by the number of rows to get a probability of seeing the value</li> <li>log() the probability</li> <li>sum different combinations of these columns similarly to normal feature interactions (i.e. -, +, /, *).</li> </ul> <p>Mathematically, this is the same as a geometric mean probability of seeing the record, under the assumption the columns are independent. Again, extremely fast to compute using NumPy alone.</p> <p>The combinations &amp; loading code I used:</p> <pre><code>def loadraw(name, dtype): return np.fromfile(name, dtype=dtype) def log_p_feat(t, col): # t is ‘train’ or ‘test’ return loadraw('../feats/%s_log_p_%s'%(t,col), np.float32) x[:,a] = log_p_feat(t, 'ip') + log_p_feat(t, 'dev') + log_p_feat(t, 'os') x[:,a+1] = x[:,a] + log_p_feat(t, 'app') x[:,a+2] = x[:,a] + log_p_feat(t, 'chan') x[:,a+3] = x[:,a] + log_p_feat(t, 'app') + log_p_feat(t, 'chan') </code></pre> <p>Adding these features in with normal 2D/3D counts might pick up on interesting things. 
With these ‘naive’ counts: rare values overlap - a low (log) probability row might be rare IP with common OS <strong><em>or</em></strong> a rare OS with common IP. (Although IP was much higher cardinality, so probably dominates the log sum. A weighted mix might be better.)</p> <p>This can produce a very large number of unique values, so lightgbm value binning might come to the rescue here and avoid over-fitting. I’m not 100% convinced this was a good idea… but it did lead to a good single model (0.9811 public, 0.9825 private) where these naive count features had high feature importances.</p> <h2>Bit Shifting</h2> <p>Small tip: by looking at the maximum values for each column, you can work out how many bits are required to store each, and happily, the five categoricals all fit in a single 64 bit integer. You can pack them like this:</p> <pre><code>def do_pack(df):
    a = 0
    a += df.app.values.astype(np.uint64)
    a += df.os.values.astype(np.uint64)&lt;&lt;10
    a += df.channel.values.astype(np.uint64)&lt;&lt;20
    a += df.ip.values.astype(np.uint64)&lt;&lt;30
    a += df.device.values.astype(np.uint64)&lt;&lt;50
    return a
</code></pre> <p>They can be unpacked (these work on values or NumPy arrays):</p> <pre><code>def fapp(v): return v&amp;0x3ff
def fos(v): return (v&gt;&gt;10)&amp;0x3ff
def fchan(v): return (v&gt;&gt;20)&amp;0x3ff
def fip(v): return (v&gt;&gt;30)&amp;0xfffff
def fdev(v): return (v&gt;&gt;50)&amp;0xfffff
</code></pre> <p>This is vectorized, so runs extremely quickly, and the result is like a hash, but without collisions. You can use the result to do groupby() operations - more efficiently in both time &amp; RAM. You can also ignore fields by setting all their bits to one, e.g. to ignore device <code>a |= 0xfffffl&lt;&lt;50</code>. (Because 0 was a valid value for each field.) Using <code>np.save()</code> (or <code>a.tofile</code> and <code>np.fromfile</code>) on the 64 bit array you could save the entire 4 days of train &amp; full test into 1.9Gb on disk. Times could be similarly packed into 32 bits, about 900Mb… With these copies of the data I actually managed to do some feature exploration in idle moments (in Java) on an old 8Gb MacBook!</p> <h2>Categoricals</h2> <p>Grouping on all five combined fields, there were many different series, some with 10k-20k appearances, but a long tail of shorter series. Instead of separate count &amp; cumcount features I tried a categorical feature to mark where the row is in the rarer/shorter series. Using max_bin of 255, there are 8 bits to play with: 4 bits to mark the length of the series (1..15) and 4 bits to mark where in the series the record appeared (0..14).</p> <p>Where v is all five fields combined as above, use groupby(v).size() to get df.series_size and groupby(v).cumcount() to get df.series_cumcount, then:</p> <pre><code>idx = df.series_size.values&lt;=15
cat = np.where(idx, df.series_size.values&lt;&lt;4, 0)
cat |= np.where(idx, df.series_cumcount.values, 0)
</code></pre> <p>So this is one feature that encodes the length of the ‘series’ of [ip,dev,os,app,chan] <strong>and</strong> the position within the series.
Tree splits can then address subsets like:</p> <pre><code>if cat == 0x32 if 3_element_series and this_record_is_last_in_series if cat == 0x20 if 2_element_series and this_record_is_first_in_series </code></pre> <p>To make the latest versions of lightgbm do these kind of splits you need to set the <code>max_cat_to_onehot</code> parameter correctly, in this case to 256 or more – but beware this applies to all categorical features, so for example channel may be used for one hot style splits too – it depends on how many values are seen in each column at dataset construction time (lightgbm samples data to construct the bins). All model parameters/regularizations are trade-offs of some kind.</p> <p>Limited time and iterations mean I can’t be sure this didn’t overfit, particularly for series that span train &amp; test times, and things like “length 14 series, element 4” which is not likely to mean anything (the hope is, this could uncover some really quirky leak!) Perhaps addressing only shorter series would be better, or chunking the series by time. I relied on the model to tell me which were more important - a quick inspection of models shows low values like 0x20, 0x21 were used the most.</p> <h2>Leaderboard</h2> <p>From <a href="https://www.kaggle.com/c/talkingdata-adtracking-fraud-detection/discussion/56524">https://www.kaggle.com/c/talkingdata-adtracking-fraud-detection/discussion/56524</a></p> <p><img src="https://kaggle2.blob.core.windows.net/forum-message-attachments/326995/9398/jt_public_lb.png" alt="JT public LB"></p> <p>For me, validation scores stayed very consistent, but leaderboard scores were mysteriously lower. It was only eventually removing some features that fixed this: all of the count features that used the corrected Chinese timezone day. (These were: <code>ip_day_count</code>, <code>app_channel_day_count</code>, <code>device_os_day_count</code>.) I’m still not sure what caused that, and it’s puzzling that other’s made these work, but removing them on the penultimate day finally achieved validation / leaderboard correlation, getting me to public #105.</p> <p>As #105 slid down into the #200’s (I need finer resolution on that graph!) I gave in and as a backup plan rank averaged my main sub with Dirk’s which was worth 2e-5 of AUC and one spot on the private LB!</p> <pre><code> private public model single lgbm 0.9825639 0.9811706 v12 single lgbm 0.9825211 0.9810671 v13 selection 1 0.9828068 0.9813324 rank average v12+v13 (Private #23) selection 2 0.9828279 0.9816611 rank average v12+v13 + Dirk (Private #22) </code></pre> <p>Thanks Yifan - my background is in embedded software engineering, so I have a habit of trying to cram things into as few bits as possible :)</p> <p>The use of entropy is something I discovered in my first Kaggle competition, on a similar theme to this one: <a href="https://www.kaggle.com/c/facebook-recruiting-iv-human-or-bot">Facebook Recruiting IV: Human or Robot?</a> in the <a href="https://www.kaggle.com/c/facebook-recruiting-iv-human-or-bot/discussion/14628">secret sauce thread</a> post by <a href="/smallyellowduck">@smallyellowduck</a> - I remember being blown away by that solution at the time - and thinking that was a lot of data to deal with... in another few years this dataset might start to look small, and if Kaggle keep updating Kernel capability roughly in line with Moore's Law, tips like this could have a short half life!</p>
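<p>To make the Sessions features above concrete, here is a minimal pandas sketch (not the author's original code). It assumes a dataframe <code>df</code> with <code>ip</code>, <code>device</code>, <code>os</code>, <code>app</code> and a datetime <code>click_time</code> column; the 60-second gap and the new column names are illustrative.</p> <pre><code>import pandas as pd
from scipy.stats import entropy

MAX_GAP = 60  # seconds; a larger gap starts a new session

df = df.sort_values(['ip', 'device', 'os', 'click_time'])
gap = df.groupby(['ip', 'device', 'os'])['click_time'].diff().dt.total_seconds()
# new session whenever the (ip, device, os) group changes or the gap exceeds MAX_GAP
df['session_id'] = (gap.isna() | (gap &gt; MAX_GAP)).cumsum()

sess = df.groupby('session_id').agg(
    sess_count=('app', 'size'),
    sess_duration=('click_time', lambda s: (s.max() - s.min()).total_seconds()),
    sess_app_entropy=('app', lambda s: entropy(s.value_counts().values)),
).reset_index()
sess['sess_click_rate'] = sess['sess_duration'] / sess['sess_count']

df = df.merge(sess, on='session_id', how='left')
</code></pre> <p>Entropy over the gaps themselves (many near-identical 5-6 second gaps suggesting a bot) would slot into the same <code>agg()</code> call.</p>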
TalkingData AdTracking Fraud Detection Challenge
28th place, A 0.0006x boosting trend feature and non-overfitting target encoding (attributed rates)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TalkingData AdTracking Fraud Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks a lot for your encouraging! And I also learnt much from you. You're really awesome.</p> <p>Wow! Feel excited to be 翻牌 by Da Lao.</p> <p>At first, I want to say Thanks to all you nice and friendly kagglers who've shared their enlightening ideas. This is my first competition on Kaggle and I find people here are really willing to share and help. Really learnt a lot here. I'm so touched (* ̄︶ ̄*)</p> <p>Although 28th is not a very good place, I do have some useful features to share which I didn't use at last. Why didn't I use them? Because I am such a bonehead to trust the deceptive public LB rather than my reliable local CV. Dear CV, I'm so sorry that I got you wrong. ( ´・ω・)ノ(._.`)</p> <h1>The first one: <strong>About the overfitting target encoding</strong></h1> <p>First, my special thanks to @NanoMathias and his Notebook <a href="https://www.kaggle.com/nanomathias/feature-engineering-importance-testing">https://www.kaggle.com/nanomathias/feature-engineering-importance-testing</a> which enlightened me a lot. As many have commented, I encounter the overfitting problem when I tried the target encoding (attributed rate) features.</p> <p>Intuitively, the historical attributed rate is helpful as it depicts the quality of an app or the habit of a user and the like in a way or another. But afterwards I find more than 25% (143786 records) downloads came from the users (ip-os-device) who clicked only once and downloaded only once. And much more users are there who clicked once but did not download. I suspect the overfitting might result from these, and here are some tricks I come up with to cope with it:</p> <ol> <li><p>Use the data from the previous day only. This cause data on day 6 to have all NaN but this helps any way. For instance, when you calculate the attributed rate for day 8, use data on day 7.</p></li> <li><p>Use all data before the current day. This results in a more precise historical attributed rate and is better than 1). Putting it clearly, use the data from day 6 to calculate the attributed rate for day 7 and use data from day 6 and day 7 to calculate the attributed rate for day 8 and day 6,7,8 for day 9 and day 6,7,8,9 for day 10.</p></li> <li><p>Use all data excluding the current day. Intuitively, this gives the closet results to the original one which use all data to calculate the "historical" attributed rate and this did boost most. You use day 7,8,9 to calculate for day 6 and day 6,7,9 for day 8 and so on.</p></li> </ol> <p>We are approaching the "real" attributed rates piecemeal. By now, the third way boosted me most and a further step will take me into overfitting. I don't know if there exist a not that radical step to improve over it with no overfitting based on 3).</p> <p>I use the attributed rate on the following 6 combinations: ip-app, ip-os, ip-os-device, ip-app-os-device, app-channel, app (Not a real combination). Besides, smoothing helps a little. One way is to use the "log(count)" given in the Notebook abovementioned. Another way is to use Laplacian smoothing. However, I didn't find a good parameter for Laplacian smoothing to beat the "log(count)"-fashion. 
One last interesting thing, Combining the unsmoothed with the smoothed ones improves a little further (0.0001 over using only the unsmoothed ones)</p> <p>Laplacian smoothing:</p> <p>(downloads + m * average_rate) / (counts + m)</p> <p>Need to pick a good "m". The average_rate is the mean over the attributed rates you got for a combination like the ones I mentioned above.</p> <p>Sorry I don't know how to insert a formula.</p> <h1>The second one: <strong>Trend feature is a good thing</strong></h1> <p>This is a serendipity I got when one day I happened to glimpse my cache files used to calculate daily attributed rates (Right, the intermediate results for the target encoding features). I find some apps' daily attributed rates are increasing or decreasing continuously. So I gave it a shot. I use the downloads (Note that it's the number of downloads not clicks nor attributed rates) on the previous day to divide that of the day before the previous day. For instance, </p> <pre><code> trend_day_10 = downloads_on_day_9 / downloads_on_day_8 </code></pre> <p>I calculated this on only 3 combinations: ip-app-os-device, ip-os-device, app. And it boosts me from 0.9821 to nearly 0.9828 on private LB, which is rather helpful than my hotchpotch-like blending of all my weird XGBs and LGBs. I didn't use it at last because it drops 0.0001 on public LB and I am such a bonehead to trust the deceptive public LB rather than my reliable local CV. Although this results in all NaN on day 6 and day 7 (as you don't have data from the two previous days), it did help a lot. Besides, I also come up with another way to represent the trend but had no time to try: 1. get the differences of the series of downloads on day[6,7,8,9]. 2. get the sign (representing the increase or decrease) of the differences. 3. Encode it. For instance:</p> <p>Downloads on day 6,7,8,9: [1, 2, 3, 2]</p> <p>After step 1: [1, 1, -1]</p> <p>After step 2: [+, +, -]</p> <p>And then you assign different integers to different patterns. I don't know if this work.</p> <p>Other interesting features (helps a little, about 0.0001):</p> <ol> <li><p>ip-app-os-device counts / ip-os-device counts, which describes the preference for apps of a user.</p></li> <li><p>ip-app / ip.</p></li> </ol> <p>At last, the competition gave me a really profound lesson: TRUST YOUR LOCAL CV :(</p> <p>English is not my native language so if I didn't put it clearly and make you mixed up, feel free to comment!! :)</p> <p>Thanks to all you nice kagglers!! :)</p> <p>You're right. I did miss this. Actually when I was extracting the features I got warnings about this, but I thought they are not so important and my unsmoothed ones can compensate for this so I just left it there. In a nutshell, I'm too lazy to change my code XD. I thought your +1 is more reasonable. Thanks for your pointing out.</p> <p>Wow 0.003! That's a big improvement.</p> <p>Yes. At last I fed both the log-fashion-smoothed and unsmoothed rates to my model. Smoothing helps more when I use less data (data only on the previous days), but helps less when I use all the days excluding the current day.</p> <p>It's also weird for me. Thanks also for your asking!</p> <p>Yes! NaN for day 6 and day 7. Filling with 0 or something else may help further. At least it helps for the attributed rate features.</p> <p>I'm afraid the data within one hour is not sufficient to cover enough users and will cause sparsity, but it will work I presume. Looking forward to your discovery :)</p> <p>You are likely to be right. 
I didn't inspect the distribution of this feature carefully; I just extracted the features and fed them into my model, and fortunately it worked.</p>
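<p>A minimal pandas sketch of the third target-encoding variant (all days excluding the current one), combined with the Laplacian smoothing above. It assumes a <code>train</code> dataframe with <code>ip</code>, <code>app</code>, <code>day</code> and <code>is_attributed</code> columns; the chosen combination and the smoothing strength <code>m</code> are illustrative, not taken from the original code.</p> <pre><code>import pandas as pd

keys = ['ip', 'app']

# per-day downloads and clicks for this combination
daily = (train.groupby(keys + ['day'])['is_attributed']
              .agg(dl='sum', clicks='count')
              .reset_index())

# totals over all days, then remove the current day's own contribution
totals = daily.groupby(keys)[['dl', 'clicks']].sum().add_prefix('tot_').reset_index()
daily = daily.merge(totals, on=keys, how='left')

m = 100.0                                   # hypothetical smoothing strength
avg_rate = train['is_attributed'].mean()
daily['rate_loo'] = ((daily['tot_dl'] - daily['dl']) + m * avg_rate) \
                    / ((daily['tot_clicks'] - daily['clicks']) + m)

train = train.merge(daily[keys + ['day', 'rate_loo']], on=keys + ['day'], how='left')
</code></pre> <p>For the test day, presumably the plain totals over all labelled days would be used, since the test day contributes no labels of its own.</p>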
TalkingData AdTracking Fraud Detection Challenge
34th place, thoughts on CV and Target Encoding in R
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TalkingData AdTracking Fraud Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi @ChinaBoy, Not quite, LGB2 contains all LGB1 features that are non categorical + features that are BASED ON the target encoding, not the target encoding itself. Features like Avg(is_attributed) by groups and those which are based on USER as i stated: e.g: how many times did user (ip, os, device) ever download any app</p> <p>how many times did he download the current app</p> <p>how many unique apps did he click on.</p> <p>Hi @Snorlax &amp; @Terracota I hope this small snippet helps (it is in R) I couldn't run in in kernel so i couldn't share </p> <pre><code> ##Load my LGB model #1 AUC 0.906 model&lt;- lgb.load("e:/Project/Latest Files/Models/LGBM1.mod") #Import Training + Test Supp + Test data including all their features (after preprocessing) TrainData2&lt;-fread(file ="f:/AllDataTrainCVTest.csv"); TrainData2[,click_time:=fastPOSIXct(click_time,tz="UTC")] #Which columns i need for LGB model # 1 to predict - keep only these columns FeatureCols&lt;- c("app", "device", "os", "channel", "hour", "ipOnly_dayCount", "ip_app_dayCount", "ipOnly_day_hourCount", "ipOnly_Count", "ip_app_Count", "User_UniqueApps", "User_Day_UniqueApps", "ip_UniqueUsers", "ip_UniqueApps", "ip_Day_UniqueApps", "ip_UniqueDevices", "ip_UniqueOs", "App_UniqueChannel", "User_app_NextClick", "User_app_NextClick2", "User_Count", "User_app_Count", "User_dayHourCount" ) cols&lt;-FeatureCols TrainData2&lt;-TrainData2[,..cols] #Convert to matrix and predict TrainData2&lt;-as.matrix(TrainData2);gc() LGBPrediction2&lt;-data.table(LGBPrediction2=predict(model, data =TrainData2) ) ;gc() fwrite(LGBPrediction2,"f:/LGBPrediction2.csv") #Add predictions as additional column to the data TrainData2&lt;-cbind.data.frame(TrainData2,LGBPrediction2) #Add columns that state prediction as 1 or 0 based on different thresholds of prediction. 
#The threshold were chosen carefuly after examining confusion matrices and sensitivity values #for different values, tried to minimize false positives TrainData2[,LGBPred2IntOnlyModel99:=ifelse(LGBPrediction2==0.99,1,0)] TrainData2[,LGBPred2IntOnlyModel997:=ifelse(LGBPrediction2==0.997,1,0)] TrainData2[,LGBPred2IntOnlyModel999:=ifelse(LGBPrediction2==0.999,1,0)] #Add different features a few times each, so can check for each one, which threshold worked best TrainData2[,User_appDownloadsBasedOnModel99:=sum(LGBPred2IntOnlyModel99),by=.(ip,device,os,app)] TrainData2[,User_appDownloadsBasedOnModel997:=sum(LGBPred2IntOnlyModel997),by=.(ip,device,os,app)] TrainData2[,User_appDownloadsBasedOnModel999:=sum(LGBPred2IntOnlyModel999),by=.(ip,device,os,app)] TrainData2[,User_appDownloadsBeforeBasedOnModel99:=cumsum(LGBPred2IntOnlyModel99),by=.(ip,device,os,app)] TrainData2[,User_appDownloadsBeforeBasedOnModel997:=cumsum(LGBPred2IntOnlyModel997),by=.(ip,device,os,app)] TrainData2[,User_appDownloadsBeforeBasedOnModel999:=cumsum(LGBPred2IntOnlyModel999),by=.(ip,device,os,app)] TrainData2[,User_AppsDownloadedBasedOnModel99:=uniqueN(app),by=.(ip,device,os,LGBPred2IntOnlyModel99==1)] TrainData2[,User_AppsDownloadedBasedOnModel997:=uniqueN(app),by=.(ip,device,os,LGBPred2IntOnlyModel997==1)] TrainData2[,User_AppsDownloadedBasedOnModel999:=uniqueN(app),by=.(ip,device,os,LGBPred2IntOnlyModel999==1)] TrainData2[,User_DivAppsDownloadedandClickedBasedOnModel99:=User_AppsDownloadedBasedOnModel99 / User_UniqueApps] TrainData2[,User_DivAppsDownloadedandClickedBasedOnModel997:=User_AppsDownloadedBasedOnModel997 / User_UniqueApps] TrainData2[,User_DivAppsDownloadedandClickedBasedOnModel999:=User_AppsDownloadedBasedOnModel999 / User_UniqueApps] #Run model a few times with different threshold features to test AUC RunTest(c("User_appDownloadsBasedOnModel99", "User_appDownloadsBeforeBasedOnModel99","User_AppsDownloadedBasedOnModel99","User_DivAppsDownloadedandClickedBasedOnModel99"),"LGBM2UserDownloads99") RunTest(c("User_appDownloadsBasedOnModel997", "User_appDownloadsBeforeBasedOnModel997","User_AppsDownloadedBasedOnModel997","User_DivAppsDownloadedandClickedBasedOnModel997"),"LGBM2UserDownloads997") RunTest(c("User_appDownloadsBasedOnModel999", "User_appDownloadsBeforeBasedOnModel999","User_AppsDownloadedBasedOnModel999","User_DivAppsDownloadedandClickedBasedOnModel999"),"LGBM2UserDownloads999") </code></pre> <p>Thanks @CPMP. You were extremely helpful during the competition, any where i go, any kernel or discussion, you are always there with your insight and experience. Thanks! </p> <p>Do you plan on taking 3 days break to recover your sleep and start the new Avito? =D</p> <p>Hi! I wanted to share some of my findings during this competition. It's my first ever competition or ML task, so much of what I tried was really bad, but it taught me good intuition making these horrible mistakes and have a better "radar" for possible leaks and over fitting. 
</p> <p>I learned a lot on R's data.table, LGBM, XGBoost, I experimented with keras and H2O NNs (didn't get good results) and also experimented with AWS's VMs.</p> <p>My public lb is 65 and private is 34, i think this increase came from good generalized CV and features, after trying many things and participating in many discussions on the matter.</p> <p>I ended up using 2 LGBM models (second one uses the predictions of the first, details in item #3), with main features including: Count clicks by groups, UniqueN by groups, Next click by groups, and the target encoding based features detailed in item #3)</p> <p>I won't be long on all the mistakes i did, i will focus on the things I did that gave the significant boosts along the way and seemed to have generalized well. I hope this will be of any value to any of you.</p> <ol> <li><p>Dropping any concerns for over fitting my model to day 9 hour 4, although the difference of CV and LB score was lowest, it obviously overfit. After some discussions on the matter, I ended up training on days 7 and 8, CV with day 9, and then retraining on same rounds on entire data. **I read @CPMP's post and learned that he used 1.2 multiplier on the Num_Rounds with the full training set, will definitely try it in the future!!</p></li> <li><p>I calculated all of my feature calculations on train + test supplement data. This is after trying to run features only where hour exists in the test set, or only on the train\ regular test set.</p></li> <li><p>My last boost up from 0.9806 to 0.9816 was using an interesting approach to target encoding i encountered somewhere in the discussions (I don't remember who suggested the approach so sorry for not giving the credit). The method involved taking my best lgb model and predict on all train and test supplement data. Then using the predicitions to do target encoding on the entire data and using it to create features involving the "is_attributed" field. After some experimentation, i ended up with this: I took the prediction and converted it 3 times to a boolean 1/0 "is_attributed" value.</p> <ul><li>Once if pred &gt;= 0.99 then 1 otherwise 0</li> <li>then pred &gt;= 0.997 then 1 otherwise 0</li> <li>Then pred &gt;= 0.999 then 1 otherwise 0</li></ul></li> </ol> <p>I did this because i wanted better sensitivity for the model, i am ok missing some 1's but don't want many 0's to be mistakenly predicted as 1's. I then calculated 2 sets of features 3 times, once for each prediction result set. I had mean target features by groups (past is_attributed = 1 divided by count of clicks), and another set of features that included, amongst other features:</p> <ul> <li><p>how many times did user (ip, os, device) ever download any app</p></li> <li><p>how many times did he download the current app</p></li> <li><p>how many unique apps did he click on.</p></li> </ul> <p>After running some test runs of the new features, i found the highest increase in score with mean target when using the &gt;=0.997 prediction, and the the other set of features maxing with the &gt;=0.99 prediction. I then used both of the sets in a new lgb model that uses only the Quantity (non categorical) features from the first lgb model + the new target encoded features.</p> <p>4, I used bayesian optimization to optimize hyper parameters for the 2 models separately. Unfortunately, even after testing for different methods of optimization (GP Upper bound, expected probability etc.) with different Kappa\Epsilon results. 
I did not find it running differently or optimizing better in any way; it was basically running like a random search. I still need to learn more about it, but taking the best hyperparameters it found did give me a decent boost in score.</p> <p>Thanks for reading and thanks for being such an awesome community!! And of course to all the awesome Kagglers who assisted and shared their wisdom during the competition. This was quite an intense 2 months =)</p> <p>Thanks <a href="/snorlax">@snorlax</a>. My R code in this competition is a very big mess, nothing you can just run and expect a result. If you're looking for a specific part I can gladly copy-paste it for you </p>
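<p>For readers who don't use R, the core of the prediction-based target encoding above looks roughly like this in pandas (column names such as <code>lgb1_pred</code> are hypothetical, and only the 0.997 threshold is shown):</p> <pre><code>import pandas as pd

# all_data holds train + test_supplement with the first LGB model's predictions
all_data['pred_dl'] = (all_data['lgb1_pred'] &gt;= 0.997).astype('uint8')

user = ['ip', 'device', 'os']
# how many times this user ever "downloaded" (pseudo-label) any app / the current app
all_data['user_downloads'] = all_data.groupby(user)['pred_dl'].transform('sum')
all_data['user_app_downloads'] = all_data.groupby(user + ['app'])['pred_dl'].transform('sum')
# pseudo attributed rate by (app, channel): pseudo downloads divided by clicks
all_data['app_chan_rate'] = all_data.groupby(['app', 'channel'])['pred_dl'].transform('mean')
</code></pre>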
TalkingData AdTracking Fraud Detection Challenge
560th Solution :)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TalkingData AdTracking Fraud Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>In my understanding, use oof is about creating kfold train file for the second-level training.</p> <p><a href="https://github.com/Rocuku/kaggle-talkingdata/blob/master/src/kfold.py">Here</a> is the code I do kfold train and create kfold train file. Am I having any misunderstanding about oof stacking?</p> <p>I don't get medal this time, but I am learned a lot from this competition.</p> <p>In the final version, I used 22 features include nextClick and prevClick. My features all come from kernels and discussions. Thanks for every kagglers who share these nice works. </p> <p>Here is the summary of my approach:</p> <ol> <li>use 99% train data (random 1% validation set for early stopping), get 0.9802LB by LightGBM and 0.9796LB by XGBoost</li> <li>add test_supplement to generate features above, get 0.9804LB by LightGBM and 0.9800LB by XGBoost</li> <li>run 10-fold CV, get both 0.9806LB by LightGBM and XGBoost</li> <li>use RF as the stacker, stacking two 0.9806LB models, get 0.9810LB</li> </ol> <p><a href="https://github.com/Rocuku/kaggle-talkingdata">Here</a> is my pipeline.</p> <p>I really learned a lot from public kernels. It's a pity losing my medal at the last moment, but I‘m still glad I join this competition. </p> <p>See you in next competition:)</p> <p>Thanks! <br> Actually, I am trying to figure out why Xgboost get the same score as LightGBM just after I run 10 fold CV.</p> <p>Actually, there is a nice stack code in kernels. My code is based on this. <br> <a href="https://www.kaggle.com/aharless/simple-linear-stacking-lb-9730">https://www.kaggle.com/aharless/simple-linear-stacking-lb-9730</a></p> <p>Hope so, Thanks! :)</p> <p>Thanks!</p> <p>Thanks! ;)</p> <p>Quadro M4000 and 128G RAM:)</p>
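<p>A minimal sketch of the out-of-fold stacking described above, with LightGBM and XGBoost as base models and a random forest as the stacker. <code>X</code>, <code>y</code> and <code>X_test</code> are assumed to be ready-made NumPy feature matrices, and the model parameters are placeholders rather than the ones in the linked pipeline:</p> <pre><code>import numpy as np
import lightgbm as lgb
import xgboost as xgb
from sklearn.model_selection import StratifiedKFold
from sklearn.ensemble import RandomForestClassifier

base_models = [lgb.LGBMClassifier(n_estimators=500),
               xgb.XGBClassifier(n_estimators=500)]

NFOLDS = 10
oof = np.zeros((len(X), len(base_models)))            # out-of-fold train predictions
test_meta = np.zeros((len(X_test), len(base_models)))

skf = StratifiedKFold(n_splits=NFOLDS, shuffle=True, random_state=42)
for trn_idx, val_idx in skf.split(X, y):
    for m, model in enumerate(base_models):
        model.fit(X[trn_idx], y[trn_idx])
        oof[val_idx, m] = model.predict_proba(X[val_idx])[:, 1]
        test_meta[:, m] += model.predict_proba(X_test)[:, 1] / NFOLDS

# second level: a random forest stacker trained on the out-of-fold predictions
stacker = RandomForestClassifier(n_estimators=200, n_jobs=-1)
stacker.fit(oof, y)
final_pred = stacker.predict_proba(test_meta)[:, 1]
</code></pre>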
TalkingData AdTracking Fraud Detection Challenge
5th place solution (0.9836 with pure feature engineering)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TalkingData AdTracking Fraud Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I used feature importances to get the effective subset of the specific features group. For example, there are 180 patterns of "nunique_counts_ratio", but I didn't use all of them because if I use all of them training time becomes very long but the gain is small. <br> In contrast, I didn't use feature_importance but used valid score to check if the specific features group is effective or not. For example, I made "lagged_ca_diff_mean", which means the average of difference between click_time and attributed_time in the previous day. Actually some of them have high feature_importance but the valid score doesn't improve, so I threw away all of them. I think it's very important to use both valid score and feature importances when selecting features.</p> <p>Thanks Danijel, now you are grandmaster. :) I think I can't improve my model to 0.9836 without the discussion with you. I'm really looking forward to see you in another competition, and hopefully I want to see you in the real world :)</p> <p>Ah I completely forgot this. this is the rolling mean of timedeltas of 10 samples before the click. </p> <p>I did nothing special, just used .npy file to store features and load it and concat them, which took much time. but I didn't want to use DataFrame to avoid memory explodes...</p> <p>No, It's impossible :( It means, For example, time_diff_k_for has 2^5 -1 = 31 patterns because there are (ip, os, device, app, channel). and nunique_counts_ratio has 180 patterns. I added all of them and select by feature_importances. I didn't do accurate feature selection, did very rough one.</p> <p>I used n1-megamem-96 instance, which have 1.4TB RAM. It's not so expensive because I used preemptive instance.</p> <p>for example, if specific ip's click log is like this, <br> 00:00:00 <br> 00:00:05 <br> 00:00:10 <br> 00:00:15 <br> 00:00:23 <br> 00:00:30 <br> 00:00:35 <br> they have 5 sec timedeltas 4 times. so, tdf_top_counts_ratio can be caluclated by this. <br> 4/(7-1) = 0.67 <br> This is tdf_top_counts_ratio. </p> <p>Thanks pocket! I'm very happy to team-up with you, I learned a lot from you :) I really hope you become grandmaster :)</p> <p>Thank you my briliant team, all teams who competed with us, and all people in the competition. <br> and Congrats Danijel, who became grandmaster in this competition! <br><br> It's my first Kaggle competition and I'm really happy to get 5th in this competition. <br><br> My model is based on simple LightGBM, with lotta feature engineering. I made 2 models, one with 561 features, the other with 695 features. but increasing the number of features from 561 to 695 gave very small score up, so I'll explain only about 561 features model. <br><br> I posted the '9836_features.txt', which shows the feature name and the importance(num_split). <br></p> <h1>Features (only effective ones)</h1> <p>actually I made more than 30 types of features. Here I show the 6 types which are effective.</p> <h2>1. time_diff_k_for, time_diff_k_back</h2> <p>time_diff_1_for means the click time difference with next click, <br><br> time_diff_1_back means the click time difference with previous click. 
<br><br> time_diff_2_for means the click time difference with next next click. <br><br> when k = 1, 2, it is very effective.<br></p> <h2>2. nunique_counts_ratio, 3. top_counts_ratio, 4. top10_hoge</h2> <p>For example, if specific ip has 100 samples and the distribution of the device is <br><br> 2(30 samples), 3(25 samples), 5(20 samples), 7(15 samples), 8(10 samples), <br><br> nunique_counts is 5, nunique_counts_ratio is 0.02, top_counts is 30, top_counts_ratio is 0.3, <br><br> top2_counts is 25, top2_counts_ratio is 0.25, top3_counts is 15, top3_counts_ratio is 0.15. <br><br> top_device is 2, top2_device is 3, top3_device is 5, top4_device is 7, top5_device is 8. <br></p> <h2>5. tdf_top_counts_ratio</h2> <p>sometimes, there are ip or (ip, os, device) that has always similar timedeltas. for example, if specific ip's click log is like this, <br><br> 00:00:00 <br><br> 00:00:05 <br><br> 00:00:10 <br><br> 00:00:15 <br><br> they always have 5 sec timedeltas. <br><br> so, I got top_counts, top2_counts, top3_counts from timedeltas.<br></p> <h2>6. lagged_dl_counts_ratio</h2> <p>lagged target encoding. <br><br> in day1, this features are missing. <br><br> for day2, I used target mean in day1. <br><br> for day3, I used target mean in day2. <br><br> for day4, I used target mean in day3.<br></p> <h1>Adding Features &amp; Feature selection</h1> <p>I tried many patterns and select features using feature_importances, which were really time consuming. I made more than 7000 features, but most of the experiments failed.<br></p> <h1>Ensemble</h1> <p>As <a href="https://www.kaggle.com/c/talkingdata-adtracking-fraud-detection/discussion/56319" target="_blank">https://www.kaggle.com/c/talkingdata-adtracking-fraud-detection/discussion/56319</a> says, Michael's blend NN improved the score and finally got 0.9840 :)</p> <h1>Comments</h1> <p>I tried very hard to deal with duplicate problem, but finally couldn't find the leak. <br><br> I was very sad to see this. We could have won this competition if we had noticed it… <br><br> <a href="https://www.kaggle.com/c/talkingdata-adtracking-fraud-detection/discussion/56268" target="_blank">https://www.kaggle.com/c/talkingdata-adtracking-fraud-detection/discussion/56268</a> <br><br> Anyway I really enjoyed my first kaggle competition, I'll continue kaggle and want to win the next competition. <br></p> <h3>Thanks all, see you again!</h3> <p>Thanks CPMP, your information in the discussion/kernel were really valuable and I was always impressed with you :)</p> <p>This is my parameter. params = { 'boosting_type': 'gbdt', 'objective': 'binary', 'tree_learner': 'serial', 'metric': 'binary_logloss', 'learning_rate': 0.05, 'max_bin': 255, 'num_leaves': 63, 'max_depth': -1, 'min_data_in_leaf': 1000, 'feature_fraction': 0.7, 'bagging_freq': 1, 'bagging_fraction': 0.7, 'lambda_l1': 1, 'lambda_l2': 1, 'sigmoid': 1, 'verbosity': 0, }</p> <p>It doesn't give bad effect for me. If there is no sample or no positive sample in previous day, I set the value 0. so, there are no NaN samples in day2, day3, day4 and no problem happened. </p>
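<p>A minimal pandas sketch of the time_diff_k features for one of the possible groupings; the grouping and the epoch-seconds column <code>ts</code> are illustrative, not the exact implementation:</p> <pre><code>import pandas as pd

group = ['ip', 'os', 'device']
df = df.sort_values(group + ['ts'])

for k in (1, 2):
    g = df.groupby(group)['ts']
    df[f'time_diff_{k}_for'] = g.shift(-k) - df['ts']    # delta to the k-th next click
    df[f'time_diff_{k}_back'] = df['ts'] - g.shift(k)    # delta to the k-th previous click
</code></pre>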
TalkingData AdTracking Fraud Detection Challenge
8'th place solution. [ods.ai] blenders in game )
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TalkingData AdTracking Fraud Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>At first, as usual, I want to say Thank you to all kagglers who share their approaches and thoughts during this and other competitions. Despite some annoying events (unfortunately such events can happen in any competition) kaggle platform is still the best place for practitioner Data Scientist who want to lift up their skills and go deeper into this DS Rabbit's hole )</p> <p>Our Team was built last week just before the end of competition (before merging deadline), I was at 80th place (at that moment), <a href="/johnpateha">@johnpateha</a> just decided to take part in this game that day, <a href="/ppleskov">@ppleskov</a>, <a href="/maksimovka">@maksimovka</a>, <a href="/yaroshevskiy">@yaroshevskiy</a> and <a href="/ddanevskyi">@ddanevskyi</a> occupied ~100th places on the LB. I was ready to get a bronze (in case of luck) and finish my attempts to climb up, but one morning...</p> <p>"Knock, Knock, Kruegger" ) say <a href="/johnpateha">@johnpateha</a>. "We have a good chance to catch a gold here, just trust me, wake up and go <strong>"вджобывать"</strong> (to do a really hard work). "The metric is fine, there is a lot of data, and in the worst case we don't lose anything".</p> <p>Hmm. Why not? We made the team, several hours later combined our team with <a href="/ppleskov">@ppleskov</a> &amp; Co, and start <strong>"вджобывать"</strong>.</p> <p><strong>Here I just describe my part of solution, my colleagues will add their own approaches later.</strong></p> <p>This competition is really hard due to size of the dataset. We need a lot of RAM to train our models especially if we have a lot of features. So this is why my solution before merging has been based on day 9 for training, day 8 for target encoding calculation - and 10% of train (shuffled) for validation. It was enough to climb up to 9805 score, but I feel that for this pipeline it is a ceiling.</p> <p><strong>Dataset</strong>: </p> <p>In the final solution I use day 7 for target encoding, day 8+9 for training and last 2.5M rows from train as holdout. We decided to just blend our final solutions and not to use stacking on that level. (and of course, we had to comply with our team name! :)</p> <p>I spent a lot of hours and made a lot of attempts to fit my dataset to memory, I used all tricks I knew before and found there on the forum - but my machine (with 64Gb RAM) time after time told me "Out of Memory", "Out of memory"...</p> <p>The final approach that I found - is to use numpy memmap as data storage, I describe this method here:</p> <p><a href="https://www.kaggle.com/c/talkingdata-adtracking-fraud-detection/discussion/56105">https://www.kaggle.com/c/talkingdata-adtracking-fraud-detection/discussion/56105</a></p> <p><strong>Features</strong>:</p> <p>I have several group of features, most of them you can find in the public kernels, nothing special: </p> <ul> <li>Count by several groups</li> <li>NextClicks</li> <li>TargetEncoding over groups</li> <li>Statistics (mean/var)</li> <li>and so on...</li> </ul> <p>After merging I added to the dataset some features from my colleagues' solution, like duplicate orders, and so on. 
No "Killer features", but overall score became more stable.</p> <p><strong>Feature Engineering</strong></p> <p>At first I tried to use my favorite method of feature selection (random shuffling - like Boruta) but without success, so I returned back to old-fashioned style - greedy forward selection by small groups (3-5 attrs at a time). If my val score was raising I added that group to the dataset, in other case - gave up the whole group.</p> <p>After adding a group I tried to "cut tail" of features based on feature important but in very "conservative" manner.</p> <p>The final solution contains 74 features.</p> <p>Finally I selected three big groups of features in addition to the full set and built models for all my best parameters on these sets.</p> <p><strong>Models and parameters tuning</strong></p> <p>I used lgb as my base model, tried to build FM-FTRL/XGB, but with no overall improvement. I also was unlucky in CatBoost - it refused to run on my data, I don't know, probably it took offense ).</p> <p>I tried to use some parameters I found in the public kernels, but the best one I got - I got from Bayasian optimization process. I really a big fan of this method so I suggest trying it if you haven't tried it before.</p> <p>I use this implementation and it gives me very nice results in all competitions I use it.</p> <p><a href="https://github.com/fmfn/BayesianOptimization/blob/master/README.md">https://github.com/fmfn/BayesianOptimization/blob/master/README.md</a></p> <p><strong>Diversity in dataset</strong></p> <p>Initially I built my models on last 75M rows (according to memory constraints), than on last 100M rows (thanks to my teammates for one more power computer) and finally (when I implemented dark numpy magic :) ) - on the whole dataset.</p> <p><strong>Ensembling</strong></p> <p>I used method that dropped me down to ~1500 place in the Toxic competition (I didn't prepare it well in Toxic), but in this competition it gave a huge improvement. The method is Scipy optimize with softmax restriction over weights. I tried to use power ensembing, caruana's hillclimbing, just geometric averaging - but scipy (in case you have holdout prediction for all of your model) gave the best score in all environment I tested it.</p> <p>My final solution was blending with scipy weights over 7 best models (from ~30 overall).</p> <p>The oof score of my best has .99235 -&gt; .9823x on public LB. Combining it with models from <a href="/johnpateha">@johnpateha</a> and other teammates we had the final score that lifted us up to 8'th place )</p> <p><strong>Final words</strong></p> <p>Thank you to all my teammates, especially <a href="/johnpateha">@johnpateha</a> who pushed me to Gold medal ) We did a very nice cooperative work inside our team, got the results in short time, so my dear colleagues - you are the best! )</p> <p>It was my first team competition (I played solo before) and I really appreciate the results of working in a team.</p> <p>Happy kaggling! <br> (C) Kruegger</p> <p>P.S. Some Intrigue - <a href="/johnpateha">@johnpateha</a> did huge investigation over data, one of his advices helped to lift score of my model up to +0.0001. Advice was "just put 0 to this 4(four)! rows in the dataset." Dark magic? ) But let him to describe his solution himself.</p>
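<p>A minimal sketch of the "Scipy optimize with softmax restriction over weights" ensembling mentioned above. It assumes <code>preds</code> (holdout predictions, one column per model), <code>y_true</code> (holdout labels) and <code>test_preds</code> as NumPy arrays; it mirrors the idea rather than the author's exact code:</p> <pre><code>import numpy as np
from scipy.optimize import minimize
from sklearn.metrics import roc_auc_score

def softmax(w):
    e = np.exp(w - w.max())
    return e / e.sum()

def neg_auc(w):
    blend = preds @ softmax(w)          # softmax keeps weights positive and summing to 1
    return -roc_auc_score(y_true, blend)

res = minimize(neg_auc, x0=np.zeros(preds.shape[1]), method='Nelder-Mead')
weights = softmax(res.x)
final = test_preds @ weights            # apply the same weights to the test predictions
</code></pre>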
TalkingData AdTracking Fraud Detection Challenge
84th Place Solution with Open Source Code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TalkingData AdTracking Fraud Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Some features are from other's public kernel. Some features is constructed using my own knowledge for this issue.</p> <p>I am sorry for this late topic. And my English is not good. Hope for your understanding. The open source code is in my Github repository: <a href="https://github.com/ShawnyXiao/2018-Kaggle-AdTrackingFraud">https://github.com/ShawnyXiao/2018-Kaggle-AdTrackingFraud</a>. If you can star or fork this project to motivate me who has just entered the field of data mining, I will be grateful~</p> <p>The journey of this competition is quite interesting for me. It should be perhaps the one which competition I spend the least time on. It was on April 25th that I discovered this competition. Then I casually downloaded the submission in other's kernel and submitted it. At that time, I was ranked over 300th+, then I put it down. When I continued doing it, it is May 2nd and I was ranked over 1100th+. From May 2nd to May 7th, I only spent about <strong>6 days</strong> in this competition (there are other jobs I have to finish during this period, so it is not a full-time job on this competition). Luckily, I got a <strong>silver medal</strong> finally. The figure below is the ranking change during my competition.</p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/336330/9550/rank.png" alt="My rank on public LB"></p> <p>Because of the short time, my results are all generated from the single model LightGBM, and I have not tried other models. So I will just share two parts that I think are more important: processing of billion-level data and feature construction.</p> <h2>Processing of billion-level data</h2> <p>The data provided by the organizers is about 10G, with more than 100 million samples. How to use limited memory to process this 10G data is very critical for this competition. I generally used the following operations:</p> <ol> <li>When reading data using Pandas, if you do not specify the data type of each column, it will be read in the most conservative manner: use uint64 to read non-negative integers, use float64 to read floating-point numbers and integers with some null values. If you use this most conservative way to read data, the memory consumption is enormous. Therefore, <strong>reading data using the appropriate data types uint8, uint16, uint34, float32, etc.</strong>, can save us very much on memory resources.</li> <li>Many variables will not be used afterwards. But if they are kept in memory, we will also consume our precious memory resources. Therefore, when a variable <code>a</code> (especially a large memory variable) is no longer used, we should <strong>remove it from memory: <code>del a</code></strong>.</li> <li>We often use a variable to refer to different objects. At this time, we will generate some objects that can no longer be referred. In theory, Python will automatically do garbage collection, but we need to trigger certain conditions. Therefore, we can <strong>often call the function <code>gc.collect()</code></strong> to trigger garbage collection.</li> <li>For categorical features, we generally perform one-hot conversions. 
If you do not convert, the model treats the categorical features as ordered continuous values; if you convert, the feature dimension becomes extremely large, memory consumption increases and model training slows down. LightGBM handles categorical features natively, so you only need to <strong>specify the categorical features</strong> when preparing the dataset for the model.</li> </ol> <h2>Feature construction</h2> <p>Feature construction is particularly critical for improving results. It can be decomposed into two questions: <strong>which dataset to construct features on</strong> and <strong>which features to construct</strong>.</p> <h3>1. Which dataset to construct features on</h3> <p>At the beginning, I used the <code>train+test</code> dataset to construct features and got 0.9800 on the public LB, in the bronze medal zone. Later I tried constructing features on the <code>train+test_supplement</code> dataset and the score went up directly: <strong>0.9813 on the public LB</strong>, in the <strong>silver medal</strong> zone! So the model trained on <code>train+test</code> features carries much more bias than the one trained on <code>train+test_supplement</code> features.</p> <h3>2. Which features to construct</h3> <ol> <li>Group by <code>[ip, app, channel, device, os]</code>, calculate next time delta</li> <li>Group by <code>[ip, os, device]</code>, calculate next time delta</li> <li>Group by <code>[ip, os, device, app]</code>, calculate next time delta</li> <li>Group by <code>[ip, channel]</code>, calculate previous time delta</li> <li>Group by <code>[ip, os]</code>, calculate previous time delta</li> <li>Group by <code>[ip]</code>, unique count of <code>channel</code></li> <li>Group by <code>[ip, device, os]</code>, unique count of <code>app</code></li> <li>Group by <code>[ip, day]</code>, unique count of <code>hour</code></li> <li>Group by <code>[ip]</code>, unique count of <code>app</code></li> <li>Group by <code>[ip, app]</code>, unique count of <code>os</code></li> <li>Group by <code>[ip]</code>, unique count of <code>device</code></li> <li>Group by <code>[app]</code>, unique count of <code>channel</code></li> <li>Group by <code>[ip]</code>, cumcount of <code>os</code></li> <li>Group by <code>[ip, device, os]</code>, cumcount of <code>app</code></li> <li>Group by <code>[ip, day, hour]</code>, count</li> <li>Group by <code>[ip, app]</code>, count</li> <li>Group by <code>[ip, app, os]</code>, count</li> <li>Group by <code>[ip, app, os]</code>, variance of <code>day</code></li> <li>Group by different combinations, calculate CVR (I didn't try it for lack of time; somebody said it could improve the score by about 0.0005)</li> </ol>
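<p>A minimal sketch combining the dtype trick from the data-processing section with two of the grouped features above (next time delta by <code>[ip, os, device]</code> and unique count of <code>app</code> by <code>[ip, device, os]</code>); the dtype choices, file path and new column names are illustrative:</p> <pre><code>import pandas as pd

dtypes = {'ip': 'uint32', 'app': 'uint16', 'device': 'uint16',
          'os': 'uint16', 'channel': 'uint16', 'is_attributed': 'uint8'}
train = pd.read_csv('train.csv', dtype=dtypes, parse_dates=['click_time'],
                    usecols=list(dtypes) + ['click_time'])

# feature 2: group by [ip, os, device], next time delta
train = train.sort_values(['ip', 'os', 'device', 'click_time'])
train['ip_os_dev_next_click'] = (
    train.groupby(['ip', 'os', 'device'])['click_time'].shift(-1) - train['click_time']
).dt.total_seconds().astype('float32')

# feature 7: group by [ip, device, os], unique count of app
train['ip_dev_os_nunique_app'] = (
    train.groupby(['ip', 'device', 'os'])['app'].transform('nunique').astype('uint16')
)
</code></pre>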
TalkingData AdTracking Fraud Detection Challenge
9th place
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TalkingData AdTracking Fraud Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Sorry, I have no paper - but I think of it as a step up from ctuniq - where ctuniq counts the unique groups in a feature level; entropy tells how concentrated it is in certain groups. For example entropy over minutes; is there generally an even spread over minutes or do we have bursts of clicks at certain times. </p> <p>Nice idea to monitor the variation between train and test...</p> <p>Congrats to all the winners, and disappointed to see so many people affected by the late sharing of scripts - so sad to see a 0.9811 sub uploaded on last day and affect people's hard work. </p> <p>Our solution was a couple of LGB's at public 0.9821 or 0.9820, and nnet at public 0.9816. We did a lot of feature engineering in <code>data.table</code> this is very nice for building features on such a dataset, especially, time based features. Entropy helped - <a href="https://github.com/owenzhang/kaggle-avito/blob/a7a2cc853b0ca86f07cdb9dd483779b2927b99ee/avito_utils.R">linky</a> ; particularly entropy over time based features like minute. Device <code>3032</code> was all, or nearly all, no downloads and was not in test, so we got circa 0.0004 from removing. We also got split second times and incorporated that into lead times - this was done by turning the ordering per second, and count of clicks per second, into sub-second time - if there are 100 clicks in a second <code>16:00:00</code>, first click gets <code>16:00:00.00</code>, second gets <code>16:00:00.01</code>, next <code>16:00:00.02</code> ... etc. This gave us good lift, around 0.001 early in the competition, but we did not test the lift later to see the drop by leaving it out. Stacking gave us, circa .0003. Counts of previous periods, eg. period this time yesterday, helped also. A lot of other things tried with no, or small lift - used about 30 features in all. </p> <p>Thanks Alex, the repo is a bit messy currently but here is an example of making the split second lead time in data.table - calculating the feature runs on full data set in less than a minute - most of the time is data load.</p> <p>Repo link -- <a href="https://github.com/darraghdog/talking-data-click-fraud/blob/1827a7a437dddce85642e01f561953c3f0066573/features/make/secdistR_2703.R">secdistR_2703.R</a></p> <p>Thanks CPMP. Well done on your score and position, I'm interested to hear your validation strategy - mine had +/- .0002 variance which made it difficult to assess features - I used test hours from day 9 to validate, and full day 8 to train. </p>
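<p>A minimal pandas sketch of the split-second trick described above, assuming an epoch-seconds column <code>ts</code> and an illustrative grouping for the lead time (the original feature engineering was done in data.table):</p> <pre><code>import pandas as pd

grp = df.groupby('ts')
order = grp.cumcount()                  # position of the click within its second
n = grp['ts'].transform('size')         # total clicks in that second
df['ts_sub'] = df['ts'] + order / n     # 100 clicks in a second -&gt; .00, .01, .02, ...

# the sub-second times then feed the lead-time (next click) features
df = df.sort_values(['ip', 'device', 'os', 'ts_sub'])
df['next_click'] = df.groupby(['ip', 'device', 'os'])['ts_sub'].shift(-1) - df['ts_sub']
</code></pre>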
TalkingData AdTracking Fraud Detection Challenge
Hints on 13th place solution.. and finally Grandmaster!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TalkingData AdTracking Fraud Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to everyone for the messages! :)</p> <p>Taking a look at other kagglers' solutions, mine feels simpler... but I guess implementation plays a role</p> <blockquote> <p>I hope you'll compete again! </p> </blockquote> <p>Obviously, there is really no doubt about it!</p> <p>Congratulations on your achievement CPMP, you are almost a GM ;)</p> <p>Hi everyone! First of all, congratulations to the winners, and thanks to all of those who shared ideas or kernels in the forum. This competition has been very challenging, considering the amount of data involved and the tricky time-variant problem.</p> <p>I'll just give some hints on my solution, besides what is already described in many posts and kernels:</p> <ol> <li>Use test supplement: it is crucial, as it gives extra data to calculate next click features;</li> <li>Focus on a clean and correct approach first, then on CV / local validation (time-variant problem + extremely imbalanced dataset = impossible to have a 100% reliable validation, at least for me), finally on the public LB (clearly too small and not representative of the private LB) </li> <li>Careful with next click feature: a click at 14.30 in the test set, for instance, has a maximum next click at 90 minutes (using test supplement), in train it could be over a day. I ended up using three 24 hours time frames and calculating the next click within this time frame, otherwise there would be a clear difference between training and test features.</li> </ol> <p>Other notes:</p> <ul> <li>I used R + xgboost and lightgbm (and heavy use of data.table for grouping, counting, shifting etc), training on days 7-8-9 in some models and on 8-9 in others;</li> <li>Validation setup: random 10mln rows in validation, 25mln in train plus all the "ones" not present in validation (remember AUC cares only about order) - repeat with different samples</li> <li>Memory: 32GB of ram and 60GB of swap (never used swap in previous kaggle competitions). </li> <li>Other features: some "previous day" counts and ratios proved to be not particularly effective, but useful to create diverse models.</li> </ul> <p>At the end of the day, 6 competitions, 5 gold medals, 4 solo gold medals, 1 prize in private masters-only competition, it's time for a break! But I always come back to Kaggle sooner than expected. It is an addicition with no effective therapy and a great way to learn, deal with real world problems and keep up to date!</p>
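<p>A minimal pandas sketch of point 3 (next click limited to a time frame). It assumes an epoch-seconds column <code>ts</code>, a <code>frame</code> column labelling the 24-hour windows, and an illustrative grouping; the original was done in R with data.table:</p> <pre><code>import pandas as pd

group = ['ip', 'os', 'device', 'app']
df = df.sort_values(group + ['ts'])
df['next_click'] = df.groupby(group + ['frame'])['ts'].shift(-1) - df['ts']
# rows with no later click inside the same 24h frame stay missing; capping them at the
# frame boundary instead would be another reasonable choice
</code></pre>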
TalkingData AdTracking Fraud Detection Challenge
My brief summary,a mainly NN based solution(3th)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TalkingData AdTracking Fraud Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I used this kind of setting:7 8-&gt;9 and 7 9 -&gt;8 and 8,9-&gt;7 in trainset and 7 8 9 for 10.</p> <p>I hope you can verify my finding in this competition,my setting:batchsize:100000,Adam,LR=0.01 for first 10 epoch. </p> <p>@eric I edited the post fo answer you question,I hope it will be helpful.Yes,I am quite happy with the top 3 result.Thank you!</p> <p>@StudyExchange,I used ensemble indeed,I predict the train and test dataset and built model on it indeed,which helped me a lot.</p> <p>Q1.2:I did not use them on LGBM models although it helped to improve 0.0001 to 0.0002,as my NN models were trained after LGBM models,I did not retrain and predict again<br> Q1.2: As you can see in my features list,I used it.<br> 3th layers&lt;&gt;pseudo label,NN model indeed<br> X:all the predictions from N former layer models and the predictions grouped by IP_Day and app-os-channel <br> y:the is_attributed value of train-set.<br></p> <p><a href="/mamasinkgs">@mamasinkgs</a>,you've a great job,although we can not get good position only by luck,we do need it sometimes.As there are so many competitions on kaggle,you can get better rank for sure.<br> It's stacking,I tried to avoid use terms like OOF but I found that I should use them :)<br> please refer to <a href="https://www.kaggle.com/c/talkingdata-adtracking-fraud-detection/discussion/56262#326739">here</a></p> <p>As I said in an interview by kaggle recently,I always try to find related solutions and papers and learn from them and then modify the setting and parameters based on experiments,by doing so, I can save a lot of time,we can try to stand on the shoulders of gaints.I read papers everyday and implement some of them as a practice and get better understanding of them , when a competition started I will select some of them to use and find more related.</p> <p>Thanks,you can refer to the network structure I posted and used the setting I mentioned <a href="https://www.kaggle.com/c/talkingdata-adtracking-fraud-detection/discussion/56262#325825">here</a>,as I must am very busy in next two weeks I can not provide full set of my codes. 
sorry.</p> <p>@the1owl,<a href="https://github.com/dennybritz/deeplearning-papernotes">do you like this one?</a> When we find a good paper, we can follow the papers it references, and the best-paper lists of a conference are also a good place to find good papers. :)</p> <p>Hi @StudyExchange, sorry for the late reply.<br> Predictions from the former layers: if we have 3 models, then X will be:<br> m1, m2, m3, mean_m1_by_ip_day, mean_m2_by_ip_day, mean_m3_by_ip_day, mean_m1_by_app-os-channel_day, mean_m2_by_app-os-channel_day, mean_m3_by_app-os-channel_day</p> <p>loss = 'binary_crossentropy'<br> I didn't care about the gap between the loss and the AUC, I just chose the highest-AUC epoch, as the best AUC almost always coincided with the lowest loss in my training process.</p> <p>I have received the code in my email.</p> <p>Sure, I did experiments on it: if I add the 5 clicks before and after a click time, and if I use a dense layer, the result is much worse than this setting.<br>If we examine the structure of the GRU, this kind of setting is different from a dense layer with tanh, dot, sigmoid... We may say it's not a true RNN, but this kind of structure did work.</p> <p>I suggest you read the great thread by <a href="https://www.kaggle.com/c/talkingdata-adtracking-fraud-detection/discussion/56283">@CPMP</a>, I hope I can use his features in my model.</p> <p>@StudyExchange</p> <p>1. My features are quite simple, I suggest you read the great thread by <a href="https://www.kaggle.com/c/talkingdata-adtracking-fraud-detection/discussion/56283">CPMP</a><br> 2. Yes<br> 3. I have added some details, please refer to the last part of my post.<br> 4. I use out-of-fold data to build ensemble models.<br></p> <p>I did not use a multi-step setting because it's time consuming. I have also modified the post and added a related paper to help explain why I used this kind of structure: <a href="https://arxiv.org/abs/1710.05941">Searching for Activation Functions</a></p> <p>I was expecting idle_speculation's 0.9835 with 1 submission at that time. 
:)</p> <p>As I can train an epoch in 120s-150s also,25 epoches is enough although I trained 30-35 epoches,OOF is not unacceptable,if we use binary format file instead of pandas, it is very quick.</p> <p>@Mihai Cvasnievschi,your comment is just want I want to write,thank you.</p> <p>Congrats to all the winners(['flowlight', 'komaki'].shuffle(),PPP is already in use) and all the kagglers who have worked hard and have learned a lot of from this competition.<br></p> <p>Thanks to TalkingData and Kaggle for such an interesting competition.<br></p> <p>Here is the summary of my solution,it is mainly based on NN models.<br></p> <p>It was very hard for me to choose among Landmark Competitions and TalkingData's,so I started them at the same time,when I found I can get decent NN models, I focus on this one,it is always interesting to solve a problem other than CV/NLP using NN models.<br></p> <p>My models mainly based on 23 features,by using these features,my single LGBM model scored 0.9817 on public LB,it is not a good one compared to other kagglers,this is the first time that I used LGBM in kaggle competition indeed,so there are a lot to learn from you!<br></p> <p>I designed NN models based on those 23 features,and prepared the features to fed them into network carefully[NA,out of vocabulary,log,scale],my NN model can reach 0.9820 on public LB.As we all know, the click delta is important,so I fed deltas of last 5 and next 5 click_times to the network and designed a model with RNN cell to find the patterns of the click series,my model can reach 0.9821 on public LB and 0.9830 on private LB.<br></p> <p>Then I designed different NN models to add diversities,they are very simple,for example,adding some res-links to dense layers.I don’t know the single model performance of these 4 models because I judged them only by diversities.<br></p> <p>After have ensembled my NN models and LGBM models by weighted average,I can get 0.9827 on public LB and 0.9835 on private LB.<br></p> <p>Then,I predicted full set of train data on my n-fold models,it’s a little time consuming,but it’s a relatively small dataset for me when compared to other datasets I have met,I can train and predict a fold of my model in 2.x hours on a 1080i GPU.<br></p> <p>I trained second level NN models using predictions from the whole train and test dataset,and added some group by features based on IP,app-os-channel,my ensemble score improved to 0.9833 on public LB and 0.9840 on private LB,which is a huge improvement.<br></p> <p>This improvement happened at 30 hours before the competition end,I hope I could get such an improvement two days earlier,because I had no time to solve some limitations/weaknesses of my NN models,as I simulated on part of the data,I found NN models may led to a small drop(0.0003 also) on private dataset in some situation,which can explain my minor drop on private LB.So when you were talking about 0.9835 solution,I wanted to tell you don’t expect too much on my solution :) I had thought I will be very happy if I am still in top 5 when the private LB revealed<br></p> <p>Sorry for not so detailed,I feel quite sleepy now.<br></p> <p>------------------------------------Some Details------------------------------<br> <b>The features:</b><br></p> <pre>channel 1011 os 544 hour 472 app 468 ip_app_os_device_day_click_time_next_1 320 app_channel_os_mean_is_attributed 189 ip_app_mean_is_attributed 124 ip_app_os_device_day_click_time_next_2 120 ip_os_device_count_click_id 113 ip_var_hour 94 ip_day_hour_count_click_id 91 
ip_mean_is_attributed 74
ip_count_click_id 73
ip_app_os_device_day_click_time_lag1 67
app_mean_is_attributed 67
ip_nunique_os_device 65
ip_nunique_app 63
ip_nunique_os 51
ip_nunique_app_channel 49
ip_os_device_mean_is_attributed 46
device 41
app_channel_os_count_click_id 37
ip_hour_mean_is_attributed 21
</pre> <p>a simple GRU network:<br></p> <pre>class GRU_V0a():
    def __init__(self, **kw):
        super(GRU_V0a, self).__init__(**kw)
        self.categorical = ['app', 'device', 'os', 'channel', 'hour']
        self.continous = [col for col in features if col not in self.categorical]
        self.categorical_num = {
            'app': (769, 16),
            'device': (4228, 16),
            'os': (957, 16),
            'channel': (501, 8),
            'hour': (24, 8),
        }

    def build_model(self):
        categorial_inp = Input(shape=(len(self.categorical),))
        cat_embeds = []
        for idx, col in enumerate(self.categorical):
            x = Lambda(lambda x: x[:, idx, None])(categorial_inp)
            x = Embedding(self.categorical_num[col][0], self.categorical_num[col][1], input_length=1)(x)
            cat_embeds.append(x)
        embeds = concatenate(cat_embeds, axis=2)
        embeds = GaussianDropout(0.2)(embeds)
        continous_inp = Input(shape=(len(self.continous),))
        cx = Reshape([1, len(self.continous)])(continous_inp)
        x = concatenate([embeds, cx], axis=2)
        x = CuDNNGRU(128)(x)
        x = BatchNormalization()(x)
        x = Dropout(0.20)(x)
        x = Dense(64)(x)
        x = PReLU()(x)
        x = BatchNormalization()(x)
        x = Dropout(0.20)(x)
        x = Dense(32)(x)
        x = PReLU()(x)
        x = BatchNormalization()(x)
        x = Dropout(0.05)(x)
        outp = Dense(1, activation='sigmoid')(x)
        model = Model(inputs=[categorial_inp, continous_inp], output=outp)
        print(model.summary())
        return model
</pre> <p>Thanks to <a href="/aharless">@aharless</a> for <a href="https://www.kaggle.com/aharless/gpu-nn-validation-more-features/code">publishing the code</a> which I used as a reference, for things such as GaussianDropout.<br> Is it really simple? :) The devil is in the details:<br> To make an NN model work, preprocessing is very important.<br> Fill-NA: fill the max value for delta features, fill the mean for the other features.<br> Log of the delta features, of some count features when the values are large, and of all the nunique features.<br> StandardScaler<br></p> <p>It is not a traditional RNN, the input length is 1 indeed, so I just used the CuDNNGRU cell to capture the pattern of clicks, which can be expressed like the following:</p> <pre>zx = sigmoid(K.dot(Wz, X))
hx = tanh(K.dot(W, X))
h = zx * hx
</pre> <p>If we want to know the theory behind the structure, and as I have no time to prove it or write a paper, we can have a look at the paper by Google: <a>Searching for Activation Functions</a>. Swish: x · σ(βx), where σ(z) = (1 + exp(−z))<sup>−1</sup>; we can get some ideas from it to explain my setting.</p> <p>As for private score estimation, it's always an interesting part of a competition for me :).<br> There is no certain method to do so, I just bear in mind:<br> Distribution variances lead to score variances.<br> For example, in this competition some category values in the test set are not in the train set, so I changed the same ratio of category values in my validation set to values unseen in the train set. And app 19 is a very important app with a high download ratio that is imbalanced between the train and test set. What's more, the ratio differs between the public and the private set, so I tried to keep the ratio of my validation set the same as in the test set... We can also use a submission which we believe is stable as a pseudo true label and calculate AUC against it; if the public and private scores are close, then we can trust it too.<br> Thanks for all the congrats; I will upvote your comments but will not reply to everyone, to save space on this page.
<br></p> <p>When we use NN models on this kind of problem, the data preprocessing is very important.<br> If there is no later click after a click time, the next_click feature will be NA; we cannot fill the NA with the mean or zero, the max delta is more reasonable.<br> On the other hand, for the other features filling NA items with the mean is more reasonable.<br></p> <p>Yes, it is a mini-history of a virtual user: if he clicks too often, he tends not to download the app.</p>
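<p>For readers who want to reproduce this kind of input pipeline, here is a minimal sketch of the Fill-NA / log / scale preprocessing described above, written against a hypothetical pair of pandas DataFrames <code>train</code> and <code>test</code> that already contain the engineered columns; the column groupings and the choice to compute fill values on the concatenated data are illustrative assumptions, not the author's exact code.</p> <pre><code>import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Illustrative column groups; the real feature names are listed in the post above.
delta_cols   = ['ip_app_os_device_day_click_time_next_1', 'ip_app_os_device_day_click_time_lag1']
count_cols   = ['ip_count_click_id', 'ip_day_hour_count_click_id']
nunique_cols = ['ip_nunique_app', 'ip_nunique_os']
other_cols   = ['ip_app_mean_is_attributed']

def preprocess(train, test):
    full = pd.concat([train, test], sort=False)
    # Fill-NA: max value for delta features (no next/previous click), mean for the rest.
    for c in delta_cols:
        fill = full[c].max()
        train[c] = train[c].fillna(fill)
        test[c] = test[c].fillna(fill)
    for c in count_cols + nunique_cols + other_cols:
        fill = full[c].mean()
        train[c] = train[c].fillna(fill)
        test[c] = test[c].fillna(fill)
    # Log of delta features, large count features and nunique features.
    for c in delta_cols + count_cols + nunique_cols:
        train[c] = np.log1p(train[c])
        test[c] = np.log1p(test[c])
    # Standard scaling of all continuous inputs before feeding them to the network.
    cols = delta_cols + count_cols + nunique_cols + other_cols
    scaler = StandardScaler().fit(train[cols])
    train[cols] = scaler.transform(train[cols])
    test[cols] = scaler.transform(test[cols])
    return train, test
</code></pre>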
TalkingData AdTracking Fraud Detection Challenge
Solution #50 Story - One Week Hackathon
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TalkingData AdTracking Fraud Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="/serigne">@serigne</a> haha, good to know someone else is using dart trees out there! Do you have any good tips on tuning? There isn't much shared experience out there, so I more or less just followed what worked before and try whatever works on a given dataset. </p> <p>First of all, let me say up-front that I don't particularly think we have much to boast about with this result - but it was a very nice team effort in a very nice competition that stretched our RAM management techniques.</p> <p>We did this competition as an ongoing effort to work with friends in <a href="https://diveintocode.jp/ai_curriculum">DiveIntoCode</a> to provide data science mentorship to their students - so we spent most of our effort helping students understand the <a href="https://www.kaggle.com/c/talkingdata-adtracking-fraud-detection/discussion/54765">business context of click fraud</a>, as well as creating <a href="https://www.kaggle.com/ogrellier/basic-predictions-using-target-encoding-on-app/code">friendly kernels</a> to ease our students into handling the rather large dataset. So it wasn't until the last week that we decided to make a stronger effort on the LB, and we kind of ended up doing this in a hackathon manner in the last 3-4 days. Our best submissions were submitted and selected within the last 20 minutes before the close of play, so that was an entirely bonkers and fun way to keep Kaggle addictive :)</p> <p>So here we go, here is a description of our approach:</p> <p><strong>Information Sharing</strong> We used Slack for communication, Google Drive for sharing features/data, and a Google spreadsheet to manage various aspects of information. For us, this way of working turned out to be very important - we started so late and needed a way to gather "intelligence" that was disclosed via kernels and discussions. We specifically had a dedicated list of bookmarks shared among us, and also key EDA results (both from various kernels and from our own effort) captured in a shared Google slide deck.</p> <p><strong>Modelling Approach</strong> Since we started so late, we had very little choice but to start with the most secure method - that means LightGBM - and midway through we discovered that extracting features using the test supplement gave a much better score. We also ruled out a traditional K-fold CV setting in the belief that it would take too much time for us to iterate - well, we now know there are much more clever methods for faster iteration, as shown in <a href="https://www.kaggle.com/c/talkingdata-adtracking-fraud-detection/discussion/56328">PPP's solution</a> by <a href="/panfeiyang">@panfeiyang</a>.</p> <p>The one thing I would recommend people check out is the <code>boosting=dart</code> option from LightGBM. As <a href="/kazanova">@kazanova</a> kindly shared in his <a href="https://www.kaggle.com/c/instacart-market-basket-analysis/discussion/38100">Instacart solution</a>, a DART tree can out-perform a typical GBM. What I have since discovered by experience is that it tends to outperform GBM when the dataset is relatively large.
My intuition is that this allows the dropout mechanism to fully explore its potential. I had tried to use DART trees on the much smaller dataset from Porto Seguro without any success at all. </p> <p>Nevertheless, tuning a DART tree can be quite tricky; some of the typical logic for GBM doesn't apply. For instance, decreasing the learning rate doesn't necessarily lead to better accuracy, because it interacts with the dropout-related parameters. Anyway, we started with the parameters shared by <a href="/kazanova">@kazanova</a> and started tuning using the last 25 million rows of the training data as local validation. Here is one of several sets of parameters that gave us a good model:</p> <pre><code>'num_leaves': 22,
'min_sum_hessian_in_leaf': 20,
'max_depth': 7,
'learning_rate': 0.2,
'num_threads': NUM_CORES,
'feature_fraction': 0.7,
'bagging_fraction': 0.85,
'max_drop': 6,
'drop_rate': 0.01,
'min_data_in_leaf': 10,
'bagging_freq': 1,
'scale_pos_weight': 200,
</code></pre> <p>I don't have deep knowledge about how DART trees work, so I can only speak from experience. I shall now go and have a good read of <a href="https://arxiv.org/abs/1505.01866">this paper</a>; folks with better knowledge, please also share your experience and it will be much appreciated!</p> <p><strong>Feature Engineering</strong> Not much to share here, we were mainly scavenging features from shared kernels, but <a href="/ogrellier">@ogrellier</a> did implement a much neater way to compute "time delta" features. In addition to channel, app, os, and hour, which were the strongest, we used different variations of next clicks and previous clicks - we generated additional features like "the click after the next click" and some of them played an important role in our models. </p> <p>Other features are your usual count and unique_count features. We also have a <code>click_rate</code> feature which measures how frequent the clicks from a specific IP were during the day, which helped a bit. By the last day of the competition we had 34 features; we could have generated more, but time was running out.</p> <p>A word on feature selection: one-by-one feature selection was too long a process for us, so throughout the week I was throwing in 5-6 features per batch and keeping them if they increased the local validation score, which thankfully corresponded well with the LB. I personally trusted how LightGBM prioritises features a lot, and I operated on the following principles: </p> <ul> <li>whenever a batch of features improves the local validation score, keep them</li> <li>whenever a batch of features decreases the score, remove the most important features of this batch according to LightGBM, and re-test</li> <li>whenever features were not used by LightGBM at all, i.e. got a 0 importance score, drop them. </li> </ul> <p>Not sure this is the best approach, but in general it worked in this instance, and our ultimate constraint was that we couldn't manufacture features fast enough within a limited amount of time - ideally we would have liked more than 50 features, which I believe would improve (or overfit) our models more.
</p> <p><strong>Post-processing</strong> So again this little secret was kindly shared by <a href="/plantsgo">@plantsgo</a>, and it gave a 0.0002 boost to our models. We weren't so sure whether this would overfit the private LB, so we submitted one version with post-processing and one without.</p> <p><strong>The End Game</strong> We spent a large part of the last two days trying to get FM working - in the hope that we could have an alternative model to merge with LightGBM - but we weren't familiar with FM algorithms and it took a lot of time without getting useful results. In hindsight we could have better invested this time in NN - we made a choice and it didn't work out, so next time we will approach it with new knowledge. I personally enjoyed learning a lot from <a href="/anttip">@anttip</a>'s wonderful FM kernel, so no harm was done there. </p> <p>What caused a bit of panic was that we had a bug while trying to reduce the RAM requirement of our data, and it caused invalid values in our predictions - we spent hours fixing that while testing our LightGBM model with various parameters. Luckily the bug was fixed, and we managed to submit our best solutions within the last 20 minutes of the competition! The final solution was a weighted average of two LightGBM models, so again pretty boring stuff :)</p> <p><strong>Take Away</strong></p> <ul> <li>DART trees are good, in fact very good, for large datasets - our parameters for the DART tree were faster and more accurate than the GBM equivalent </li> <li>Trust NN - I really should have considered using an NN as an alternative model - especially since I noticed the dropout mechanism from the DART tree was working well for this dataset. </li> <li>START EARLIER IF YOU WANT GOLD: well, at least this applies to us average folks. I think we could have generated a much better solution with even one more week - but then it might have been more intense and less fun. ^_^</li> <li>[update] Downsampling when the data is large and imbalanced - now looking at both the 1st and 2nd solutions, it is clear that downsampling the negative observations was a key step for the top teams - it didn't work out for me in the past, but I guess this depends on the dataset. I will make sure I try this next time! </li> </ul> <p>By the way, we are the biggest overfitter in the top 50 (i.e. the biggest drop between public and private LB), so perhaps this is exactly what you shouldn't be doing if you want to get a gold medal - LoL </p> <p>lol, looks like we are on the same boat there :)</p>
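<p>To make the DART setup above concrete, here is a minimal training sketch with those parameters, assuming a DataFrame <code>df</code> that holds the engineered feature columns (a list named <code>features</code>) plus the <code>is_attributed</code> label and is sorted by click time; the objective/metric keys, the <code>NUM_CORES</code> value and the fixed number of boosting rounds are illustrative additions, not the team's exact configuration.</p> <pre><code>import lightgbm as lgb

NUM_CORES = 8            # illustrative
N_VALID = 25_000_000     # last 25 million rows of train used as local validation

params = {
    'objective': 'binary',
    'metric': 'auc',
    'boosting': 'dart',              # the DART option discussed above
    'num_leaves': 22,
    'min_sum_hessian_in_leaf': 20,
    'max_depth': 7,
    'learning_rate': 0.2,
    'num_threads': NUM_CORES,
    'feature_fraction': 0.7,
    'bagging_fraction': 0.85,
    'bagging_freq': 1,
    'max_drop': 6,
    'drop_rate': 0.01,
    'min_data_in_leaf': 10,
    'scale_pos_weight': 200,
}

X, y = df[features], df['is_attributed']
dtrain = lgb.Dataset(X.iloc[:-N_VALID], label=y.iloc[:-N_VALID])
dvalid = lgb.Dataset(X.iloc[-N_VALID:], label=y.iloc[-N_VALID:], reference=dtrain)

# DART interacts awkwardly with early stopping, so a fixed round count is used here.
model = lgb.train(params, dtrain, num_boost_round=600, valid_sets=[dvalid])
</code></pre>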
TalkingData AdTracking Fraud Detection Challenge
Solution #55 up on github
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TalkingData AdTracking Fraud Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This was an awesome competition! Thanks to the organizers and to all kagglers for sharing ideas and kernels! Last day was crazy, just three hours before the deadline I was kicked out of top 100 on public leaderboard. Surprisingly my submissions scored much better on the private leaderboard, climbing up almost 70 positions. Here is what I used:</p> <ul> <li>Aggregate factors from Baris' kernel computed on all data with test_supplement</li> <li>Prev/next click times (1-, 2-, 3-step) computed on all data with test_supplement</li> <li>LIBFFM out-of-fold using 5 folds</li> <li>LGBM on 50% data (using all is_attributed=1 and 50% of is_attributed=0)</li> <li>Average of a few LGBM models trained on different subsets of data and factors for the final submission</li> <li>Training with 96GB RAM</li> <li>Store data column-wise in binary formats for fast loading</li> <li>HOCON configurations</li> </ul> <p>My code for this competition is on github <a href="https://github.com/stys/kaggle-talkingdata-adtracking-fraud-detection">https://github.com/stys/kaggle-talkingdata-adtracking-fraud-detection</a></p>
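<p>Two of the bullet points above - multi-step prev/next click times and storing features column-wise in a binary format - can be sketched as follows. This assumes a pandas DataFrame <code>df</code> built from train plus test_supplement with <code>click_time</code> parsed as a datetime; the grouping key, feature names and output file names are illustrative, not taken from the linked repository.</p> <pre><code>import pandas as pd

group_cols = ['ip', 'app', 'device', 'os']        # illustrative grouping key

df = df.sort_values('click_time')
ts = df['click_time'].astype('int64') // 10**9    # click time in epoch seconds
g = ts.groupby([df[c] for c in group_cols])

for step in (1, 2, 3):
    # seconds until the step-th next click of the same group (NaN if there is none)
    df['next_click_%d' % step] = g.shift(-step) - ts
    # seconds since the step-th previous click of the same group
    df['prev_click_%d' % step] = ts - g.shift(step)

# store each feature column-wise in a binary format so it can be reloaded quickly
feature_cols = [c for c in df.columns if c.startswith(('next_click_', 'prev_click_'))]
for col in feature_cols:
    df[[col]].reset_index(drop=True).to_feather('%s.feather' % col)
</code></pre>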
TalkingData AdTracking Fraud Detection Challenge
Solution #6 overview
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TalkingData AdTracking Fraud Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks, makes sense now.</p> <p>Thanks! You did well too, I'd like to know how you get so much from ensembling given I basically failed at it.</p> <p>Thanks!</p> <p>Hi, I found that adding the app factors in this case was hurting, I only kept the ip factors (and the user factors with the other matrix factorization). I don't know why using app factors is hurting.</p> <p>Using ALS is fine I think.</p> <p>Hi, it was my top predictor according to lgb, but removing it did improve score.</p> <p>Thanks, glad you find it interesting.</p> <p>I ran quite a lot of experiments, maybe not 7,000 as Mamas did, but hundreds, probably around 500. And removing original features was part of it.</p> <p>I had the idea myself, no external source. I'm going to publish soon my libfm in keras and how it is used here.</p> <ol> <li>Have a reliable CV setting. Without it you cannot do FE.</li> <li>Try a lot, fail a lot, learn from failures.</li> <li>Read top teams writeup after each competition, look for things you did not think of, learn from them.</li> </ol> <p>They make test look like the rest of the data. They have about the same number of clicks per day. UTC days are imbalanced, first and last UTC days on the data have way less clicks, esp day 6. </p> <p>Because day 6 is so small, lag features computed for day 7 are not very good. </p> <p>Using China day means the first 24 hours of data have no lag, but then all remaining data has lag computed form at least a full day. This yield better result in my model.</p> <ol> <li><p>Features were computed on the concatenation of train and test_supplement. test is a subset of test_supplement. I did not compute features on test, I predict on test_supplement, then join. I explained how I did it <a href="https://www.kaggle.com/c/talkingdata-adtracking-fraud-detection/discussion/53378#306201">here</a>. Later, I added the not_last feature to the join columns.</p></li> <li><p>lagged features are computed on previous China day, not on a sliding window.</p></li> </ol> <p>Here are two great presentations:</p> <p><a href="https://www.slideshare.net/OwenZhang2/tips-for-data-science-competitions">https://www.slideshare.net/OwenZhang2/tips-for-data-science-competitions</a></p> <p><a href="https://www.slideshare.net/HJvanVeen/feature-engineering-72376750">https://www.slideshare.net/HJvanVeen/feature-engineering-72376750</a></p> <p>lag features helped but not much, maybe 0.0002.</p> <p>Thanks!</p> <p>I'll publish the libfm in keras part, hopefully it will answer your question.</p> <blockquote> <p>how should I decide to add it or not?</p> </blockquote> <p>Read again my writeup, I explain it.</p> <blockquote> <p>Just make a submission to see if it improves score?</p> </blockquote> <p>Absolutely not. That's what I called LB probing in y writeup. This will make you feel good on public LB, and most probably result in a disaster on the private LB. 
Key is to use local validation as I explain.</p> <p>I will publish the libfm part soon.</p> <p>HI I shared my matrix factorization code.</p> <p>@Jimmy Chen, raw counts are not good at all.</p> <blockquote> <p>I will publish the libfm part soon.</p> </blockquote> <p>Done, see <a href="https://www.kaggle.com/c/talkingdata-adtracking-fraud-detection/discussion/56497">https://www.kaggle.com/c/talkingdata-adtracking-fraud-detection/discussion/56497</a></p> <p>Done, see <a href="https://www.kaggle.com/c/talkingdata-adtracking-fraud-detection/discussion/56497">https://www.kaggle.com/c/talkingdata-adtracking-fraud-detection/discussion/56497</a></p> <p>I <a href="https://www.kaggle.com/c/talkingdata-adtracking-fraud-detection/discussion/56497">shared my libFM implementation in Keras</a>.</p> <p>Thanks Olivier, well done too on your side too. I sincerely hope that the pair you form with Yifan will get the gold it deserves in a coming competition.</p> <p>not_last is not used in any grouping, it just helps the model learn that clicks happen preferably in the last row of each group.</p> <p>Usually people use between 1% and 15% more, but here it seems that going further helps. I would not draw a genera conclusion though.</p> <p>I tried 1.0, 1.1, 1.2, and 1.3 on two different models, and 1.2 was giving the best LB result on both.</p> <p>Great question. Lag features are undefined for first China day.</p> <p>Local score on the rest of test hours is not correlated with private LB as hour 4 is with public LB. I mean, there is a larger gap, but they evolve in sync. Any improvement on my local score translated in private LB improvement as far as I can see.</p> <p>You're right, you should have concatenated all data to compute your features.</p> <p>Thanks! Let me know if and when you start it, would be interested to see how it goes.</p> <p>Yes, you should try your ideas. That's the only way to know if they work or not.</p> <p>Thanks. I thought I explained it in my write up... It is in the <strong>Validation</strong> section. I basically add them one by one and keep what works.</p> <p>Bravo for you solo, sorry your rank was hurt by Dirk's late share.</p> <p>Start with <a href="http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.TruncatedSVD.html">http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.TruncatedSVD.html</a> and <a href="http://scikit-learn.org/stable/modules/decomposition.html">http://scikit-learn.org/stable/modules/decomposition.html</a></p> <p>Truncated SVD is similar to PCA.</p> <p>For libfm, I'll write something on my blog on how to implement it in Keras.</p> <p>You did well too, congrats!</p> <p>Thanks a lot Adarsh!</p> <p>Thanks, means a lot coming from you. I wish i had thought of RNN as you used them!</p> <p>Your question makes me think again. I added that feature quite late, but it could make sense to use it for grouping. It probably would help.</p> <p>Here: <a href="https://www.ibm.com/developerworks/community/blogs/jfp?lang=en">https://www.ibm.com/developerworks/community/blogs/jfp?lang=en</a></p> <p>Not very active lately because of Kaggle.</p> <p>Thanks!</p> <p>regarding our point on machines, any VM on a cloud is way faster. A 4 cores i7 with SSD is faster. A Macbook pro is faster.</p> <p>But yes, this is bigger and faster than an entry level laptop. </p> <p>I used to compete on Kaggle with a macbook pro with 16GB. I know how challenging it is to work with large datasets... 
Net result is that I fried my macbookpro and decided to buy a relatively cheap machine that could scale out. Yu can find second hand machines similar to it at a very reasonable price now.</p> <p>I won't share the code in whole as it is messy, I didn't use source code control, rather copied files around. I have 250 files in my model directory... Cleaning this and documenting is a daunting task, I am glad to not be in prize position for that ;)</p> <p>But I will share some of it via my blog. I'll put pointer to it from here when I do it.</p> <p>First of all, all those who managed to get decent submissions out of this huge dataset deserve kudos. Even if you score is below Kirk's shared kernel. What you did is way more valuable, and lessons learned here will help you later. </p> <p>Second, sharing is great when done in good faith, and lots of people did share a lot here, too many of them to name them. Eve people who started here, like @Samrat, shared a lot. This is what makes this community so valuable.</p> <p>Third, thanks to <a href="/inversion">@inversion</a>, Kaggle, and Talking data for organizing a very challenging, ans almost leak free competition. I say almost because it is clear now that the test data was sorted by click time then target value. Exploiting this was the final twist that helped some of us fare better. But the impact is not that large, I estimate it to be about 0.0004 for me. And it was <a href="https://www.kaggle.com/c/talkingdata-adtracking-fraud-detection/discussion/55677">disclosed soon enough</a> for everyone to react to it. Thanks to <a href="/plantsgo">@plantsgo</a> for sharing it soon enough.</p> <p>I didn't decide to go solo from the start, but as time went by, I saw I was making progress every day, and decided to go solo till the end. In retrospect I am not sure it was wise, I didn't sleep much in the last week ;) I did receive some invites to merge during the last week before deadline, and I thank people for them. I did miss a very late invite from a top 10 team as I was away that evening. I wonder what would have been our score if we had teamed.</p> <p>Anyway here is my solution. Given I was solo, and given the size of the data set, which meant hours to produce a submission, I decided to focus. I focused on a single type of model, LightGBM. I split my time roughly as follows:</p> <ul> <li>80% feature engineering</li> <li>10% making local validation as fast as possible</li> <li>5% hyper parameter tuning</li> <li>5% ensembling</li> </ul> <p>I spent most of my time doing feature selection, as my machine was not usable with 50 features or more. I wish I had used the <a href="https://www.kaggle.com/c/talkingdata-adtracking-fraud-detection/discussion/56105">trick shared by Kruegger</a>, maybe I would have been able to add more features. However, being forced to be selective about features probably led to better models in the end. And I would not have been able to move past40 features without using the 'two_round' parameter as suggested by <a href="/authman">@authman</a>.</p> <p>My private LB score comes from a single lgb run with 48 features that scored 0.9825 public and 0.9835 private. I submitted a blend of this with 5 other similar models that yield 0.9828 public and 0.9837 private, but for some reason <a href="https://www.kaggle.com/c/talkingdata-adtracking-fraud-detection/discussion/56234">that sub isn't taken into account</a>. This is OK as my rank would not change with it. 
I didn't submit all the 5 models individually during the competition, but did it after to get their score. The best single lgb run scores 0.9827 public and 0.9836 private, with 48 features.</p> <p>I mostly used a 20 core Xeon at 2.3 MHz with 64 GB RAM and 64GB swap. That machine is a bit slow, but it scales when using 20 threads. I also used another machine with a 4 core i7 and 2 GPU to run Keras (see below).</p> <p><strong>Validation</strong></p> <p>Let's us look at validation. This is key. If you don't have a good validation scheme then you rely solely on LB probing, which can easily lead to overfit. I ended up settling on:</p> <ul> <li>training on day &lt;= 8, and validating on both day 9 - hour 4, and day-9, hours 5, 9, 10, 13, 14.</li> <li>retraining on all data using 1.2 times the number of trees found by early stopping in validation</li> </ul> <p>Using two validation sets was to make sure I was not overfiting to one of them. The hours were selected to match the public and private test data. I also watched the train auc in the early days, discarding features that improved validation but also increased the gap with train a lot. I stopped watching train auc in the last week to speed up things, but last time I checked I had a quite small gap.</p> <p>I also used LB, i.e. only kept a feature if local auc and LB improved. Yes, I know this can lead to overfit, but given I was filtering first on local validation I think I escaped it for the most part.</p> <p>This was a very effective scheme, with the hour 4 score being the same as LB score with a std difference around 0.0001. However, it is very time consuming because of the computation of the auc metric for early stopping. In order to speed it for feature evaluation I used two lighter ways. First, using only day 9 data, with 5% of hour 4 data for validation, the rest for training. This could run in less than one hour, and was used as a filter. Only features that improved on that went to the next stage which was train on day 8 and validate on day 9, both hour 4 and other test hours. This was also a very effective scheme, with very good correlation with LB score, but it was not effective when evaluating lag features. I therefore switched to training on day &lt;= 8 later on.</p> <p>Another way of speeding feature evaluation was to share each feature in a separate feather file. This way, testing a feature set only requires assembling a set of files into one dataset. Features were mostly tested by adding them one by one, and keeping them if local validation score improved by at least 0.00005. I also added several of them at once, then removed them one by one to see if validation score decreased. I basically did feature selection full time for the competition, preparing experiments to be run while I was away during day, or while I was sleeping. The machine never stopped.</p> <p><strong>Feature Engineering</strong></p> <p>Features were computed on the concatenation of train and test_supplement, sorted by click time then by original order. Now I am not sure the second item was useful.</p> <p>I used several families of features.</p> <ul> <li>Only app, and os from the original features were kept. They were handled as categorical, and were my strongest 2 features with a third category made of the hour in the day.</li> <li>China days. Introduced 24 periods that start at 4 pm. These were used for lag features based on previous day(s) data.</li> <li>User: ip, device, os triplets. 
</li> <li>Aggregates on various feature groups, similar to what was shared in many public kernels. Aggregates I used were count, count of unique values, delta with previous value, delta with next value. Time to next click when grouped by user was important. Other useful ones I didn't see in kernels: delta with previous app.</li> <li>Lag features, based on previous China days values. Previous count by some grouping, and previous target mean by some grouping. The latter was a weighted average with the overall target mean, the weights being such that groups with few rows in it had a value closer to the overall average. This is a standard normalization in target encoding.</li> <li>Ratios like number of clicks per ip, app to number of click per app.</li> <li>Not last. This was to capture the leak. It is one except for rows that are not the last of their group when grouped by user, app, and click time. I ignored channel as I think that clicks are attributed to the most recent click having same user and app as the download.</li> <li>Target. This is to also capture the leak. I modified the target in train data by sorting is_attributed within group by user, app, and click time. The combination of both ways to capture the leak led to a boost between 0.0004 and 0.0005.</li> <li>Matrix factorization. This was to capture the similarity between users and app. I use several of them. They all start with the same approach; construct a matrix with log of click counts. I used: ip x app, user x app, and os x device x app. These matrices are extremely sparse (most values are 0). For the first two I used truncated svd from sklearn, which gives me latent vectors (embeddings) for ip and user. For the last one, given there are 3 factors, I implemented libfm in Keras and used the embeddings it computes. I used between 3 and 5 latent factors. All in all, these embeddings gave me a boost over 0.0010. I think this is what led me in top 10. I got some variety of models by varying which embeddings I was using.</li> </ul> <p><strong>Hyper parameter tuning</strong></p> <p>I spent time given how long it is to run an experiment, but I didn't tune much. Main settings were to scale positive by around 400, use an initial score that minimizes expected loss if target is constant, and min child per leaf to be such that it requires at least 1 positive plus another example, in order to avoid overfiting to single positive examples. I used 31 leaves and a depth of 8. </p> <p><strong>Ensembling</strong></p> <p>On my local validation, the best way to blend several models was to average the logit of the predictions (aka raw predictions). I started doing restacking, i.e. adding validation predictions to day 9 features, and training on it, but this was hitting my 50 feature limit, and runs were very long. I did not ran it the last day for that reason. It may have given me a little additional boost, but I don't think it would have been enough to move up in the LB, because my models were not diverse enough anyway. I also think that using only day 9 for second level was leaving too much on the table. I thought of generating of prediction for the full dataset, but that was a daunting task. I now see this is what <a href="/bestfitting">@bestfitting</a> did, great move on his part.</p> <p><strong>Takeaway</strong></p> <p>I think it was crazy to do this solo, too much work for a single person. I really admire the other fools that went same way. 
Once solo, I am not sure I should have done things differently, except for spending time to alleviate the 50 features limit.</p> <p>Also, as often in my competitions, I make a lot of progress the last day, not sure why. In this case I moved from 0.9824 to 0.9828 public, and 0.9832 to 0.9837 private. The lesson is to never give up, and not let the public LB dictate your mood.</p> <p>I hope the above will be useful to some. Thanks for reading it all ;)</p> <p>Edit: I <a href="https://www.kaggle.com/c/talkingdata-adtracking-fraud-detection/discussion/56497">shared my libFM implementation</a>.</p> <p>I would not have been able to use more than 40 parameters if I had not set the 'two_round' lightgbm parameter to True.</p> <p>You can always install Linux on a Windows machine. </p> <p>No, I am not sure. I think it should.</p> <p>I set <code>min_data_in_leaf</code> to 1 + <code>scale_pos_weight</code> value.</p>
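<p>The matrix-factorization embeddings were the part of this write-up credited with the biggest boost (over 0.0010). Here is a minimal sketch of the ip x app variant using TruncatedSVD on log click counts, assuming a DataFrame <code>df</code> built from train plus test_supplement; the number of latent factors (5) follows the 3-5 range mentioned above, and only the ip-side factors are kept, as the author notes in the comments. Variable names and the random seed are illustrative, not the author's exact code.</p> <pre><code>import numpy as np
import pandas as pd
from scipy.sparse import csr_matrix
from sklearn.decomposition import TruncatedSVD

# Sparse ip x app matrix of log click counts.
counts = df.groupby(['ip', 'app']).size().reset_index(name='n')
ip_cat = counts['ip'].astype('category').cat
app_cat = counts['app'].astype('category').cat
mat = csr_matrix((np.log1p(counts['n']), (ip_cat.codes, app_cat.codes)))

# Latent vectors (embeddings) for each ip.
svd = TruncatedSVD(n_components=5, random_state=0)
ip_factors = pd.DataFrame(svd.fit_transform(mat),
                          columns=['ip_svd_%d' % i for i in range(5)])
ip_factors['ip'] = ip_cat.categories

# Merge the ip embeddings back onto the click-level data as model features.
df = df.merge(ip_factors, on='ip', how='left')
</code></pre>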
TalkingData AdTracking Fraud Detection Challenge
Solution to Duplicate Problem by Reverse Engineering (0.0005 Boost)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TalkingData AdTracking Fraud Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks a lot for your positive comments :)</p> <p>@Danijel I would cry if IP%27 turned out to be useful. It could be good karma for me :) Thanks and congrats btw, we were lucky that we scored just a very tiny bit higher.</p> <p><a href="/plantsgo">@plantsgo</a> I was expecting you guys were using the same thing, because you had a jump after your discussion post about duplicates. Then it must be a coincidence.</p> <p>@Laevatein Indeed. I also got some benefit from removing the channel categorical feature.</p> <p>As you have all noticed, there were duplicate samples with different labels. I will explain chronologically how I found a solution for it. </p> <p>In the middle of the competition, I think I found the time_to_next_click on [ip, app, device, os] feature earlier than the public kernels and most of the LB. There was a discussion about the fact that click ids are not sorted. @Bojan stated that click ids are leaky in some TalkingData competitions. At that time I didn't take this comment seriously, and I even made a joke about IP%27. Maybe some people believed it because of my high score, which was actually thanks to the relatively early discovery of the time_to_next_click feature, not to a leak.</p> <p>Then I put serious thinking effort into two things: why does time_to_next_click work best on the [ip, app, device, os] group but on nothing else, like channel? Why are there duplicates with different labels?</p> <p>My theory is that TalkingData has 2 tables: a Clicks table C(click_id, ip, app, device, os, channel, time) and a Downloads table D(ip, app, device, os, time). So the labels we see are not necessarily real downloads; they are the result of how TalkingData maps D onto C. If you check the download times, some of the apps are downloaded 12 hours after they are clicked. This is not realistic. So a download in D is mapped to <strong>the latest</strong> click in C which has the same <strong>(ip, app, device, os)</strong>. <strong>channel</strong> is not important because there is no such thing as the channel of a download. Then I noticed that we don't know which click is the latest if they are in the same second. I thought I could try ordering them by click_id, but this information is not available in the training set. Then I thought that if my logic is correct, I can order the training set by (time, is_attributed) if and only if my only time delta feature is the one on (ip, app, device, os).</p> <p>So, sorting the training data by <strong>(time, is_attributed)</strong> and the test data by <strong>(time, click_id)</strong> gives a +0.0005 boost on the public LB if the time delta feature is used. It doesn't work with other time deltas, because sorting by (time, is_attributed) is not valid for them and leaks wrong information.</p> <p>Maybe others have a better solution for this, but for me it was a big discovery. Since it was not a random discovery but the result of serious thinking effort, I didn't feel comfortable sharing this during the competition.</p>
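<p>A minimal pandas sketch of the re-ordering described above, assuming <code>train</code> and <code>test</code> DataFrames with <code>click_time</code> parsed as a datetime; the stable sort and the delta computation are illustrative, not the author's exact code.</p> <pre><code>import pandas as pd

# Within the same second, put the attributed click last in train and follow
# click_id order in test, mirroring how downloads appear to be mapped to clicks.
train = train.sort_values(['click_time', 'is_attributed'], kind='mergesort')
test = test.sort_values(['click_time', 'click_id'], kind='mergesort')

def add_time_to_next_click(df):
    ts = df['click_time'].astype('int64') // 10**9
    g = ts.groupby([df['ip'], df['app'], df['device'], df['os']])
    # delta to the next click of the same (ip, app, device, os), in the new order
    df['time_to_next_click'] = g.shift(-1) - ts
    return df

train = add_time_to_next_click(train)
test = add_time_to_next_click(test)
</code></pre>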
TalkingData AdTracking Fraud Detection Challenge
[2nd Place Solution] from PPP
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TalkingData AdTracking Fraud Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>It is the trivial 5-fold CV we used. We do not guarantee the correctness of K-fold CV if the features and samples involve timestamps, but in Kaggle contests we can try and see if it works.</p> <ol> <li><p>The 5-fold CV is also used to make predictions. </p></li> <li><p>We did not use target encoding features, because we were afraid that target encoding would somehow overfit the training data.</p></li> <li><p>Yes, almost every day we spent a lot of time on selecting features.</p></li> </ol> <p>The leak is about duplicated samples with different labels in both the train and test set, which was first shared <a href="https://www.kaggle.com/c/talkingdata-adtracking-fraud-detection/discussion/55677">here</a> by our teammate Plantsgo. Some of the top teams found an amazing way to use it which helped improve by about 0.0005; have a look at <a href="https://www.kaggle.com/c/talkingdata-adtracking-fraud-detection/discussion/56268">AhmetErdem's solution</a> and <a href="https://www.kaggle.com/c/talkingdata-adtracking-fraud-detection/discussion/56283">CPMP's solution</a>. </p> <p>The code is messy, so we probably will not make it public.</p> <p>I also tried some features based on sliding windows, but nothing worked. :(</p> <p>Good question. Intuitively, the scale_pos_weight param is directly proportional to the ratio of negative samples. So if you set scale_pos_weight to 100 when training on the entire dataset, you should set it to 100*5%=5 after sampling 5% of the negative samples. (You can also tune the parameter, it does not make a big difference.)</p> <p>Train + old_test (supplement).</p> <p>Categorical inputs are embedded and fed into an FM-like dot-product layer. Numerical inputs are fed into 3 FC layers. Then we concatenate the outputs and feed them to the last FC layer. </p> <p>Yes. We only picked the several most frequent apps/OSes/channels.</p> <p>I am very proud of our team, PPP. Plantsgo and Piupiu are great teammates and excellent competitors. We all have top-class solo performances and even better scores as a team. </p> <p>The best single model is trained on the sub-sampled data and the submission is the average prediction of the 5-fold CV.</p> <p>It is just the most ordinary 5-fold CV.</p> <p>For each feature, it is first extracted from the entire dataset (as a very large column), then merged into the small sub-sampled data (so only about 5% of the entries of that column are kept). So the resulting features have the correct values. </p> <p>Congrats to the top teams, and thanks to Kaggle and TalkingData for hosting such a perfect competition. Also congrats to Plantsgo, the new Kaggle grandmaster! </p> <p>Overall the competition was wonderful (except for D**k's Kernel) and we learned a lot during the last month. Here I'd like to briefly describe our solution as well as some important techniques we used.
To summarize, our solution consists of: </p> <ul> <li><p>a framework that is both time and memory efficient, to cope with the large dataset,</p></li> <li><p>some regular features,</p></li> <li><p>two methods, LightGBM and NN,</p></li> <li><p>a simple weighted average ensemble of predictions.</p></li> </ul> <h2>Framework</h2> <p>It was essential for us to use sub-sampling to reduce time and memory costs. At the early stage, we found it very hard to deal with the whole dataset due to the limitation of RAM (128G for Plantsgo, 64G for me, and 32G for Piupiu). Piupiu bought another 16G of RAM immediately, but he was upset to find that it barely helped. The difficulty of dealing with such large data is two-fold: it's hard to extract features, and it's slow to train a model. As the training data is extremely imbalanced, we reduced the data size by sampling a small fraction (5%) of the negative samples. In the feature engineering phase, new features were extracted from the entire data and merged into the sub-sampled data. In the training phase, only the sub-sampled training samples were used, so it was about 10+ times faster than training directly on the entire data. We used 5-fold CV to check the offline performance. It took about half an hour to train one fold (with an Intel i7 and around two hundred features).</p> <h2>Feature engineering</h2> <p>Our team did not have any magic features, although my teammates Plantsgo and Piupiu are both feature engineering experts. Our features were regular in the sense that almost all of them were open-sourced by others in the Kernels one or two weeks after we had used them. What a sad story. There were several kinds of features: count features, cumcount features, time-delta features, unique-count features, and which [app/os/channel]s each IP appears with in the data. Because we had the sub-sampling framework, we had enough memory space to build hundreds of features.</p> <h2>Models</h2> <p>We used two methods, LightGBM and NN. The best single model was a LightGBM with Plantsgo's features, which scored 0.9837 on the private LB. I tried the whole month to build a strong neural network that beats LightGBM, but obviously I failed. Our best NN, which had a dot-product layer for the categorical inputs and deep fully-connected layers for the continuous numerical inputs, scored 0.9834 on the private LB. I believe there must be better NN structures and I really hope to learn them from the other top teams. </p> <h2>Ensemble</h2> <p>The three of us had three LGB predictions and three NN predictions. So we averaged these 6 predictions, trivially applying some weights inferred from their public LB scores. Now that the private scores are revealed, we find them more correlated with the offline CV scores than with the public scores. If we had trusted the offline scores, our final score could have been better. </p> <h3>The leak on the test set: a sad story</h3> <p>In a word, we did not use the leak on the test set, even though it was us who posted the topic to Discussion. We were filled with grief when we heard that the leak would help improve by about 0.0004. </p> <p>Thanks for reading it! </p> <p>Thanks also to everyone for your support!</p> <p>Thanks. It's similar to FM, one embedding per categorical feature. I did not implement an FFM-like structure, considering that the memory usage on the GPU might be too large.</p>
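<p>A minimal sketch of the negative down-sampling framework and the scale_pos_weight adjustment described above, assuming a pandas DataFrame <code>train</code> with the raw columns; the example group-by feature, the random seed and the remaining LightGBM parameters are illustrative assumptions, not the team's exact code.</p> <pre><code>import pandas as pd

# Keep all positive samples and a 5% random sample of the negatives.
pos = train[train['is_attributed'] == 1]
neg = train[train['is_attributed'] == 0].sample(frac=0.05, random_state=42)
sub = pd.concat([pos, neg]).sort_index()

# Features are still computed on the ENTIRE data, then merged into the sub-sample,
# e.g. a click count per (ip, app) group computed before sampling:
ip_app_count = train.groupby(['ip', 'app']).size().rename('ip_app_count').reset_index()
sub = sub.merge(ip_app_count, on=['ip', 'app'], how='left')

# scale_pos_weight scales with the negative sampling rate:
# 100 on the full data -> 100 * 0.05 = 5 on the 5% negative sub-sample.
lgb_params = {
    'objective': 'binary',
    'metric': 'auc',
    'scale_pos_weight': 100 * 0.05,
}
</code></pre>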
TalkingData AdTracking Fraud Detection Challenge
[late sub 0.9828771]How Boosting from 0.9671 to 0.9823 my first competition on kaggle
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TalkingData AdTracking Fraud Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks. I was using a 40GB RAM machine. I use the LightGBM CLI API with two_round_loading=true, which can load datasets of more than 40GB for training.</p> <p>Thank you. Weeks before the end I was using the last 10M rows of day 9 as the validation set, and there was always a .004 gap between the local and public score, until I came to set hours 4,5,9,10,13,14 as validation. I actually have three random-split validation sets. As I tuned the LightGBM more and more, the gap finally narrowed down to .0005~.0008. (.9823 local average, .9816 public, .9823 private)</p> <p>First of all, THANKS to Kaggle and TalkingData for launching this great competition. Congrats to all the winners and participants who contributed to this game, I really learned so much from your kernels and discussions. Awesome.</p> <p>As for my work, ranked 65th, it is not that good, but I'd like to share what I've done throughout this game. I kept track of my everyday work like the following.</p> <p><strong>1%</strong> <br> At the very beginning I had no idea what to do with the ip, app... so I brute-forced LOTS OF groupby count features (~60). Ran a LightGBM on day 9 to reach 0.9671.</p> <p><strong>2%</strong> <br> Added LOTS OF conversion-rate features (~20) to reach 0.9681.</p> <p><strong>50%</strong> <br> At this point I learned from a few kernels that time-related features are key to this competition. Added a few time-delta features, jumping to 0.9799.</p> <p><strong>75%</strong> <br> Read discussions every morning and found test_supplement.csv. Ran all features with that again to reach 0.9807.</p> <p><strong>90%</strong> <br> Used the full train.csv to train LightGBM with the same features to reach 0.9810.</p> <p><strong>95%</strong> <br> Used hours 4,5,9,10,13,14 as the validation set and trained again to reach 0.9813.</p> <p><strong>98%</strong> <br> Added two more time-till-next-click groupbys to reach 0.9815. (On the very last day...)</p> <p><strong>99%</strong> <br> A simple post-hoc blend of 3 or 4 LightGBMs to finally reach a 0.9816 public score. And my ranking on the private leaderboard is 1 position higher (65th) than on the public leaderboard (66th).</p> <p>My final model is a single LightGBM with around 40 features. One interesting finding is that my local validation score is around 0.9823, which exactly matches the private LB score. (public is 0.9816 though)</p> <hr> <p><strong>WHAT didn't work in my model</strong> <br> 1. Empirical Bayes target encoding (such as the ip-app download rate) didn't improve things any more. <br> 2. Clicks in the next/last hour / 24 hours didn't work... <br> 3. A feature that indicates the day an ip first appears... since I learned that newly appearing ips have a higher CVR... but it didn't work... <br> 4. Sample weighting for the test hours when training LightGBM didn't work...</p> <p><strong>WHAT I wish I had done</strong> <br> 1. Should have done more time-delta features, as I found on the last day... Due to my ugly coding, I felt too lazy to add more of them... <br> 2. More cleverly dealing with duplicates (ip, app, device, os, channel, click_time).
I came across <a href="/plantsgo">@plantsgo</a>'s discussion on modifying duplicates and @CPMP's deep thoughts on that... Intuitively I agree that the last row of a set of duplicates should be the 1; if only I had had more time to reorder the data and try training on it again to see the LB score.</p> <p>Anyway, I really learned a lot from this competition, and I'm really willing to join the next one... see you guys there ;)</p> <hr> <p><strong>Updated 5/11 <br> I did reorder the duplicate clicks by (time, is_attributed), and added next-2 and next-3 click time-delta features. That gave me 0.9828771 on the private leaderboard, which would have jumped to 21st place.</strong></p> <p>Thanks. I work on a 32-core workstation with around 40GB of RAM. I use pandas to read and process the dataset in chunks, and within each chunk I use multiprocessing to process "subchunks" and finally combine all the outputs.</p>
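<p>The hour-based validation split mentioned in the 95% step can be sketched as follows, assuming a <code>train</code> DataFrame with <code>click_time</code> parsed as a datetime and a <code>features</code> list of engineered columns; holding out only day 9 for validation is an illustrative choice here, not necessarily this author's exact split.</p> <pre><code>import pandas as pd

TEST_HOURS = [4, 5, 9, 10, 13, 14]   # the UTC hours covered by the test set

day = train['click_time'].dt.day
hour = train['click_time'].dt.hour

valid_mask = (day == 9) & hour.isin(TEST_HOURS)
train_mask = day < 9

X_tr = train.loc[train_mask, features]
y_tr = train.loc[train_mask, 'is_attributed']
X_va = train.loc[valid_mask, features]
y_va = train.loc[valid_mask, 'is_attributed']
</code></pre>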
TalkingData Mobile User Demographics
5th place solution (Kazanova|Faron) and some thoughts
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TalkingData Mobile User Demographics <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This was a <em>controversial</em> competition, and partly it has left me with a bad taste in my mouth, because I saw many excellent data scientists having their ranks demolished after the leakage was found and personally (our team actually) because we spent lots of time to make useless models to try and maintain our ranks &#8211; it felt like a bad investment that we just had to somehow reverse! </p> <p>For these reasons (albeit the wrong reasons), this competition had me quite engaged both on the forums and on the court (!) and many things have been said. Let it be in the heat of the moment and I hope we can learn from this and improve processes so that we have more leaky-free competitions in the future. </p> <p>Irrespective of the leaky data (that needs to be prevented), I think that we (as competitors and data scientists) ought to think of leaks responsibly, because at the end of the day we care about the reputation of data science and what Kaggle stands for (as the home of data science) as well as the work that we do here, apart from the wasted hours producing non-useful models, it backfires on all of us when we cannot solve the host&#8217;s problems appropriately. Of course there will always be people that will try to leverage leaks to win competitions and money, but <strong>this was not the intention</strong> in this competition and with different handling we could possibly have saved ourselves the finish-line drama. </p> <p>We still ought to not let the competition finish without some solutions on the forums as it was great for most part of the challenge :)</p> <p>We deem our solution before utilizing the leak to be 99.+% leak-free since all our models are bagged with shuffled data and we did not make use of any sort of local collaborative filtering that could potentially capture the leakage. We estimate our score would have been on public LB around 2.222 &#8211; 2.223. </p> <h2>Feature-wise before leakage:</h2> <p>We produced device id features based on events such as : </p> <pre><code>count(*) as event_count, count(distinct(DAY)) as distinct_days, range(date) as range_date, min(date) as minimum_date, max(date) as maximum_date, min(hour) as minimum_hour, max(hour) as maximum_hour, </code></pre> <p>and some more date-based features (like how many events on hour &#8216;18&#8217;)</p> <p>We encoded the brand device model in many different ways, such as creating likelihoods (based on gender) or creating averages (based on age) or combinations of both. We also use them as one-hot encoded variables too. 
</p> <p>We then created some aggregates based on event_id from app_event data , such as :</p> <pre><code>count(*) as appevent_count, count(distinct(app_id))/count(*) as appid_proportion, mean(is_active) as appevent_meanactive, max(label_id) as appevent_maxlabel, mean(label_id_count) as appevent_labelcount, mean(app_count) as appevent_appcount, mean(app_active) as appevent_meanappactive, max(app_active) as appevent_maxappactive, range(app_active) as appevent_rangeappactive </code></pre> <p>And after joining back with event data, we created further aggregates as :</p> <pre><code>sum(case when appevent_count = . then 1 else 0 end)/ count(*) as appeventnull, mean(appevent_count) as mean_appevent_count, mean(appid_proportion) as mean_appid_proportion, mean(appevent_meanactive) as mean_appevent_meanactive, mean(appevent_maxlabel) as mean_appevent_maxlabel, mean(appevent_labelcount) as mean_appevent_labelcount, mean(appevent_meanappactive) as mean_appevent_meanappactive, mean(appevent_maxappactive) as mean_appevent_maxappactive, mean(appevent_rangeappactive) as mean_appevent_rangeappactive, min(appevent_count) as min_appevent_count, min(appid_proportion) as min_appid_proportion, min(appevent_meanactive) as min_appevent_meanactive, min(appevent_maxlabel) as min_appevent_maxlabel, min(appevent_labelcount) as min_appevent_labelcount, min(appevent_meanappactive) as min_appevent_meanappactive, min(appevent_maxappactive) as min_appevent_maxappactive, min(appevent_rangeappactive) as min_appevent_rangeappactive, range(appevent_count) as range_appevent_count, range(appid_proportion) as range_appid_proportion, range(appevent_meanactive) as range_appevent_meanactive, range(appevent_maxlabel) as range_appevent_maxlabel, range(appevent_labelcount) as range_appevent_labelcount, range(appevent_meanappactive) as range_appevent_meanappactive, range(appevent_maxappactive) as range_appevent_maxappactive, range(appevent_rangeappactive) as range_appevent_rangeappactive </code></pre> <p>On apps and labels we created various datasets:</p> <p>Some are based on <a>Dune&#8217;s</a> excellent <a href="https://www.kaggle.com/dvasyukova/talkingdata-mobile-user-demographics/a-linear-model-on-apps-and-labels">script</a> on the kernels (thank you Dune , proud to be a former teammate of yours :) )</p> <p>Some others are based on tf-idf .</p> <p>Others are based on tf-dif but maintaining some of the ordering within the event_id (like respecting the order the apps appear within an event id).</p> <p>Some others we tried to create combinations of all possible apps that may appear within an event id .</p> <p>For all of these we created some truncated svd outputs to work with densified data.</p> <p>We built models on different variants and combinations of them. </p> <h2>After the leakage:</h2> <p>We included row_id /n as feature. This gave +0.055 uplift to our xgboost (from 2.256 to 2.202 on public lb) model and and + 0.015 to nns (from 2.233 to 2.218).</p> <p>We experimented with adding previous and next apps and labels&#8217; sequences in each row before creating our tf-idfs &#8230;but if failed.</p> <p>We also tried to include prediction from other models for +- observations for each sample&#8230;but also failed! </p> <p>Finally we created a simple flag that stated whether +-1 observation of each row shares the same brand or device and that gave a further 0.01 uplift on xgb but made nns slightly worse. 
It was verified on leader board</p> <p>With the eye we could have reconstructed the order of the initial data within train-and-test, or we could have come up with some proxy rules to further reconstruct the order, but we thought that to be </p> <ol> <li>Too time consuming to invest time to</li> <li>Completely ml-nonsense &#8211; e,g it would have been pure leakage &#8211; no data modelling whatsoever!</li> </ol> <h2>Models Before Leakage</h2> <p>As my ex-teamate <a href="https://www.kaggle.com/utility">utility</a> correctly <a href="https://www.kaggle.com/c/talkingdata-mobile-user-demographics/forums/t/23286/you-were-only-supposed-to-blow-the-doors-off/133972#post133972">proclaimed</a> </p> <p>We (or at least I) don&#8217;t know how to look at the data and let the models make the work for me.</p> <p>Hence, our initial ensemble would consist of more than 100 models (possibly much trimmed down via fs, we never finished that). </p> <p>Best model was an nn with 2 layers, rectifiers. (1000, 1000) with l2 regularization and 3 epocs (batch size 32) on dune&#8217;s data only with LB Public score 2.233. </p> <p>We built many other nns with slightly worse score with different architectures all 30-tmes bagged and shuffled using keras. </p> <p>Best xgboost was 2.256 but we did not really invest much (e.g. we thought this to be an nn-comeptition). This used many of the aggregate features we mentioned before + the likelihoods and the tf-idf data.</p> <p>Linear models scored around 2.273.</p> <p>We created many ETs, RFs, adas from sklearn.</p> <p>We used libfm too. </p> <p>We build many models predicting AGE and Gender separately too, to be used as Meta features. </p> <h2>Models AfterLeakage</h2> <p>All xgbs +0.055-0.065 . </p> <p>all nns + 0.01-.015</p> <p>we just re-constructed the top 20 (based on performance) models via including the leakage and created a level 2 ensemble using 3 meta models nn (50%) xgboost (37.5%), ET (12,5%) to get our final score. </p> <p>We saw big uplift via re-adding the leaky features in the Meta modelling step&#8230;but was overfitting :)</p> <h2>Conclusion</h2> <p>Ultimately I would like to thank everybody for their contributions in the kernels. (I Always tell myself I should make one , but I am too afraid it will be really bad , as my coding style comes from the stone age&#8230;. If you can even refer to it as coding style. My former teammates can confirm&#8230;). 
</p> <p>I have learnt a lot in this competition, especially running nns more efficiently with sparse data :) (credit to <a>chenglong</a> )</p> <p><a href="https://www.kaggle.com/mjahrer">Jahrer</a> for his hard work on maintaining the top spot for a long time (before the leakage) as well as for for giving away leakage saving us time from searching for nonsense (as in data/value-wise).</p> <p><a href="https://www.kaggle.com/old/alexxanderlarko/kernels">Alexander&#8217;s</a> <a href="https://www.kaggle.com/alexxanderlarko/talkingdata-mobile-user-demographics/bag-of-apps-keras-11-08-16/code">script</a> </p> <p><a href="https://www.kaggle.com/fakeplastictrees">Fakeplastics</a> for finding the leak and notifying the moderators about it.</p> <p>I Share my frustration with lots of other great data scientists that were in top 10 before the leakage but got thrown down like Stas and Giba and many more.</p> <p>Lastly I would like to thank Kaggle for organizing this and talkdata for providing the data as it was for most part and nice competition and we did find ourselves somehow wiser after this with new tricks in our Arsenal. </p> <p>I hope sincerely that more steps will be taken to ensure the integrity of the data in future comps. </p> <p>[quote=Lei Hao;134773]</p> <p>I see this in email, and come here just to give your upvotes! :)</p> <p>[/quote]</p> <p>You seem like THE ideal practitioner of data science I've been searching for my whole life...</p> <p>Hatsune Miku is the photo? Nice!</p> <p>[quote=MartijnNanne;134749]</p> <p>[quote=&#924;&#945;&#961;&#953;&#959;&#962; &#924;&#953;&#967;&#945;&#951;&#955;&#953;&#948;&#951;&#962; KazAnova;134583]</p> <h2>Models AfterLeakage</h2> <p>All xgbs +0.055-0.065 . </p> <p>all nns + 0.01-.015</p> <p>we just re-constructed the top 20 (based on performance) models via including the leakage and created a level 2 ensemble using 3 meta models nn (50%) xgboost (37.5%), ET (12,5%) to get our final score. </p> <p>We saw big uplift via re-adding the leaky features in the Meta modelling step&#8230;but was overfitting :)</p> <p>[/quote]</p> <p>So was the output of the first level models (the top 20) 8 features (6 for each age class and 2 for the gender classes)? And then you feeded these 8 features to the level 2 models? I would really like to know how meta ensembling works :)</p> <p>[/quote]</p> <p>No , all our models after leakage had 12 outputs (e.g the normal logloss problem) , the probability to belong to each classes as in the normal submission format. Only before the leakage we experimented via creating some binary models (e.g predicting gender) or regression model (e.g. predicting Age as a value, like 32, 27 , 23, 45 etc - not class, 1 column)</p> <p>To understand better about how stacking works, I would like to take the liberty and redirect you to my github profile where I have included an example of how to <a href="https://github.com/kaz-Anova/ensemble_amazon">do stacking using the amazon data</a> (from the <a href="https://www.kaggle.com/c/amazon-employee-access-challenge">infamous competition</a> that made me-and-many-others start kaggling) . It shows other ensembling techniques too:</p> <p>I think it will be better to understand using an example.</p> <p>Hard Link: <a href="https://github.com/kaz-Anova/ensemble_amazon">https://github.com/kaz-Anova/ensemble_amazon</a></p> <p>[quote=Denis Vlasov;134731]</p> <p>KazAnova, could you share exact architecture of your best NN? What are the dropouts, metrics, optimizer, regularization? 
My best non-leaky NN gave me 2.240 on public LB and only 2.233 after stacking with a bunch of other models. I have never built nnets before this competition, so I think I'm missing something. I'll pay you in upvotes :) Thanks!</p> <p>[/quote]</p> <p>You know how to bargain - I like you!</p> <p>There you go, batch size = 32 , epoc=3, bagged 30 times. </p> <pre><code>models.add(Dense(output_dim=1000, input_dim=input_dims, init='lecun_uniform', W_regularizer=l2(0.000025))) models.add(Activation('relu')) models.add(Dropout(0.5)) models.add(Dense(1000, init='lecun_uniform', W_regularizer=l2(0.000025))) models.add(Activation('relu')) models.add(Dropout(0.4)) models.add(MaxoutDense(output_dims, init='lecun_uniform')) models.add(Activation('softmax')) opt = optimizers.Adagrad(lr=0.0035) models.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy']) </code></pre> <p>I edited. Does it work now?</p> <p>[quote=Jeong-Yoon Lee;134645]</p> <p>Thanks, Mario. Very helpful - and I made sure to give you +1. :)</p> <p>[/quote]</p> <p>That's the dream! Thank you :) </p> <p>[quote=Simeng;134642]</p> <p>Great sharing KazAnova! </p> <p>I am curious about LIBFFM you mentioned. I have tried LIBFFM for this multi-class problem (one-versus-rest), but did not get any sensible results (barely converged). Can you share some more details ?</p> <p>[/quote]</p> <p>Ok, you caught me!</p> <p>I actually used my own modified version of LibFm made in Java. I have not released yet as it will be linked to my thesis ... I will keep you guys posted when it gets released :)</p> <p>The parameters were :</p> <pre><code> LibFmClassifier libfm= new LibFmClassifier(); libfm.threads=12; libfm.Type=&quot;SGD&quot;; libfm.maxim_Iteration=23; libfm.C=0.0001; libfm.C2=100; // regularization on latent features libfm.init_values=0.235; // initial variance of weights libfm.lfeatures=5; // number of latent features libfm.shuffle=true; libfm.copy=false; libfm.learn_rate=0.012; libfm.tolerance=0.0001; libfm.usescale=false; libfm.verbose=false; </code></pre> <p>I think they are self explanatory, so you can try the equivalent ones in the C++ implementation and tell me if they work :) . </p> <p>NOTE SGD is with adagrad and is solved as 1 vs all.</p> <p>It scored around + 0.01 on top of the normal logistic regression and I speculate it added to the ensemble before the leakage. Not sure how much. I did not re-run it afterwards.</p> <p>The implementation is based <a href="http://www.csie.ntu.edu.tw/~b97053/paper/Factorization Machines with libFM.pdf">on this paper</a>.</p> <p>EDIT:</p> <p>link : <a href="http://www.csie.ntu.edu.tw/~b97053/paper/Factorization Machines with libFM.pdf">http://www.csie.ntu.edu.tw/~b97053/paper/Factorization%20Machines%20with%20libFM.pdf</a></p> <p>[quote=Yiyun Chen;134619]</p> <p>Thanks this is very helpful!</p> <p>[/quote]</p> <p>This has no real meaning if you don't plus 1 (+1) my comment :D hahaha . I would have preferred to hate me and give me the plus 1! </p> <p>There are many Leader boards to chase you know!</p> <p>(e.g you are Welcome :) )</p> <p>[quote=Yiyun Chen;134601]</p> <p>Thank you KazAnova,</p> <p>How many models do you use in 1st layer? </p> <p>[/quote]</p> <p>Hi Yiyun Chen</p> <p>After leakage we used about 20 . Before leakage we built more than 100 . I just counted mine only (without Mathes) and they were 98. </p> <p>I can see 18 xgboosts with different targets, 12-class, 2-class (e.g. 
gender) , regression (e,g age)</p> <p>12 ETs</p> <p>12 RFs, </p> <p>18 linear models (like logistic regression, ridge , sgdclassifiers , linearsvms etc)</p> <p>6 adaboosts</p> <p>The rest are nns.</p> <p>We added in all our 1st layer models , not sure if Mathes/Faron (who was in charge of Meta ) managed to do any selection because we intended to do that the last day. I will ask and update.</p> <p>i don't know exactly how much each model added in ensembling . I would expect 60-65% of these models to be redundant though , but we can live with that - e.g. better safe than sorry :)</p> <p>Many of the above models are initialized with random parameters. And then I try to tune myself to the best possible outcome given some constraints. E.g I will make an xgboost where I will force it to have max_depth =10 . I will then start tuning parameters around it to make the best possible xgboost I can given that constraint .</p> <p>From my experience this leads to fairly diverse models that add in ensembling , even if they don't reach in accuracy the top dogs . </p>
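<p>To make the meta-ensembling mechanics asked about in this thread concrete, here is a minimal out-of-fold stacking sketch in Python. It only illustrates the general idea (first-level models produce out-of-fold class probabilities, which become the features of a second-level model); the toy data, models and fold count are stand-ins, not the team's actual pipeline, which is better studied in the linked ensemble_amazon repository.</p>
<pre><code>import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold

# Toy data standing in for the real feature matrix (12-class problem).
X, y = make_classification(n_samples=2500, n_features=40, n_informative=12,
                           n_classes=12, n_clusters_per_class=1, random_state=0)
X, X_test, y = X[:2000], X[2000:], y[:2000]

base_models = [RandomForestClassifier(n_estimators=100, random_state=0),
               ExtraTreesClassifier(n_estimators=100, random_state=0)]
n_classes = 12
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# Level-2 features: out-of-fold probabilities on train, fold-averaged probabilities on test.
oof = np.zeros((len(X), n_classes * len(base_models)))
test_meta = np.zeros((len(X_test), n_classes * len(base_models)))

for m_idx, model in enumerate(base_models):
    cols = slice(m_idx * n_classes, (m_idx + 1) * n_classes)
    for tr_idx, va_idx in skf.split(X, y):
        model.fit(X[tr_idx], y[tr_idx])
        oof[va_idx, cols] = model.predict_proba(X[va_idx])
        test_meta[:, cols] += model.predict_proba(X_test) / skf.get_n_splits()

# The meta model is trained only on out-of-fold predictions, so it never sees
# probabilities produced on rows the base models were fit on.
meta = LogisticRegression(max_iter=1000)
meta.fit(oof, y)
print(meta.predict_proba(test_meta).shape)  # (500, 12), ready to write as a submission
</code></pre>
<p>In the thread above the second level combined a neural network (50%), XGBoost (37.5%) and ExtraTrees (12.5%) on top of the roughly 20 re-built first-level models; the out-of-fold bookkeeping stays the same regardless of which meta models are plugged in.</p>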
TalkingData Mobile User Demographics
My approach of single model with 2.234 public LB (without leak)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TalkingData Mobile User Demographics <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>[quote=LouisKitLungLaw;136928]</p> <p>[quote=Yiyun Chen;134493]</p> <p>First I split the data into two parts, with events and without events.</p> <p>For without events part, I train only on phone brand and phone model. I use 1 NN and 1 XGB and train on all the training data. The weighted average can reach 2.38409 public LB.</p> <p>[/quote]</p> <p>Thank you so much for sharing your solution. </p> <p>One quick question. I am confused when you say you split the data into 2 parts but train on all the training data for without events part. If you split the data into 2, then how can you train on all training data for 1 part?</p> <p>[/quote]</p> <p>Sorry I didn't express myself clearly. I should say I build two models, with events and without events.</p> <p>For training for model without events, I actually use all the data because the data with events also have phone and brand. For prediction for test data, I only use this model on the test data without events.</p> <pre><code> model | Training data | Testing data model without events | train on all, use phone brand | predict on data without events model with events | train on data with events | predict on data with events </code></pre> <p>[quote=Sameh Faidi;134530]</p> <p>Did you say? &quot;Among them, version 1 with NN with 1024*128 (two hidden layers) and dropout 0.8, 0.5, activition sigmoid&quot;</p> <p>Your code show different structure. I think you have a very good single NN score. </p> <p>[/quote]</p> <p>For that I mean the structure I use for the devices with events. The code I give to shubin is NN for the devices without events.</p> <p>[quote=Tony Schroeder;134507]</p> <p>Is there a quick way to calculate entropy of app, app label, hours, day of the week? I'm new to these competitions and never seen that kind of feature before.</p> <p>[/quote]</p> <p>My preprocessing of these features is storing them as a long string. So for each devide_id I only have one row.</p> <p>For example, the app labels for a device is like &quot;152 , 709 , 704 , 705 , 706 , 61 , 179 , 178 ,...&quot;</p> <p>Then use this</p> <pre><code>from collections import Counter from scipy.stats import entropy merged[&quot;app_lb_entropy&quot;] = merged.app_lb.apply(lambda x: entropy((Counter(x.split(&quot; , &quot;))).values())) </code></pre> <p>[quote=Shubin;134505]</p> <p>@yiyun,</p> <p>i had the same question as Garvys. i have been struggling to break 2.39. could you please tell more about your NN and xgb. specifically what is your structure for NN and paramters for xgb(do you use gblinear?). it is a mystery for me that @Manel scores 3.85 range using xgb with gbtree. thanks for sharing. </p> <p>your prediction for events are a lot better than mine and i believe if you stack with non-event devices and you should be in top 10 with the leakage column. 
</p> <p>Shubin</p> <p>[/quote] Ok</p> <p>NN</p> <pre><code>model = Sequential() model.add(Dense(256, input_dim=X_train_v.shape[1])) model.add(PReLU()) model.add(BatchNormalization()) model.add(Dropout(0.5)) model.add(Dense(64)) model.add(PReLU()) model.add(BatchNormalization()) model.add(Dropout(0.5)) model.add(Dense(12)) model.add(Activation('softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) model.fit(X_train_v, y_train_lb, nb_epoch=5, batch_size=16, verbose=0) </code></pre> <p>xgb: </p> <pre><code>param = {'objective':'multi:softprob', 'subsample':0.7, 'max_depth':5, 'colsample_bytree':0.7, 'eval_metric': 'mlogloss', 'num_class':12,'eta':0.05} </code></pre> <p>nround=350</p> <p>It's gbtree not gblinear</p> <p>Now if you upload a submission Kaggle will give you private result. So the score can be different.</p> <p>And I don't think I can be top 10 even if I stack with non-event data because I didn't spend much time on the leakage and on current leaderboard, the effect of digging more on leakage is much more significant than digging more on original data. </p> <p>[quote=Garvys;134496]</p> <p>Hello,</p> <p>Thank you for sharing your solution :)</p> <p>Concerning the model without events, you used Phone Brand and Device Model as features. I guess you used One Hot encoding on them ? Did you create any other features based only on these two features to get a score of 2.38409 ?</p> <p>Myself, I wasn't able to go below 2.39 with NN and XGB and I try to understand why...</p> <p>Thx</p> <p>[/quote]</p> <p>I just use one hot encoding. For NN I run 10 times with different seeds.</p> <p>Hi, I want to share my approach of single model with 2.23390 public LB without leak. </p> <p>First I split the data into two parts, with events and without events.</p> <p>For without events part, I train only on phone brand and phone model. I use 1 NN and 1 XGB and train on all the training data. The weighted average can reach 2.38409 public LB.</p> <p>For with events part, I have several versions of features.</p> <p>1, one hot of app, app labels, hours, day of the week ; entropy of app, app label, hours, day of the week ; count of app, app labels ; average of non zero longitude and latitude (if all the longitude, latitude of a device is 0,0, then I mark it in a binary column called 'missing longitude latitude') ; phone brand and phone model; For this version, all the apps are installed ones.</p> <p>2, similar to 1, but use TF-IDF for app labels</p> <p>3, similar to 1, and also create features for active apps</p> <p>4, similar to 1, and also add TF-IDF for label categories (the text description)</p> <p>Among them, version 1 with NN with 1024*128 (two hidden layers) and dropout 0.8, 0.5, activition sigmoid, performs the best. It has 5 fold CV as 1.918 for the data with events. Combined with the data without events, it can reach 2.23390 public LB. If we run it with different seeds 10 times , it can reach 2.23210 public LB, which ranks about 30 on the leaderboard before the leak, and the total running time is about 2 hours on gtx 980ti.</p> <p>I'm not good at stacking models. I use the different versions of features and train them using NN and XGB and only reach 2.227 public LB. I think the reason may be for the data without event, I always use the same prediction. 
In other words, I only stack models on data with events, which is only about 30% of the data.</p> <p>Hope this helps.</p> <p>[quote=eggeggsuperman;134536]</p> <p>[quote=Yiyun Chen;134512]</p> <p>[quote=Tony Schroeder;134507]</p> <p>Is there a quick way to calculate entropy of app, app label, hours, day of the week? I'm new to these competitions and never seen that kind of feature before.</p> <p>[/quote]</p> <p>My preprocessing of these features is storing them as a long string. So for each devide_id I only have one row.</p> <p>For example, the app labels for a device is like &quot;152 , 709 , 704 , 705 , 706 , 61 , 179 , 178 ,...&quot;</p> <p>Then use this</p> <pre><code>from collections import Counter from scipy.stats import entropy merged[&quot;app_lb_entropy&quot;] = merged.app_lb.apply(lambda x: entropy((Counter(x.split(&quot; , &quot;))).values())) </code></pre> <p>[/quote]</p> <p>Thanks a lot for your sharing Yiyun. I am not familiar with entropy. Do you mean you are calculating the entropy for each device's app labels? Or the KL divergence of the app labels to the set of app labels in each of the 12 demographic groups? (to compare how &quot;close&quot; the device is to the 12 groups based on the distribution of app labels). Sorry for any misunderstanding.</p> <p>[/quote]</p> <p>I just calculate the entropy for each device's app labels. This is to take the collection of app labels as a distribution. I haven't check whether this feature is helpful. Maybe it's useless.</p>
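<p>Since every feature above is stored as one long string per device, the TF-IDF variants mentioned in versions 2 and 4 can be built straight from those strings. Below is a minimal sketch reusing the same app_lb column and " , " separator as the entropy example; the tiny DataFrame and the vectorizer settings are only for illustration, not the exact pipeline used in the competition.</p>
<pre><code>import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy version of the per-device label strings described above.
merged = pd.DataFrame({
    "device_id": [1, 2, 3],
    "app_lb": ["152 , 709 , 704 , 705", "61 , 179 , 178", "152 , 61 , 709 , 709"],
})

# Treat each label id as a token; the tokenizer splits on the " , " separator.
vectorizer = TfidfVectorizer(tokenizer=lambda s: s.split(" , "), lowercase=False)
tfidf_app_labels = vectorizer.fit_transform(merged["app_lb"])  # sparse (n_devices x n_labels)

print(tfidf_app_labels.shape)
# The sparse matrix can then be stacked with the one-hot brand/model features,
# e.g. scipy.sparse.hstack([onehot_features, tfidf_app_labels]), before the NN or XGB.
</code></pre>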
TalkingData Mobile User Demographics
Some of the 3rd place solution history (Danijel|Matias)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TalkingData Mobile User Demographics <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>[quote=Anon;134683]</p> <p>Thank you Matias. Great work. Are you planing to post the files in github anytime? Would be great help if you could.</p> <p>[/quote]</p> <p>Hi Anon, by now I can share my best &quot;single&quot; model not using the leak (at least not explicitly) <a href="https://www.kaggle.com/chechir/talkingdata-mobile-user-demographics/keras-cv10-with-bags5-wevents-alldata">https://www.kaggle.com/chechir/talkingdata-mobile-user-demographics/keras-cv10-with-bags5-wevents-alldata</a> (the code won't run in Kaggle because of the missing folds split file, and also because of time/memory limitations) Notice that the final prediction was built with a logistic regression for devices without events, and the keras only is used for the devices with events. Cheers!</p> <p>Hello, here I would like to share what we were doing all those weeks, and specially the last and leaky end of the last week. I'll give my view point and Danijel may add his viewpoint as well.</p> <p>As usual in any competition, I started doing some exploratory analysis and my initial assumptions were that the ratio of usage of different apps would be predictive for age and gender. For example if the user has a lot of events associated to PokemonGo, war craft and other games, it's very likely that he is one of my coworkers! I even built this Kaggle script where I try to analyze ratio of apps type usage: <a href="https://www.kaggle.com/chechir/talkingdata-mobile-user-demographics/only-another-exploratory-analysis">https://www.kaggle.com/chechir/talkingdata-mobile-user-demographics/only-another-exploratory-analysis</a> </p> <p>At the same time I saw the great script of Yibo: (<a href="https://www.kaggle.com/yibochen/talkingdata-mobile-user-demographics/xgboost-in-r-2-27217">https://www.kaggle.com/yibochen/talkingdata-mobile-user-demographics/xgboost-in-r-2-27217</a>) and I copied his way to encode everything to 1/0 (event the ratios). Then I started to use a bunch of xgb models as well as a glmnet model, blending them all using multivariate regression (nnet package in R). I was doing reasonable well (around 20-30 place on the LB) when I saw the dune_dweller script (don't need to add the link!). At that time I was trying to learn Keras, so I used her feature engineering and plugged a keras model.. It had a great performance and boosted my score around the 17th position! For some reason I decided to share it on the Kaggle scripts: <a href="https://www.kaggle.com/chechir/talkingdata-mobile-user-demographics/keras-on-labels-and-brands">https://www.kaggle.com/chechir/talkingdata-mobile-user-demographics/keras-on-labels-and-brands</a>. And our best model single model for devices with events is just that model with some new features and a more layers and regularization. It scored 2.23452 on the LB. 
</p> <p>The additional features to this model were:</p> <ul> <li>TF-IDF of brand and model (for devices without events) </li> <li>TF-IDF of brand, model and labels (for devices with events) </li> <li>Frequency of brands and model names (that one produced a small but clear improvement) </li> </ul> <p>And the parameters were: </p> <pre><code>model = Sequential() model.add(Dropout(0.4, input_shape=(num_columns,))) model.add(Dense(n)) model.add(PReLU()) model.add(Dropout(0.30)) model.add(Dense(50, init='normal', activation='tanh')) model.add(PReLU()) model.add(Dropout(0.20)) model.add(Dense(12, init='normal', activation='softmax')) </code></pre> <p>Using the average test predictions for each fold model helped a lot here (I was using 10 folds)</p> <p>Then we merged our teams with Danijel who was doing a very creative xgboost combination besides other things that he would explain much better than me. Together we started retraining some of our models on CV10 and then switched back to CV5 because of processing time reasons. For the ensemble weights we used the optim package in R (iterated 3 times) and also built differen ensembles for devices with events and devices without events (that was another great idea from Danijel). </p> <p>When the leak issue raised we were around 11th to 13th on the LB and we started to look where it was. My team mate Danijel was the man how built a clever matching script that combined with our best submission allowed us to fight for the top places in those crazy 3 last days. We found also that devices with events weren't taking advantage from the leak, so we only used leak models on non-events devices. </p> <p>To me this competition was a great experience, I learned a lot from my team mate Danijel, and also from Dune_Dweller, Yibo and from fakeplastictrees and the dam leak!</p>
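<p>The ensemble-weight search done with R's optim can be reproduced with scipy for readers who prefer Python. The sketch below minimizes multi-class log loss over blend weights on out-of-fold predictions; the random arrays stand in for real out-of-fold predictions, and Nelder-Mead is just one reasonable optimizer choice, not necessarily the one the team used.</p>
<pre><code>import numpy as np
from scipy.optimize import minimize
from sklearn.metrics import log_loss

rng = np.random.default_rng(0)
n_samples, n_classes, n_models = 500, 12, 3
y_true = rng.integers(0, n_classes, size=n_samples)
# Out-of-fold class probabilities from each first-level model, shape (n_models, n_samples, 12).
oof_preds = rng.dirichlet(np.ones(n_classes), size=(n_models, n_samples))

def blended_log_loss(weights):
    w = np.clip(weights, 0, None) + 1e-12
    w = w / w.sum()
    blend = np.tensordot(w, oof_preds, axes=1)        # weighted average over models
    blend /= blend.sum(axis=1, keepdims=True)         # keep rows as proper probabilities
    return log_loss(y_true, blend, labels=np.arange(n_classes))

result = minimize(blended_log_loss, x0=np.full(n_models, 1.0 / n_models),
                  method="Nelder-Mead")
best_weights = np.clip(result.x, 0, None)
best_weights /= best_weights.sum()
print(best_weights, result.fun)
# As described above, this search can be run separately for devices with and
# without events to get two different sets of blend weights.
</code></pre>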
TalkingData Mobile User Demographics
Team ensemble's solution + framework
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TalkingData Mobile User Demographics <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>You're very welcome. I'd love to see that more and more people use this to win competitions as my team did. :)</p> <p>BTW, its pre-leak public LB score is 2.230 and post-leak public/private LB scores are 2.187/2.193 with normalized row IDs.</p> <p>Hi,</p> <p>I'd like to share my team, ensemble's solution and framework.</p> <p>The code is available at gitlab: <a href="https://gitlab.com/jeongyoonlee/talkingdata/tree/master">https://gitlab.com/jeongyoonlee/talkingdata/tree/master</a></p> <p>and team's internal LB is available here: <a href="https://gitlab.com/jeongyoonlee/talkingdata/wikis/home">https://gitlab.com/jeongyoonlee/talkingdata/wikis/home</a></p> <p>We joined the competition late, and had just enough time to build and run the end-to-end framework without much feature engineering. So feature-wise, there is nothing fancy, but I hope that you can find the framework itself helpful. :)</p> <p>As you can see, it uses Makefiles to pipeline feature generation, single model training, and ensemble training. The main benefits of our framework based on Makefiles are:</p> <ul> <li>It's language agnostic - You can use any language to do any parts of pipeline. Although this specific version uses Python throughout the pipeline, I used to mix R, Python, and other executables to run the pipeline.</li> <li>It checks dependencies automatically - It checks if previous steps were completed, and if not, it runs those steps automatically.</li> <li>It's modular - When working with others, it's easy to split tasks across team members so that each one can focus on different parts of pipeline.</li> </ul> <p>If you are new to Makefiles, here are some references:</p> <ul> <li><a href="http://kaggler.com/kagglers-toolbox-setup/">http://kaggler.com/kagglers-toolbox-setup/</a> - see the Makefile section</li> <li><a href="http://blog.kaggle.com/2012/10/15/make-for-data-scientists/">http://blog.kaggle.com/2012/10/15/make-for-data-scientists/</a></li> </ul> <p>Enjoy. :)</p>
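<p>For readers who have never touched Makefiles, the key behavior being relied on (a step is re-run only when one of its inputs is newer than its output) can be imitated in a few lines of Python. This is only an illustration of the dependency-checking idea with a toy file copy as the "step"; the team's actual pipeline is driven by real Makefile targets and is best read in the linked repository.</p>
<pre><code>import os
import shutil

def needs_rebuild(output_path, input_paths):
    """A target is stale if it does not exist or any input is newer (make's rule)."""
    if not os.path.exists(output_path):
        return True
    out_mtime = os.path.getmtime(output_path)
    return any(os.path.getmtime(p) > out_mtime for p in input_paths)

def run_step(name, step_fn, output_path, input_paths):
    """Run the step only when its target is stale, mimicking a single Makefile rule."""
    if needs_rebuild(output_path, input_paths):
        print(f"[build] {name} -> {output_path}")
        step_fn()
    else:
        print(f"[skip]  {name}: {output_path} is up to date")

# Toy two-rule pipeline: features depend on raw data, the model file depends on features.
os.makedirs("build", exist_ok=True)
if not os.path.exists("build/raw.csv"):
    with open("build/raw.csv", "w") as f:
        f.write("device_id,value\n1,0.5\n")

run_step("features", lambda: shutil.copy("build/raw.csv", "build/features.csv"),
         "build/features.csv", ["build/raw.csv"])
run_step("model", lambda: shutil.copy("build/features.csv", "build/model.txt"),
         "build/model.txt", ["build/features.csv"])
# Run the script twice: the second run skips both steps. That automatic dependency
# check is what make provides for free, in any language, across team members.
</code></pre>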
TensorFlow - Help Protect the Great Barrier Reef
10th place solution — 2xYOLOv5 + tracking
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow - Help Protect the Great Barrier Reef <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulates all the participants and many thanks to organizers. It was very challenging and interesting competition.</p> <p>This post was written jointly with&nbsp;<a href="https://www.kaggle.com/danjafish" target="_blank">@danjafish</a> and we want to say a big thank you to our team.</p> <h1>Summary</h1> <p>Our solution is an ensemble of two differently trained yolov5 models + tracking.<br> <img src="https://i.imgur.com/hrGOAv1.png" alt="summary"></p> <p>Our final submit code is available here<br> <a href="https://www.kaggle.com/parapapapam/2xyolov5l6-tracking-lb-0-678-private-0-722" target="_blank">https://www.kaggle.com/parapapapam/2xyolov5l6-tracking-lb-0-678-private-0-722</a></p> <h1>Models</h1> <p>In our solution we used two yolov5 models trained in different ways.</p> <h2>YOLOv5l6 1280</h2> <p>(by <a href="https://www.kaggle.com/parapapapam" target="_blank">@parapapapam</a>)</p> <h3>Idea</h3> <p>The main trick of this competition is the use of high resolution. But when you just resize the image, you increase the noise without adding any additional information, so the reason why it works better not in the image.</p> <p>My hypothesis was that the reason of that effect is in achors assignment for small object. When you increase image resolution you make your objects bigger, but anchors grid step is the same and you get better allignment so model works better.</p> <p>Therefore I decided do not increase input resolution, but use higher resolution inside the model. For that I changed stride of second convolutional layer from 2 to 1:</p> <pre><code>... [-1, 1, Conv, [128, 3, 1]], # 1-P2/4 ... </code></pre> <p>So this model will have 2 times bigger resolution starting from 3rd convolutional layer and should behaves like 2560 model but without resizing the original image.</p> <h3>Validation Strategy</h3> <p>Split by video. Video 1 was used as a validation. For submit model was retrained on videos 0 1 with validation on video 2.</p> <h3>Data</h3> <p>I’ve used kaggle data with 6 iterations of fixing the labels. I’ve checked FP errors of models with high confidence and fix them manually using labelImg. It improves CV without any effect on LB. 
<br> I’ve trained on labeled images + 10% unlabeled data picked at random.</p> <h3>Augmentations</h3> <p>I used albumentations with the following pipeline:</p> <pre><code>aug_strong = [ A.HorizontalFlip(p=0.5), A.VerticalFlip(p=0.5), A.RandomRotate90(p=1.0), A.OneOf([ HueSaturationValue(p=0.5, hue_limit=0.05, sat_limit=0.7, val_limit=0.4), A.RandomBrightnessContrast(p=0.5, brightness_limit=0.15, contrast_limit=0.15), A.ToGray(p=0.1), ], p=0.8), A.OneOf([ ShiftScaleRotate(p=0.4, shift_limit=0.1, rotate_limit=3, scale_limit=(0.5, 2.0), min_bbox=27, max_bbox=None, border_mode=0, value=(114, 114, 114)), A.Perspective(p=0.1, scale=0.1, pad_mode=0, pad_val=(114, 114, 114)), ], p=0.8), A.OneOf([ A.GaussianBlur(p=0.5, blur_limit=(3, 5)), A.MotionBlur(p=0.5, blur_limit=(3, 5)), A.MultiplicativeNoise(p=0.5, multiplier=(0.9, 1.1), per_channel=False, elementwise=True), ], p=0.2), ] bbox_params = A.BboxParams(format="yolo", label_fields=["class_labels"], min_area=16, min_visibility=0.2) transform = A.Compose(aug_strong, bbox_params=bbox_params) </code></pre> <p>And from YOLOv5 mosaic=1.0 and mixup=0.5.</p> <p>Also I’ve used enchancement based on clahe and channel stitching for train and inference. It gave around +0.005 CV/LB.</p> <h3>Train params</h3> <p>10 epochs, 8 batch size, SGD 0.01, OneCycle, input size 1280x720.</p> <p>Also I’ve changed obj: 8.0 because it depends on image size and I have to multiply it by factor of 2 ^ 2 = 4 because I’ve used two times bigger resolution inside the model and also experiment shows that multiplying it by 2 also a bit increased CV.</p> <h3>Results</h3> <p>CV ~0.65 (+ tracking 0.66) / Public 0.67 (+ tracking 0.685) / Private 0.699 (+ tracking 0.718)</p> <h3>What didn’t work</h3> <p>Copy paste COTS bboxes augmentation. Style transfer. External data.</p> <h2>YOLOv5l6 3100</h2> <p>(by <a href="https://www.kaggle.com/danjafish" target="_blank">@danjafish</a>)</p> <p>My approach was quite straightforward. I got most of my ideas from <a href="https://www.kaggle.com/c/global-wheat-detection/discussion/172569" target="_blank">https://www.kaggle.com/c/global-wheat-detection/discussion/172569</a> and <a href="https://www.kaggle.com/c/global-wheat-detection/discussion/172418" target="_blank">https://www.kaggle.com/c/global-wheat-detection/discussion/172418</a>. However, I decided to use yolov5 for a start since I found it the most promising.</p> <ul> <li>Validation strategy: split by video. Video 1 was used as a validation. After selecting thresholds and other parameters, the model was re-trained on all data.</li> <li>Data: I used only kaggle data. Trained on not empty images only. Adding empty ones slightly improved scoring on validation, but significantly reduced LB. That's why I gave it up.</li> <li>Various augmentations from&nbsp;<a href="https://albumentations.ai/" target="_blank">albumentations</a> (I borrowed pipeline from <a href="https://www.kaggle.com/c/global-wheat-detection/discussion/172569" target="_blank">https://www.kaggle.com/c/global-wheat-detection/discussion/172569</a>):<ul> <li>HorizontalFlip, ShiftScaleRotate, RandomRotate90</li> <li>RandomBrightnessContrast, HueSaturationValue, RGBShift</li> <li>RandomGamma</li> <li>CLAHE</li> <li>Blur, MotionBlur</li> <li>GaussNoise</li> <li>ImageCompression</li> <li>CoarseDropout</li></ul></li> <li>I also changed the default mosaic implementation to get images of about the same size and made up parts of images with CenterCrop.</li> <li>Inference size: 3100. I tried different sizes and different architectures. 
The best results on validation were shown by yolov5l on size 3000X3000. However it was much worse on LB than l6 3100X3100. That's why I used the later one.</li> <li>Train params: 20 epochs, Adam optimizer, lr 0.001 ony cycle sheduler. bs=4</li> <li>Hardware: most of the time I used 4 V100 GPUs with 16GB RAM. Model takes about 2 hours to train.</li> <li>What did’t work:<ul> <li>Manual reannotations of data - some COTS were not labeled because they appeared in frames later than updated annotation. (my assumption). Adding these COTS slightly improved CV, but significantly worsened LB.</li> <li>TTA. Slightly impoved LB and CV, but takes much longer to run.</li> <li>Blend of different yolo architectures</li> <li>Blend one stage and two stage detectors. I spent the last week completely devoted to the models on mmdetection, hoping to improve the ensemble score. However, my cascade rcnn showed much worse results on both CV and LB.</li></ul></li> <li>Final scores CV/Public+tracking/Private+tracking: ~0.643(video 1)/0.693/0.722</li> </ul> <h1>Ensemble</h1> <p>We took predicts of two above models and use WBF with iou_threshold = 0.6 to blend them with weights 0.3 and 0.7. After that we filter results with confidence threshold ≥ 0.1.</p> <p>But we didn’t spend enough time to tune the weights and the final result seems not much better that each model, but we have strong models, so each of them + tracking already is in gold :).</p> <h1>Tracking</h1> <p>I’ve already posted tracking approach based on norfair library<br> <a href="https://www.kaggle.com/parapapapam/yolox-inference-tracking-on-cots-lb-0-539" target="_blank">https://www.kaggle.com/parapapapam/yolox-inference-tracking-on-cots-lb-0-539</a><br> So I will not go into details of norfair tracking implementation based on SORT algorithm.</p> <p>Here I want to mention that this problem is not a MOT problem because objects are almost static and we have to deal with only camera movement.<br> For that we’ve developed algorithm of homography calculation between every two consecutive frames using ORB keypoint descriptors. Next we transform our bboxes using homography matrix to obtain predictions on the next frame.It gives much accurate predictions on two consecutive frames comparing to norfair:<br> <img src="https://i.imgur.com/vwMG9AP.png" alt="frame1"><br> <img src="https://i.imgur.com/zwZjqRu.png" alt="frame2"></p> <p>Green labels is a ground truth, the blue ones are predictions using just norfair tracking, red ones — homography transform from previous frame + norfair. As we can see, homography + norfair match much better than just norfair.<br> When we have no detection for object that was detected at least two frames before we predict it 2 more frames using tracking bboxes and if the object doesn’t appear remove it from tracker.<br> Also we filtered objects that are moving out of the image.</p> <p>Our tracking approach gives stable +0.01-0.02 CV/Public/Private.</p> <h1>Other ideas</h1> <h3>Rounding</h3> <p>We found that using of bboxes = bboxes.astype(int) instead of bboxes = bboxes.round().astype(int) improves CV and Public ~ 0.005 and have no effect on Private. It seems the same like was already discussed<br> <a href="https://www.kaggle.com/c/tensorflow-great-barrier-reef/discussion/307605" target="_blank">https://www.kaggle.com/c/tensorflow-great-barrier-reef/discussion/307605</a>.</p> <p>Thank you for reading and happy Kaggling! 
:)</p> <p>Thanks Rajaram.<br> The difference not only in anchor sizes, but also in a grid size.<br> If you make resolution 2 times bigger inside the model you make your anchor grid step 2 times smaller and it makes assignment between objects and anchors better so this is the key.<br> Regarding custom anchors — yes we used custom anchors for 1280 model, but it has not so much effect comparing to the default anchors.</p> <p>Thanks Sean.<br> You have to keep exactly the same config. And also we observed that the validation metrics don't overfit when you train longer.</p> <p>Thanks Alexandre.<br> Yes because the model is fully convolutional and it doesn't depend on the shape of input tensor.</p> <p>Thanks Remek.<br> Congratulations to you and your team!<br> You and your team did a big contribution in this competition.</p> <p>Thanks Chris.<br> Congratulations to you!<br> And thank you for all knowledge you shared on kaggle. :)</p>
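<p>The homography step from the tracking section can be sketched roughly as below: estimate the camera motion between two consecutive frames from ORB keypoint matches, then warp the previous frame's box corners onto the current frame before handing them to the tracker. This is a reconstruction of the idea with placeholder frames, boxes, and matcher settings, not the team's exact code.</p>
<pre><code>import cv2
import numpy as np

def estimate_homography(prev_frame, curr_frame, n_features=2000):
    """Homography between two consecutive frames from ORB keypoint matches."""
    orb = cv2.ORB_create(nfeatures=n_features)
    g1 = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    kp1, des1 = orb.detectAndCompute(g1, None)
    kp2, des2 = orb.detectAndCompute(g2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

def warp_boxes(boxes_xyxy, H):
    """Project [x1, y1, x2, y2] boxes from the previous frame onto the current one."""
    warped = []
    for x1, y1, x2, y2 in boxes_xyxy:
        corners = np.float32([[x1, y1], [x2, y1], [x2, y2], [x1, y2]]).reshape(-1, 1, 2)
        projected = cv2.perspectiveTransform(corners, H).reshape(-1, 2)
        warped.append([projected[:, 0].min(), projected[:, 1].min(),
                       projected[:, 0].max(), projected[:, 1].max()])
    return np.array(warped)

# Usage sketch: warped boxes can be fed to the tracker (e.g. norfair) together
# with the current detections. Frames here are synthetic placeholders.
prev_frame = (np.random.rand(720, 1280, 3) * 255).astype(np.uint8)
curr_frame = np.roll(prev_frame, 5, axis=1)   # fake a small sideways camera shift
H = estimate_homography(prev_frame, curr_frame)
print(warp_boxes(np.array([[100, 100, 160, 150]]), H))
</code></pre>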
TensorFlow - Help Protect the Great Barrier Reef
11th Place solution - Team COTS
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow - Help Protect the Great Barrier Reef <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you Richard 😆</p> <p>We are happy to survive in the shakeup and it turns out trusting CV is indeed the key here! It is a great team effort and it has been a great journey to work with <a href="https://www.kaggle.com/anjum48" target="_blank">@anjum48</a>, <a href="https://www.kaggle.com/yamsam" target="_blank">@yamsam</a> , <a href="https://www.kaggle.com/imeintanis" target="_blank">@imeintanis</a> and <a href="https://www.kaggle.com/markunys" target="_blank">@markunys</a> . With this gold medal, 3 of us ( <a href="https://www.kaggle.com/imeintanis" target="_blank">@imeintanis</a> , <a href="https://www.kaggle.com/markunys" target="_blank">@markunys</a>, and me) will become Competition Master, it is just too good to be true 😆</p> <p>In general, it is very important to observe the video clip with those model predictions, it helps us a lot for observing some improvement areas, like tracking, ensembling, and adding new data. Here are some key points about our solutions:</p> <p>Best private LB model pipeline:<br> <img src="https://i.postimg.cc/QdFy25t6/COTS-ens-multitracker-0775-drawio-1.png" alt="pipeline"></p> <ol> <li><strong>Final Submission Model (2 best CV, 1 best CV &amp; LB, 1 best LB)</strong><ol> <li>5 YOLO WBF + 1 Cascade RCNN WBF: only used those predictions from RCNN that has &gt;= 0.3 IOU with YOLO prediction in order to correct YOLO's bounding box). <strong>CV 775 Public LB 626 Private LB 720</strong></li> <li>7 YOLO WBF: <strong>CV 781 Public LB 628 Private 719</strong></li> <li>4 YOLO WBF + 1 best LB model WBF: 4 YOLO WBF CV 771, <strong>Public LB 653, Private LB 706</strong></li> <li>2 best LB model WBF: Not sure CV, <strong>Public LB 718, Private LB 666</strong>.</li></ol></li> <li><strong>Ensemble, final submission <a href="https://www.kaggle.com/vincentwang25/11th-place-solution-100-iterations-journey" target="_blank">notebook</a></strong><ol> <li>Model inference with 1.3 x training size (multiscale inference in yolov5)</li> <li>low confidence threshold for every single model and high confidence threshold after WBF (<strong>~0.005 CV improvement</strong> compared with using high conf threshold before and low conf threshold after WBF)</li> <li>Model picking criteria: diversity from data (CLAHE preprocessing or not), training method (1 stage, 2 stages), and model (yolov5 or rcnn)</li></ol></li> <li><strong>CV scheme</strong><ol> <li>manually picked sequence that represents 20% of data (4707 images) -- balancing between representative and data size. <strong>This results in a final CV and PVT LB correlation of 90%.</strong> <ol> <li>Pick criteria: # of COTS per frame + how hard it is to predict by f2 score (we don't wanna pick a subset of sequence that is too easy or too hard to predict) + sequence across different video.</li></ol></li> <li>Not using 5 folds because of computational constraints. 
Not using video_id because it takes too much data away.</li></ol></li> <li><strong>Training</strong><ol> <li>training with all annotated images + ~5% background image (those background images are picked from those high FP images or randomly, it <strong>improves CV round 0.005</strong>).</li> <li>Using CV to choose hyperparameter and best epoch number --&gt; training with all data for submission.</li> <li>YOLOv5: default hyperparameter (mixup, mosaic, flip, HSV, translate, scale) but with learning rate 0.001 and Adam optimizer.<ol> <li>1 stage model: training with GT data for 20 ~ 40 epochs, batch size 4, training image size 2400~3600 (depending on model size).</li> <li>2 stage model: training with original data like 1 stage model, then with GT data for 10 epochs with 0.0001 LR rate.</li> <li>Some models also used <code>--label-smoothing 0.2</code> for faster convergence and higher CV score.</li></ol></li> <li>Cascade-RCNN: mmdetection framework with augmentation flip, rotate, CLAHE, HueSaturation, RandomBrightnessContrast, RandomSizedBBoxSafeCrop.</li></ol></li> <li><strong>Postprocessing</strong><ol> <li>Tracker for each model before WBF (<strong>~ 0.01 CV improvement</strong> compared with using tracker after WBF)</li> <li>For the tracker, we filtered out those predictions from trackers that are right on the edge because they are usually FP (this <strong>increases CV by ~0.002</strong>). Also, we use <code>initialization_delay=2</code> instead of 1 to depress too many FP, this <strong>improves CV by ~0.005</strong>.</li> <li>Using Optical Flow (<a href="https://www.kaggle.com/yamsam/optical-flow-by-raft" target="_blank">notebook</a>) to reset tracker to reduce FP (~ 0.01 private LB improvement)</li></ol></li> <li><strong>Data</strong><ol> <li>GT data: Add COTS through WBF prediction: Some real COTS is not marked in original data. We added it manually through observing the predicted video clip. In total we added 1213 COTS, around 10% of the original COTS number. This <strong>improves CV ~0.01</strong>.</li></ol></li> <li><strong>Things didn't work</strong><ol> <li>SAHI</li> <li>modify tracker to reduce obvious FP during each sequence.</li> <li>Using other datasets, like <a href="https://github.com/chongweiliu/DUO" target="_blank">DUO</a></li> <li>Trying to use best CV model and best LB model together to improve LB score but never get better than original best LB score (due to the tightness theory…..)</li> <li>YOLOX, YOLOR, YOLOv5-p7, Swin Transformer with Retina, fasterRCNN and TPH YOLOV5.</li> <li>Tried to add mosaic and mixup in mmdetection framework but didn't work.</li></ol></li> <li><strong>Hardware / # of gpu hours</strong><ol> <li>3 x RTX 3090 + 2 x 2080Ti + 1~2 GPU with memory 34GB + colab pro</li> <li>GPU hours 850+ hours</li></ol></li> </ol> <p>List of final ensemble candidates and their CV score in case someone is interested.<br> <img src="https://i.postimg.cc/VNcWKbb0/image.png" alt="pic"></p> <p>In the end, thank you to the great Kaggle community for many amazing sharing throughout the process, it is an exciting learning experience!🔥🔥</p> <p>Thank you Remek, I learnt a lot from your sharing, thank you!<br> Yes! <code>_t</code> stands for tracking, so those score are with tracking added :D </p>
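<p>Point 2.2 of the ensemble section above (a low confidence threshold per single model going into WBF, and a stricter cut applied only after fusion) can be illustrated with the ensemble_boxes package, one common WBF implementation. The boxes, weights, and thresholds below are made-up example values, not the tuned ones from the final submission.</p>
<pre><code>import numpy as np
from ensemble_boxes import weighted_boxes_fusion

# Per-model predictions, boxes normalized to [0, 1] as [x1, y1, x2, y2].
boxes_list = [
    [[0.10, 0.10, 0.20, 0.20], [0.55, 0.60, 0.65, 0.70]],   # model A
    [[0.11, 0.09, 0.21, 0.19], [0.80, 0.80, 0.90, 0.90]],   # model B
]
scores_list = [[0.60, 0.04], [0.55, 0.08]]
labels_list = [[0, 0], [0, 0]]

LOW_THR_PER_MODEL = 0.005   # keep almost every box from each model before fusion
HIGH_THR_AFTER_WBF = 0.30   # apply the strict confidence cut on the fused scores

boxes, scores, labels = weighted_boxes_fusion(
    boxes_list, scores_list, labels_list,
    weights=[1.0, 1.0], iou_thr=0.5, skip_box_thr=LOW_THR_PER_MODEL)

keep = scores >= HIGH_THR_AFTER_WBF
print(boxes[keep], scores[keep])
</code></pre>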
TensorFlow - Help Protect the Great Barrier Reef
12th place solution - YOLOv5 + Optical flow tracker
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow - Help Protect the Great Barrier Reef <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks for comments. In my experiment, norfair did not worked for my CV score.</p> <p>Thank you for sharing your model. It improved my public score significantly!</p> <p>First of all, I would like to thank the host for organizing such an exciting competition. I am glad to survive this big shake!</p> <h2>Detection model training</h2> <p>Because I utilized only the kaggle-kernel's GPU, I wanted to reduce GPU memory usage and training time. So I upscaled the training images by two times (2560x1440) and split them into four 1280x720 images. After that, I discarded the split image not containing bounding boxes. So that means the training size was 1280. In the inference phase, 2560 size was used. By doing this, I thought that the results are almost same to those obtained when training with 2560 size with reduced memory usage.</p> <ul> <li>YOLOv5m6</li> <li>4 folds Group-Kfold grouped by sequence</li> <li>1280 size with x2 upsacaled &amp; 4 split images</li> <li>Augmentation: Mainly, flipud p=0.5, mixup p=0.5, rot90 p=0.5 was changed from "hyp.scratch.yaml". To implement rot90, I modified "augmentations.py".</li> </ul> <h2>Tracking</h2> <p>I reused my optical flow based tracking code which I made in the <a href="https://www.kaggle.com/c/nfl-health-and-safety-helmet-assignment/discussion/285156#1569583" target="_blank">NFL competition</a>. Utilizing optical flow, next frame's bouding boxes can be estimated from previous frame's detection result. So it can continue tracking even if the detection model lost detection. I varied the number of frames continuing tracking in the lost detection condition according to the number of tracking counts up to that frame, and its maximum number is five.</p> <h2>Choice of final submission</h2> <p>In my CV, I noticed that upscaling the image size equally both of training size and inference size from original 1280x720 size contributed to both CV score and LB score. However, using different size between training and inference (for example training 2560 and inference 3840) was only contributed to LB score and got worse the CV score. So I suspected the possibility of overfitting and I chose the following four submissions.</p> <h5>1. Best public LB</h5> <ul> <li>Ensemble of following 3 models<ul> <li>Best 2 public LB score models out of 4 folds with inference size of 3840 (x1.5 upscaled from training size)</li> <li>YOLOv5s model shared by <a href="https://www.kaggle.com/freshair1996" target="_blank">@freshair1996</a> (score 0.665) with inference size of 6400</li></ul></li> <li>TTA: original, LR flip, UD flip (*)</li> <li>Ensemble: 3 models x 3 TTAs were ensembled by WBF<br> (*) The TTA pattern was limited by 9 hours limitation. If more TTA pattern is used, the more score will be got.</li> </ul> <h5>2. Best CV</h5> <ul> <li>All of 4 folds with inference size of 2560 (= training size)</li> <li>TTA: original, LR flip, UD flip, Rot90</li> <li>Ensemble: 4 models x 4 TTAs were ensembled by WBF</li> </ul> <h5>3. 
Middle of CV and public LB</h5> <ul> <li>Best 2 public LB score models out of 4 folds</li> <li>TTA: 2 inference size (2560, 3840) x original, LR flip, UD flip</li> <li>Ensemble: 2 models x 2 sizes x 3 TTAs were ensembled by WBF</li> </ul> <h5>4. TensorFlow EfficientDet</h5> <p>To get the TensorFlow prize, I trained the EfficientDet-D2 model, but the private LB score of it was 0.543…</p> <p>The scores are following.</p> <table> <thead> <tr> <th>submission</th> <th>Private LB w/ Tracking</th> <th>Private LB w/o Tracking</th> <th>Public LB w/ Tracking</th> <th>Public LB w/o Tracking</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>0.712</td> <td>0.697</td> <td><strong>0.718</strong></td> <td>0.703</td> </tr> <tr> <td>2</td> <td>0.708</td> <td>0.700</td> <td>0.621</td> <td>0.609</td> </tr> <tr> <td>3</td> <td><strong>0.718</strong></td> <td>0.705</td> <td>0.667</td> <td>0.658</td> </tr> </tbody> </table> <p># Updated 13th place to 12th place</p>
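<p>The preprocessing trick above (upscale the 1280x720 frame by two and split it into four 1280x720 tiles, dropping tiles without boxes) can be sketched as follows. Boxes are pixel [x1, y1, x2, y2] here, and the function and variable names are placeholders rather than the author's training code.</p>
<pre><code>import cv2
import numpy as np

def upscale_and_split(image, boxes_xyxy, scale=2):
    """Upscale by `scale` and cut into scale*scale tiles of the original size."""
    h, w = image.shape[:2]
    big = cv2.resize(image, (w * scale, h * scale))          # 1280x720 -> 2560x1440
    big_boxes = np.array(boxes_xyxy, dtype=float) * scale
    tiles = []
    tile_h, tile_w = h, w                                     # each tile keeps 1280x720
    for ty in range(scale):
        for tx in range(scale):
            x0, y0 = tx * tile_w, ty * tile_h
            tile = big[y0:y0 + tile_h, x0:x0 + tile_w]
            kept = []
            for x1, y1, x2, y2 in big_boxes:
                # Clip the box to this tile and keep it if anything remains.
                cx1, cy1 = max(x1 - x0, 0), max(y1 - y0, 0)
                cx2, cy2 = min(x2 - x0, tile_w), min(y2 - y0, tile_h)
                if cx2 > cx1 and cy2 > cy1:
                    kept.append([cx1, cy1, cx2, cy2])
            if kept:                                          # discard tiles without boxes
                tiles.append((tile, np.array(kept)))
    return tiles

image = (np.random.rand(720, 1280, 3) * 255).astype(np.uint8)
tiles = upscale_and_split(image, [[600, 300, 660, 360]])
print(len(tiles), tiles[0][1])
</code></pre>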
TensorFlow - Help Protect the Great Barrier Reef
15th place solution: YOLO-X only, Seq-NMS
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow - Help Protect the Great Barrier Reef <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you. Congratulations, too!</p> <blockquote> <p>Q. How much improvement has Seq-NMS brought you?</p> </blockquote> <p>A. about 0.015. I only have my experiments to base this on, but I think it's a little better than the tracking method that was shared on kernel.</p> <p>By the way, since you seem to like memes, I'll give you this image.<br> <img src="https://i.imgur.com/Tg7MjJS.jpg" alt=""></p> <p>Congratulations to the winners and all participants who finished this competition 😊<br> In this competition, many participants were focused on increasing the score of Public LB, and I was not able to keep up with Public LB at least halfway through.<br> However, in my local experiments, I have found that methods that increase the image size only during inference, even when the threshold and ensemble methods are properly adjusted, result in extremely low scores for the entire training data.<br> So, I thought there would be shake. Although I was not able to get into the gold zone, I am glad that I participated because I was able to recognize the necessity of believing in my own experiments.</p> <p>Ok, my solution is a pretty simple one, but I would like to share.</p> <p><strong>1. Preprocessing</strong><br> Since the video was basically switching almost every sequence, I built a validation with 4 folds based on the sequence (However, I think there were some sequences that were a little continuous).<br> One coin I did was to make the number of CoTS almost the same in each fold. In training, I did not use images without CoTS, but in validation, I used all images without CoTS in order to check the False Positive properly.<br> The image was simply divided by 255 without any noise removal, and the size was 1952 x 3520.<br> Due to GPU limitations, I couldn't try a larger size, but at least up to this size, I was able to get a small gain with Local CV. However I got the most gain up to about twice the size of the original image.</p> <p><strong>2. Model: All I want to use is YOLO-X</strong><br> This time I really wanted to use <code>anchor-free</code> YOLO-X, which I had never used before, so I stuck to it, even though many people have had success with YOLO-v5 in Public LB.<br> It was hard to modify YOLO-X because many parts were hard-coded, but I tried to tweak the <a href="https://arxiv.org/pdf/2107.08430.pdf" target="_blank">top-K selection algorithm</a>, add augmentation, and some other dubious modifications. But in the end, only augmentation seemed to have an effect. In that sense, YOLO-X's perfection as a model may be high 😏</p> <p><strong>3. Augmentation and learning strategy</strong><br> In addition to the augmentation (mosaic, mixup, etc.) that YOLO-X has as a default function, I added RandomGamma, RGBshift, Sharpen, GaussNoise, etc.<br> Instead, the probability of applying mixup and mosaic has been slightly reduced from the default value and assigned the rest probability to the newly added augmentation path. 
This may have increased the diversity of the input images somewhat, and I was able to get a gain in Local CV(I don't know the specific ablation values in detail, as the experiment took some twists and turns).<br> In addition, I used the <a href="https://arxiv.org/abs/2104.00298" target="_blank">progressive learning</a> method used in EfficientNetV2: gradually increasing the size of the image (e.g. 1280 =&gt; … =&gt; 3520) as the learning progressed. At the same time, I remember that regularization (increasing the probability of application of augmentation) was also strengthened somewhat.</p> <p><strong>4. Inference</strong><br> The inference was very simple: I did TTA with Flip and ensembled with WBF for 4-fold (so 8 models). I think this is almost same as many of the participants.<br> Note that I set thresholds to improve the local CV. The F2 metrics were sensitive to the threshold settings to some extent because our evaluation metric was based on the confusion matrix.</p> <p><strong>5. An original point that I have worked out: Seq-NMS</strong><br> One point that I devised a little is the post-processing.<br> Since the task is object recognition in video, I made a post in <a href="https://www.kaggle.com/c/tensorflow-great-barrier-reef/discussion/293812" target="_blank">this thread</a> early stage in this competition, and I was reading almost all papers. Many of them reported that feature aggregation, which is a method of enriching input features by using past images in the neighborhood, is more performing than post-processing methods such as tracking, and I thought that was probably true. However, since the majority of feature aggregation methods is based on RPNs, it was too much of a hurdle for me to apply, since I was sticking to YOLO-X, which is anchor-free. So I decided to use <a href="https://arxiv.org/abs/1602.08465" target="_blank">Seq-NMS</a>: a typical post-processing method. Specifically, the confidence is adjusted by the degree of overlap between the predicted bbox of the previous images and the predicted bbox of the current image (Ref: <a href="https://www.kaggle.com/c/tensorflow-great-barrier-reef/discussion/293812#1611428)" target="_blank">https://www.kaggle.com/c/tensorflow-great-barrier-reef/discussion/293812#1611428)</a>.<br> (The implementation can be found <a href="https://github.com/tmoopenn/seq-nms" target="_blank">here</a>. The core processing is written in cython and is fast enough, but as far as I understand, there is a serious bug in the python script that makes it not work properly as is, so I fixed the bug to use it.)<br> As long as I experimented, there is no need to consider many frames for Seq-NMS, just the prediction of the previous image. I also tried to use <a href="https://docs.opencv.org/3.4/d4/dee/tutorial_optical_flow.html#:~:text=Dense%20Optical%20Flow%20in%20OpenCV" target="_blank">Dense Optical Flow</a> to compensate for the shift of predicted bboxes of the previous frame, but in the end I did not use it because Dense Optical Flow is expensive to compute and does not have much gain.</p> <hr> <p>That's all I can remember right now. If I remember anything else, I'll add it.<br> Thank you all for your hard work, and see you at another competition :)</p> <p>Happy Kaggling😊</p> <p><img src="https://i.imgur.com/fuUtT1i.jpg" alt=""></p> <p>Feb. 18. 2022, Model pipeline figure updated</p>
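<p>A heavily simplified, one-frame-back version of the confidence rescoring in point 5 might look like the sketch below: link current detections to the previous frame's boxes by IoU and lift the score of the linked ones. Real Seq-NMS builds score-maximizing chains across frames and rescores whole chains, so treat this only as an illustration of the direction of the adjustment, with made-up boxes and a made-up blending rule.</p>
<pre><code>import numpy as np

def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def rescore_with_previous_frame(boxes, scores, prev_boxes, prev_scores,
                                iou_thr=0.5, blend=0.5):
    """Blend each detection's score with the best-matching previous-frame score."""
    new_scores = []
    for box, score in zip(boxes, scores):
        ious = [iou(box, pb) for pb in prev_boxes]
        if ious and max(ious) >= iou_thr:
            matched = prev_scores[int(np.argmax(ious))]
            score = (1 - blend) * score + blend * max(score, matched)
        new_scores.append(score)
    return np.array(new_scores)

# Toy example: the first current box overlaps a confident previous detection
# and gets boosted; the second one is untouched.
prev_boxes, prev_scores = [[100, 100, 160, 150]], [0.80]
curr_boxes = [[104, 102, 163, 152], [500, 400, 540, 430]]
curr_scores = [0.35, 0.30]
print(rescore_with_previous_frame(curr_boxes, curr_scores, prev_boxes, prev_scores))
</code></pre>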
TensorFlow - Help Protect the Great Barrier Reef
16th Simple Solution - Only Yolov5
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow - Help Protect the Great Barrier Reef <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks! Good kaggle!</p> <p>I would like to thank the organizers for hosting the great competition.<br> also, I was very honored to become a teammate with <a href="https://www.kaggle.com/vaillant" target="_blank">@vaillant</a> and <a href="https://www.kaggle.com/hwigeon" target="_blank">@hwigeon</a>. <br> I really learned a lot from our teammates. I think it will be the unforgettable good experience.</p> <h1>Summary</h1> <p>We reached LB 709 so quickly using a large-size inference trick and naive iou-tracker.<br> but at the same time, we started to doubt the public LB since we couldn't reproduce the result clearly. we found out that Increasing image size increased recall but also yielded unnecessary bounding boxes.<br> So, we started to build robust models by checking CV F2 scores.<br> In CV, only inference with originally trained size worked.<br> We tried size threshold-based prediction, IOU tracker(i might have made a mistake), larger size than originally trained size didn't work in the local.</p> <h1>Models</h1> <p>We found out that a large model necessarily isn't needed.<br> Also what we found out is multi-scale ensemble boosts both cv and public LB scores.<br> Instead of investing our time in searching complex models, we prepared yolov5m trained with the sizes of [2400,2560,2688,2880,3000]<br> and yolov5s trained with the sizes of [3000,3600,3720,3840,3960,4032,4080,4244]</p> <h1>Traning Method</h1> <p>We changed yolo's hyperparameters slightly<br> Applying mixup 0.5, mosaic 1.0, strong hsv change helped a lot<br> 10 epoch training and adam optimizer were chosen.<br> Model was chosen by weighting Recall vs Precision 4:1</p> <h1>Validation</h1> <p>We chose video-split folds since these are more realistic.<br> Ensembled CV scores for video-split were each [0.71, 0.64, 0.77]<br> After that, we had searched optimized WBF coefficients and confidence scores using a method of grid search.</p> <p>For video fold 0-1, WBF coefficient 0.5, skip bbox 0.01, confidence score 0.1 was best<br> For video fold 2, WBF coefficient 0.5, skip_bbox 0.1, confidence score 0.2 was best</p> <p>With these values, we trained models with all the datasets and regarded the public LB as a holdout<br> the public LB scores of each model were ranged in [0.55~0.60]</p> <p>The result is private LB 0.713 and public LB 0.648.<br> We missed the gold but we couldn't choose the best private LB since this competition is a kind of shake-up competition</p> <p>Thank you for reading this!</p> <p>I also checked the same situation.<br> Our yoloX (1280,720 size) 0.399 in public LB but 0.597 in private LB<br> I learned again that we should verify cv-lb very carefully!<br> Thank you for reading this post!</p> <p><a href="https://www.kaggle.com/jarviskevin" target="_blank">@jarviskevin</a> It's from experiments. or to maximize variabilities. Relatively yolov5m was good at a smaller size but yolov5s was good at a larger size. 
also, performance was almost the same between yolov5m and yolov5s</p> <p>I mean increasing the image size increases <strong>TRUE POSITIVES</strong>, but it also adds many <strong>FALSE POSITIVES</strong>.<br> If the <strong>FALSE POSITIVES</strong> grow too much, they swallow the benefit of the extra <strong>TRUE POSITIVES</strong> and the F2 score drops.</p>
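<p>The grid search over WBF coefficients and confidence thresholds mentioned in the Validation section comes down to sweeping those knobs against the CV F2 score, which also makes the true-positive / false-positive trade-off just described explicit. A stripped-down sketch, where every out-of-fold detection has already been matched against ground truth (the matching step and all numbers are toy stand-ins for the real competition metric, which additionally averages over IoU thresholds):</p>
<pre><code>import numpy as np

def f2_score(tp, fp, fn, beta=2.0):
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

# Toy out-of-fold detections: (confidence, matched_a_ground_truth_box).
det_scores = np.array([0.95, 0.90, 0.70, 0.55, 0.40, 0.30, 0.20, 0.15])
det_is_tp = np.array([True, True, True, False, True, False, False, False])
n_ground_truth = 6   # total COTS boxes in the validation split

best_thr, best_f2 = None, -1.0
for thr in np.arange(0.05, 0.95, 0.05):
    kept = det_scores >= thr
    tp = int(np.sum(det_is_tp & kept))
    fp = int(np.sum(~det_is_tp & kept))
    fn = n_ground_truth - tp
    f2 = f2_score(tp, fp, fn)
    if f2 > best_f2:
        best_thr, best_f2 = thr, f2

print(f"best confidence threshold {best_thr:.2f} with F2 {best_f2:.3f}")
# The same loop can be nested over WBF weights and skip_box_thr values to cover
# the other knobs mentioned above.
</code></pre>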
TensorFlow - Help Protect the Great Barrier Reef
22nd place solution - Train4Ever
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow - Help Protect the Great Barrier Reef <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Viewing the output log of my attached notebooks, you'll see</p> <p>I did create the crop data myself. And the flow is demonstrated in my attached notebooks. I think it is better to train a separated classification model rather than utilize the feature maps from the detectors (the feature maps from the detectors already carry so many tasks, regression, objectness classify, multi scale training, etc)</p> <p>Thanks to the hosts for providing an interesting competition. Although not ending up in Gold zone, we would like to share our solution. </p> <h3>Our solution consists of 3 stages:</h3> <ul> <li>Detection</li> <li>Tracking</li> <li>Classification post processing</li> </ul> <h3>1. Detection</h3> <ul> <li>Using some popular models like everyone does. We used CascadeRCNN ResNeSt200, YoloV5, YoloX, YoloR. </li> <li>Ensembling: WBF</li> </ul> <h3>2. Tracking:</h3> <ul> <li>We did not invest enough time to find a better tracking scheme than the one from the public notebook. As a result, we used that Norfair without any significant modification. </li> <li>After Norfair is applied, it is certain that the predictions contain so many FPs though it does increase the recall. It is the work of stage 3 that reduce the effect of the problem. </li> </ul> <h3>3. Classification:</h3> <ul> <li>This is important in our pipeline because we do not have too strong detectors and tracking method.</li> <li>Flow: detectors predict bboxes -&gt; crop -&gt; classification -&gt; blending</li> <li>We have 5 folds. In each fold, each of the 4 detector makes prediction on both train and validation set. We focus on hard false positive (high detection score but wrong) and hard false negative (low detection score but true). The predictions on train set from all the detectors are concatenated and reduce overlapping by NMS, then used to train the classification models. The same method is used to produce the validation set for the classification models.</li> <li>We also crawled some COTS data from the Internet with Creative Commons license to generalize the model more.</li> <li>Models: EfficientnetB7, Eca Nfnet L0</li> <li>Im size: 128</li> <li>Augmentations: heavy augmentations. Cutmix only when training EffB7 </li> <li>Training notebooks: <a href="https://drive.google.com/file/d/1uD4Wj4pxfScm-NnXuf9jSVXCru-XTGJB/view?usp=sharing" target="_blank">https://drive.google.com/file/d/1uD4Wj4pxfScm-NnXuf9jSVXCru-XTGJB/view?usp=sharing</a> (Eca nfnet l0) , <a href="https://drive.google.com/file/d/1pQuk6z5Vl7sfg7v36NVjk3WpbYYglS91/view?usp=sharing" target="_blank">https://drive.google.com/file/d/1pQuk6z5Vl7sfg7v36NVjk3WpbYYglS91/view?usp=sharing</a> (EffB7)</li> <li>2 model type x 5 folds = 10 models classification for final submission</li> <li>Blending method: Use geometric mean between the classification probability score and the bbox conf score from the detectors.</li> <li>We estimated this boosted about 6% on the private LB in our final 0.706 solution. 
</li> </ul> <ol> <li>We have a two-stage validation for the classifier.</li> </ol> <ul> <li>First, we use AUC, accuracy and the sklearn F2 score to evaluate on the cropped images.</li> <li>Then, we blend with the detection score and check the complete pipeline with the competition F2 metric.</li> </ul> <ol> <li>Too big, I think it is not necessary.</li> </ol>
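<p>For reference, the geometric-mean blending from stage 3 can be sketched as below. This is a minimal illustration with assumed array names and example values, not the team's exact code:</p> <pre><code>import numpy as np

def blend_scores(det_conf, cls_prob):
    """Geometric mean of detector box confidence and crop-classifier probability."""
    det_conf = np.asarray(det_conf, dtype=np.float64)
    cls_prob = np.asarray(cls_prob, dtype=np.float64)
    return np.sqrt(det_conf * cls_prob)

# A confident detection whose crop the classifier rejects gets pulled down,
# which is how the post classification suppresses hard false positives.
print(blend_scores([0.9, 0.4], [0.1, 0.8]))  # -&gt; [0.3, 0.566]
</code></pre>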
TensorFlow - Help Protect the Great Barrier Reef
2nd Solution - YOLOv5
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow - Help Protect the Great Barrier Reef <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Sorry for the late reply, I have already made that dataset public.</p> <p>The splitor here is used to detect when it is a new sequence.</p> <p>The evaluation was always on all the dataset ( include the ones without boxes ). Do not only evaluate on samples with bounding box only, it is not correct.</p> <p>I do not heard about sequential non max suppression. Understanding from the name, i do not think it's NMS. NMS is "suppres" score, drop box, this tracking method is boost score, and try to keep box.</p> <p>I do it manually.</p> <p>13 epochs.</p> <p>Thanks Kaggle for hosting this interesting competition, and thanks my teammates for all of the efforts. Also it's my first detection competition so thanks the community for all the knowledge. </p> <p>Like the solution title, there are nothing fancy in our solution : )</p> <p>Training code: <a href="https://github.com/louis-she/reef-solution" target="_blank">https://github.com/louis-she/reef-solution</a><br> Inference: <a href="https://www.kaggle.com/snaker/yolo-ensemle?scriptVersionId=87785527" target="_blank">https://www.kaggle.com/snaker/yolo-ensemle?scriptVersionId=87785527</a></p> <p><strong>summary</strong></p> <p>Our solution is totally based on YOLOv5, I think there are three keys of this competition, <strong>train/val split</strong>, <strong>resolution</strong> and <strong>luck</strong>.</p> <p><strong>split - by video_id</strong></p> <p>In the early of this competition, we saw that there is a very solid way to split the dataset called "subsequence". After some submissions, the LB is not stable enough, so in the end we use video_id 3 fold split, because in that time, we guess that the LB and private maybe totally different video sequences, besides, score of the CV is now almost positive correlation with the LB.</p> <p><strong>preprocessing - rotation is the key</strong></p> <p>augmentations that works, inherited from YOLOv5 default augmentation, we added:</p> <ul> <li>mosaic [email protected]</li> <li>mixup [email protected]</li> <li>random rotate90</li> </ul> <p>the random rotate90 will boost ~0.02 with both CV and LB, and it also benefits ensembling with different rotation.</p> <p><strong>training</strong></p> <ul> <li>larger model works better</li> </ul> <p>the diversity of YOLOv5 s/m/l/x is not very large, so we only use YOLOv5l6 model, for speed and performance balancing.</p> <ul> <li>training with images that has ground truth only</li> </ul> <p>From YOLOv5 documentation, training with 10% background will produce the best results, but from our experiments, 0% background produce the best results.</p> <ul> <li>multiscaling +- 50%</li> </ul> <p>The YOLOv5 has builtin multiscaling switch, from what i know, scaling matters a lot in detection especially for anchor based detection methods, so we just enable it.</p> <p><strong>postprocessing/tracking</strong></p> <p>postprocessing is focusing on tracking. Instead of using existing tracking methods (which is not working), we use a more simple method, which I called "attention area". 
It is working as follows:</p> <ol> <li>the model has predicted some boxes B at N frame</li> <li>select the boxes from B which has a high confidence( larger than a threshold T ), these boxes are marked as "attention area"</li> <li>in the N + 1 frame, all the predicted boxes (not actually all, but confidence threshold &gt; 0.01), if it has an IoU with the "attention area" larger than 0.5, boost the score of these boxes with S, then do the confidence filtering.<br> The T=0.15 and S=0.1 is a solid choice, this method boost both CV and LB by ~ 0.01.</li> </ol> <p><strong>ensembling</strong></p> <p>after <a href="https://www.kaggle.com/sheep" target="_blank">@sheep</a> release the high resolution trick, we know that one model inference with different resolution will have a huge difference, from our experiments, single model trained with 2400 resolution and inference with 4800 resolution will get the highest LB ~ 0.73, we should be careful here, cause the CV is not true.</p> <table> <thead> <tr> <th>training resolution</th> <th>inference resolution</th> <th>mean OOF F2</th> <th>LB</th> </tr> </thead> <tbody> <tr> <td>2400</td> <td>1280</td> <td>0.4384</td> <td>-</td> </tr> <tr> <td>2400</td> <td>1800</td> <td>0.6342</td> <td>-</td> </tr> <tr> <td>2400</td> <td>2400</td> <td>0.6865</td> <td>~0.6</td> </tr> <tr> <td>2400</td> <td>3200</td> <td>0.6597</td> <td>-</td> </tr> <tr> <td>2400</td> <td>4000</td> <td>0.5402</td> <td>-</td> </tr> <tr> <td>2400</td> <td>4800</td> <td>0.3925</td> <td>0.730</td> </tr> </tbody> </table> <p>And also, train with different resolution will also give different results.</p> <table> <thead> <tr> <th>training resolution</th> <th>inference resolution</th> <th>mean OOF F2</th> </tr> </thead> <tbody> <tr> <td>1280</td> <td>1280</td> <td>0.62</td> </tr> <tr> <td>1800</td> <td>1800</td> <td>0.6685</td> </tr> <tr> <td>2400</td> <td>2400</td> <td>0.6865</td> </tr> <tr> <td>3200</td> <td>3200</td> <td>0.6976</td> </tr> </tbody> </table> <p>all the evidences show that resolution matters a lot, high resolution benefit LB, but not CV, we are not sure if the private dataset has the same distribution as the LB, or just part of it, so chosing the selections is hard, luckly this time we got 4 chances. Another observation is that ensembling model numbers will benefit WBF, so we decide to take the total 9 hours inference quota as much as we can. 
Our winner solution, private 0.737 public 0.648, is ensembled of the following models/resolutions, 4 models inference with 11 resolutions: </p> <table> <thead> <tr> <th>train resolution</th> <th>inference resolution</th> <th>transform</th> </tr> </thead> <tbody> <tr> <td>1280</td> <td>1800</td> <td>-</td> </tr> <tr> <td>1800</td> <td>1800</td> <td>-</td> </tr> <tr> <td>1800</td> <td>2000</td> <td>rotate 90</td> </tr> <tr> <td>2400</td> <td>2400</td> <td>-</td> </tr> <tr> <td>2400</td> <td>2600</td> <td>rotate 180</td> </tr> <tr> <td>2400</td> <td>2800</td> <td>rotate 90</td> </tr> <tr> <td>3200</td> <td>3200</td> <td>-</td> </tr> <tr> <td>3200</td> <td>3200</td> <td>rotate 270</td> </tr> <tr> <td>3200</td> <td>3400</td> <td>-</td> </tr> <tr> <td>3200</td> <td>3600</td> <td>rotate 180</td> </tr> <tr> <td>3200</td> <td>4000</td> <td>rotate 90</td> </tr> </tbody> </table> <p>Thanks for reading, 元宵节快乐。</p> <p><a href="https://github.com/ultralytics/yolov5/blob/master/train.py#L472" target="_blank">https://github.com/ultralytics/yolov5/blob/master/train.py#L472</a></p> <p>yolov5 integrated with wandb by default, you just need to install wandb and everything is out of box.<br> for augmentations, i use albumentations.</p> <p>FYI, I just released the training code.</p> <p>Train: <a href="https://github.com/louis-she/reef-solution" target="_blank">https://github.com/louis-she/reef-solution</a><br> Inference: <a href="https://www.kaggle.com/snaker/yolo-ensemle?scriptVersionId=87785527" target="_blank">https://www.kaggle.com/snaker/yolo-ensemle?scriptVersionId=87785527</a></p> <p>yes, just add S to the confidence of boxes that in the "attention area"</p>
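<p>For illustration, a minimal sketch of the "attention area" boost described above, assuming boxes are [x1, y1, x2, y2] with a confidence score; the IoU helper and the final filtering threshold are assumptions, not the exact competition code:</p> <pre><code>T, S, IOU_THR, FINAL_CONF = 0.15, 0.10, 0.5, 0.15  # T, S, IoU from the write-up; final threshold assumed

def iou(a, b):
    # a, b: [x1, y1, x2, y2]
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / (union + 1e-9)

def boost_with_attention(prev_boxes, prev_confs, cur_boxes, cur_confs):
    # "attention areas" = confident boxes from the previous frame
    attention = [b for b, c in zip(prev_boxes, prev_confs) if c &gt; T]
    kept_boxes, kept_confs = [], []
    for b, c in zip(cur_boxes, cur_confs):  # cur_* already filtered at conf &gt; 0.01
        if any(iou(b, a) &gt; IOU_THR for a in attention):
            c = min(1.0, c + S)  # boost boxes overlapping an attention area
        if c &gt;= FINAL_CONF:      # then do the usual confidence filtering
            kept_boxes.append(b)
            kept_confs.append(c)
    return kept_boxes, kept_confs
</code></pre>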
TensorFlow - Help Protect the Great Barrier Reef
37th place solution - T0m part [Segment Copy-Paste & Progressive Learning]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow - Help Protect the Great Barrier Reef <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks for your comment !!<br> optimizer=Adam, initial_lr=1e-3, warmup_epoch=1.</p> <p>and, I customized augmentation order like below,</p> <pre><code># default if mosaic or not: image = load_mosaic_image() if mixup or not: image2 = load_mosaic_image() image = mixup(image, image2) else: image = load_image() image = augment(image) image = flip(image, p) # mine if mosaic or not: image = load_mosaic_image() else: image = load_image() image = flip(image, p) if mixup or not: if mosaic or not: image2 = load_mosaic_image() else: image2 = load_image() image2 = flip(image2, p) image = mixup(image, image2) image = augment(image) </code></pre> <p>Change<br> only apply mixup when original image is mosaic augmented<br> -&gt; outside mixup augment<br> flip apply after mixup augment<br> -&gt; flip was applied before mixup (Probability)</p> <p>Sorry if this is hard to see, feel free to ask</p> <p>Thanks for your comment, Chris. Yes, I tried yolov5s ~ yolov5x and yolov5s6 ~ yolov5x6, and finally select the above models by comparing the cv results.<br> Basically, based on the size of the image, I think, a model with 6 would be suitable for the scale of this competition, but that wasn't the case in my experiment.</p> <p>Thanks to the hosts for providing an interesting competition, and congratulation to all winners.<br> This is my first detection competition and it was good experience.</p> <p>I want to share my work's key points.</p> <h1>Segment Copy-Paste Augmentation</h1> <p>We thought it was important for the model to learn to detect a small cots. So we tried augmenting the data by attaching a small starfish. However, we thought that a simple copy-paste would result in an unnatural image and overfitting of the background. 
Therefore, we ( <a href="https://www.kaggle.com/aerdem4" target="_blank">@aerdem4</a> ) first train a segmentation model, created a segmentation mask, and then applied copy-paste augmentation using them.</p> <p><a href="https://postimg.cc/MXrry1SC" target="_blank"><img src="https://i.postimg.cc/dtqPc9SV/2022-02-15-14-53-57.png" alt="2022-02-15-14-53-57.png"></a></p> <p>As it is, the style is different, we transform style by shifting RGB value to meet the mean.<br> ex) segment[:, :, 0] = segment[:, :, 0] - (segment[:, :, 0].mean() - background[:, :, 0].mean())</p> <h1>Progressive Learning</h1> <p>To create a model that is robust to noise and size, image size and augmentation were made larger and stronger as the epoch progresses (Progressive Learning)</p> <p><a href="https://postimg.cc/0bRXWPzn" target="_blank"><img src="https://i.postimg.cc/TYp8PPGz/2022-02-15-14-54-11.png" alt="2022-02-15-14-54-11.png"></a></p> <h1>Model</h1> <table> <thead> <tr> <th>Model</th> <th>image_size</th> <th>description</th> <th>PublicLB</th> <th>PrivateLB</th> </tr> </thead> <tbody> <tr> <td>yolov5m</td> <td>3200</td> <td>CustomCopyPaste &amp; Progressive Learning Epoch=25</td> <td>0.629</td> <td>0.699</td> </tr> <tr> <td>yolov5m</td> <td>3000</td> <td>Epoch=8</td> <td>0.662</td> <td>0.691</td> </tr> <tr> <td>yolov5x6</td> <td>1920</td> <td>Epoch=12</td> <td>???</td> <td>???</td> </tr> </tbody> </table> <p>I used TTA and Tracking.</p> <p>train: video-0, 1<br> valid: video-2 (CV: 0.71~0.73)</p> <h1>Ensemble</h1> <p>Ensemble by using wbf.<br> and, weighted conf for each detected bbox size.</p> <p>ex)<br> if (bbox from model_1 and bbox_area &lt; 1000):<br> conf *= 3<br> elif (bbox from model_1 and bbox_area &lt; 4000):<br> conf *= 0.5<br> …<br> and then wbf</p> <table> <thead> <tr> <th>Description</th> <th>PublicLB</th> <th>PrivateLB</th> <th>sub</th> </tr> </thead> <tbody> <tr> <td>3 Model</td> <td>0.638</td> <td>0.711</td> <td>not selected</td> </tr> <tr> <td>3 Model re-train full-data</td> <td>0.638</td> <td>0.696</td> <td>selected</td> </tr> </tbody> </table> <p>CV: 0.77~0.78</p> <p>Thanks :)</p> <p>Thanks for your comment, and congrats too on getting 34th.<br> And, your contribution to this comp made it possible for me to run this competition :)<br> in all my exps, batch_size is 4. 4 is better than 8, 16 ~.</p> <p>Thanks for your comment, and congrats too on getting 45th :)<br> I think that progressive learning has made the model more robust by applying strong augmentation.</p>
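<p>A minimal sketch of the per-channel mean shift used to match a pasted segment to its new background (the one-line example in the Segment Copy-Paste section); it only shows the colour adjustment, not the mask-based pasting itself, and the names are illustrative:</p> <pre><code>import numpy as np

def match_channel_means(segment, background):
    """Shift each RGB channel of the segment so its mean matches the background region."""
    seg = segment.astype(np.float32)
    bg = background.astype(np.float32)
    for c in range(3):
        seg[:, :, c] -= seg[:, :, c].mean() - bg[:, :, c].mean()
    return np.clip(seg, 0, 255).astype(np.uint8)
</code></pre>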
TensorFlow - Help Protect the Great Barrier Reef
3rd place solution - Team Hydrogen
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow - Help Protect the Great Barrier Reef <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Pure Pytorch, probably will not open source it.</p> <p>Yes, we did, and it was our big hope for private. It scored high for public, but not private. It can reach ~0.8+ CV, 0.79 LB, but only scores ~0.69 on private, meaning that private is much more inprecise again.</p> <p>Manually looking and adjusting a subset of frames/boxes to properly fit the actual shape of the COTS and then calculating adjustments from original labels.</p> <p>Thanks to hosts and all participants for this interesting competition. This is a joint writeup with <a href="https://www.kaggle.com/ybabakhin" target="_blank">@ybabakhin</a> (big, big thanks for the awesome team work in this competition) from Team Hydrogen.</p> <h4>Summary</h4> <p>Our solution is a blend of five different object detection model types with tracking.</p> <h4>Cross-Validation</h4> <p>For cross-validation we used two strategies: one is simple split by video, and the other is making 5-folds based on subsequences. We were mostly looking at subsequence Cross-Validation scores, time-to-time checking the video folds performance. This strategy gave us a good correlation between CV and Public LB scores throughout the competition, however it failed for the Private LB. You can read our thoughts about such a behavior below in this post.</p> <h4>Models</h4> <p>Our final solution is a blend of five different model types. All our models are trained on a resolution of 1152x2048 and only using images with boxes. For augmentations we employ a mix of random flips, scale, shift, rotate, brightness, contrats, cutout, cutmix.</p> <p>An additional augmentation we utilize in most of our models is what we call “background-mix”. Here, we randomly mix an image with boxes, with a random image without boxes. This guarantees us to not overlap / distort any boxes, but generalizes better to unseen backgrounds and also randomizes the intensity of objects. Also some models benefit from using cut&amp;paste augmentation by simply cutting and pasting objects to other frames.</p> <p><strong>CenterNet</strong><br> We took an idea from the original CenterNet and substituted backbone with HRNet. The object detection task in CenterNet architecture is treated as a keypoint estimation, so we used x4 downsampled labels (288x512) for predicting heatmaps of: objects centers, width and height, and center offsets. Final submission includes two backbones: hrnet_w18 and hrnet_w32 that produced similar local performance. The models were trained for 10 epochs and were using cut&amp;paste augmentation.</p> <p><strong>FasterRCNN</strong><br> We implemented FasterRCNN to work with any timm backbone and found that NFNet models work particularly well due to generalizability and also easiness to train on small batch sizes. Our final models use eca_nfnet_l1 and dm_nfnet_f0 backbones. We use 4 feature layers and 256 FPN out channels. Models were trained for 10 epochs.</p> <p><strong>FCOS</strong><br> Similar to FasterRCNN, we integrated FCOS into our pipeline to work with any timm backbone and use the same backbones for training. Here, we use 3 feature layers and also 256 FPN out channels. 
Again, models are trained for 10 epochs.</p> <p><strong>EfficientDet</strong><br> We train EfficientDet models using efficientdetv2_dt backbone which has a great balance between complexity, memory usage, and runtime. Models are trained for 20 epochs.</p> <p><strong>YoloV5</strong><br> We trained yolov5l-6 using rectangular training. We adapted the original code to directly optimize F2 score, and also for rectangular training to properly work with random shuffling, mixup, cutout and other granularities. Models were trained using Adam and for 40 epochs. Inference is done on training resolution without tta.</p> <h4>Blending &amp; Tracking</h4> <p>For blending, we employ average WBF blending which is particularly useful here as it automatically incorporates a voting mechanism of boxes to downweight boxes that are not present in all models. </p> <p>For tracking, the issue with adding missing boxes using Kalman Filter is quite tricky, as also there it is tough to estimate width and height, and the public kernel solutions do not work well when models get better. So we use tracking in a slightly different way. We use euclidean distance on the center points to find the tracks, but we keep low confidence boxes for finding the tracks. Then, we are setting the confidence of low confidence boxes to the maximum of the confidences of the track. So let’s assume we have a track with confidences 0.8, 0.7, 0.9, 0.15 - in this case we increase the confidence of the 0.15 box to 0.9. The models are quite good in finding the boxes, but sometimes they are just not confident enough, so this is clearly better than estimating the new boxes from the track with Kalman filter or similar estimates. This tracking brings about 0.01 locally.</p> <p>Another tracking technique that we explored in the last days of the competition was Optical Flow estimation. We used the OpenCV implementation of goodFeaturesToTrack() and calcOpticalFlowPyrLK() methods. It produced pretty nice estimates of diver/camera movements that allowed to better predict the position of starfish across the frames. Unfortunately, it gave only 0.001 improvement in both CV and LB.</p> <h4>Label tightness &amp; Public LB</h4> <p>As probably for most people, our public LB was significantly lower than our CV for the first half of this competition. However, we quite quickly noticed that there has to be something different in public LB vs. training data, mostly because people with higher LB has so much worse CV reported than what we saw locally. After a while, the discussion about higher resolution came up, and we just blindly tried it on LB and it immediately pushed individual CenterNet and RCNN models above 0.70. </p> <p>So then some funny theories emerged on the forums and obviously we also tried to understand how something like this could happen. After several failed theories, we finally found that higher resolution than training, produces boxes that are more tight (exact) around the COTS. In training, labels are very imprecise, and also inconsistent across frames, it even looked semi-automatic. And the size of the boxes plays a huge role in this evaluation metric setting, as 0.8 IOU threshold has a particularly large impact.</p> <p>To check this theory, we just submitted manual pixel adjustment on public LB, i.e. decreasing the size of boxes by fixed pixels, and after tuning this a bit, it even worked better than the upscaling, and was much faster obviously. 
So our theory emerged that test data is labeled better after collecting the following additional evidence:</p> <ul> <li>Higher resolution inference produces tighter boxes, it works better on Public LB</li> <li>Manual pixel adjustment (making boxes smaller) works better on Public LB</li> <li>None of these adjustments works on ANY part in training data</li> <li>Visually and manually checking a few training examples, we found that boxes on training are on average 3px too large in all dimensions. This is the EXACT same amount that worked well for us on public LB. </li> <li>Hand-labeling a small portion of train, finetuning on it, boosted LB significantly.</li> <li>The public LB had sequences from both videos. We checked if the manual adjustment works on both videos, and it did, further strengthening our suspicion that whole test is different.</li> </ul> <p>So we were very confident that it should hold for whole test, but as we know, it did not. So either we completely misinterpreted some of this evidence, or private test is again labeled differently. In general, we personally feel like that public LB should better represent private LB, and in best case it should be similar to training data. This would lead for competitors to overall focus more on improving models, vs. trying to understand the splits. </p> <p>Did you still infer on larger resolution after training on relabeled data? It might be the reason why it didnt contribute.</p> <p>We implemented pipelines for all model types ourselves. <br> And yeah, spent quite some time on it :)</p> <p>We did not make any changes to EfficientDet hyperparameters themselve, even though we tried a lot changing anchors, anchor scales etc. What did work very well there was to increase augmentations quite significantly and adding this background mixing. And then training a bit longer. Also the efficientdetv2 backbones worked very nicely.</p> <p>No, what we mean with that is that we track our subsequence CV, and whenever we get clearly better models, we also check them on video CV setup to make sure they generalize across videos. Implementing the metric and tracking it at every step (even across thresholds) possible for all models is the first thing we always add to the pipeline.</p> <p>yeah, private lb seems to be also quite random</p>
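<p>A minimal sketch of the track-based confidence boosting from the Blending &amp; Tracking section: every detection on a track is raised to the maximum confidence seen on that track. Track construction (euclidean distance on center points, keeping low-confidence boxes) is omitted, and the data layout is an assumption:</p> <pre><code>def boost_track_confidences(tracks):
    """tracks: list of tracks, each a list of dicts {"frame": int, "box": [x1, y1, x2, y2], "conf": float}."""
    for track in tracks:
        best = max(det["conf"] for det in track)
        for det in track:
            det["conf"] = best  # e.g. a (0.8, 0.7, 0.9, 0.15) track becomes all 0.9
    return tracks
</code></pre>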
TensorFlow - Help Protect the Great Barrier Reef
41st place solution (7th Public LB) - YOLO-Z Inspired Model with Tracking
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow - Help Protect the Great Barrier Reef <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I want to start by saying that I am not experienced in data science. I don't exactly know all the bells and whistles in data science but I love to learn and experiment. In terms of this competition, I learned a ton about object detection and it was really fun!</p> <h1>Data</h1> <p>I used a groupkfold split on sequences since the data in private LB will be new sequences, the best way to validate a model (at least in my noob eyes) is with new sequences it has not seen before. I only trained models on the same fold and did not ensemble or use validate models via oof predictions. </p> <h1>Model</h1> <p>My solution revolved around the <a href="https://arxiv.org/pdf/2112.11798.pdf" target="_blank">YOLO-Z</a> paper that looked into improving the original YOLO5 architecture in small object detection. It looks like their proposed model is somewhat included in YOLO5v6. They mention that in their experiments modifying the "width" multiplier to that of the next tier (using L or M "width" for M or S models) they achieved better results in small object detection. Since YOLO5v6 already includes smaller layers (see <a href="https://www.kaggle.com/c/tensorflow-great-barrier-reef/discussion/303766" target="_blank">this</a> discussion thread), I went into the direction of modifying YOLO5v6 models to increase the width multiplier (to see if this does better than the original). This increases the number of parameters but not the number of layers. This would keep inference time lower than using a bigger model and give the best of both tiers.</p> <table> <thead> <tr> <th>Model</th> <th>Layers</th> <th>params (M)</th> <th>FLOPs</th> </tr> </thead> <tbody> <tr> <td>YOLOv5s6</td> <td>280</td> <td>12.6</td> <td>16.8</td> </tr> <tr> <td>YOLOv5s6-wide</td> <td>280</td> <td>27.6</td> <td>35.8</td> </tr> <tr> <td>YOLOv5m6</td> <td>378</td> <td>35.7</td> <td>50.0</td> </tr> <tr> <td>YOLOv5m6-wide</td> <td>378</td> <td>62.6</td> <td>86.5</td> </tr> <tr> <td>YOLOv5l6</td> <td>476</td> <td>76.7</td> <td>111.4</td> </tr> </tbody> </table> <p>To not have to re-train the models, I had to use to transfer weights from the next model (if I want to use YOLOv5s6-wide, use YOLOv5m6 weights, etc.). I still wanted to pre-train weights since I deleted a lot of layers in the process.</p> <p>I only selected YOLOv5s6-wide and YOLOv5m6-wide to train and use.</p> <h1>Training</h1> <p>Because the "wider" models deleted many layers I thought re-training would help better fit the models. After searching a bit I found <a href="https://github.com/chongweiliu/DUO" target="_blank">Detecting Underwater Objects</a> dataset based on competitions listed <a href="https://www.kaggle.com/c/tensorflow-great-barrier-reef/discussion/304367" target="_blank">here</a>. </p> <p>I trained for 100 epochs with heavy augmentations (see <a href="https://wandb.ai/outwrest/gbr/reports/m-wide-duo-1280--VmlldzoxNTYzMDk0?accessToken=q1nh3kbk1u39ydfs53h1ffnuchpig7sntec09i7rhkszjd2dcjw6wenilritoyno" target="_blank">wandb output</a>). </p> <p>I used the COT mask dataset to make 10,000 "fake" images of cots pasted in places with different opacity levels. 
I found that training with this dataset increases recall in the final model (thanks <a href="https://www.kaggle.com/alexandrecc" target="_blank">@alexandrecc</a>). But I don't 100% know if it helps in public LB.<br> See this <a href="https://www.kaggle.com/outwrest/augmentation-using-image-blending-cot-masks" target="_blank">notebook</a> and <a href="https://wandb.ai/outwrest/gbr/reports/m-wide-fake-1280--VmlldzoxNTYzMTQz?accessToken=341vkrc709f82dx5xs0vxlv0j17wqj5asq8wptfc20x7jyun27ud4r2208pu9448" target="_blank">results</a>.</p> <p>Trained again on sequences with 3000 IMG size -&gt; and fined tuned again by training with a much lower LR and less augmentations. I used simple norfair tracking notebook as a postprocessing technique.</p> <h1>Overfitting Public LB</h1> <p>I noticed that by submitting 2x img size (6000) without TTA I was able to overfit LB pretty easily and achieved 7th place in LB with some CONF tunning. </p> <p>The best overfit was YOLOv5s6-wide with 7200 img size and no TTA trained on 3600 img size, 0.769 Public-0.641 Private.<br> The best notebook I had is YOLOv5m6-wide with 6000 img size and TTA trained on 3000 img size, 0.673 Public-0.706 Private.</p> <h1>TLDR</h1> <p>Modified YOLOv5m6 model to include more parameters -&gt; pretrained on DUO dataset -&gt; pretrainedx2 on a "fake" dataset (I am not 100% sure this helped) -&gt; trained on sequences -&gt; finedtuned with lower LR again.</p> <p>Hopefully, this helps. I feel like the score can be improved even with an ensemble and more conf/iou tuning since I did not explore that much. Thanks to other competitors and I hope to learn from your writeups. </p> <p>Thank you! I learned a lot from your posts</p>
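<p>To make the "wide" variants concrete, here is a minimal sketch of YOLOv5-style width scaling: channel counts in the model YAML are multiplied by <code>width_multiple</code> and rounded up to a multiple of 8 (mirroring <code>make_divisible</code> in YOLOv5's <code>parse_model</code>). The channel list and multipliers below are illustrative; the actual -wide models were built by editing the YAML and transferring weights from the next-larger checkpoint:</p> <pre><code>import math

def make_divisible(x, divisor=8):
    # round a scaled channel count up to the nearest multiple of `divisor`, as YOLOv5 does
    return math.ceil(x / divisor) * divisor

def scaled_channels(base_channels, width_multiple):
    return [make_divisible(c * width_multiple) for c in base_channels]

base = [64, 128, 256, 512, 768, 1024]   # nominal backbone channels in a yolov5*6 yaml
print(scaled_channels(base, 0.50))       # s6-like width -&gt; [32, 64, 128, 256, 384, 512]
print(scaled_channels(base, 0.75))       # m6-like width -&gt; [48, 96, 192, 384, 576, 768]
</code></pre>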
TensorFlow - Help Protect the Great Barrier Reef
4th Place Solution - CenterNet
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow - Help Protect the Great Barrier Reef <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>No, I think anchor free base will perform well since the target is simple. (shape, texture, size, etc.)</p> <p>Thank you 😃</p> <p><a href="https://www.kaggle.com/bakeryproducts" target="_blank">@bakeryproducts</a> cv score ranged around 0.78~0.8 @ 1.4x to 2.0x inference size. I the last month, I build a wide range solution to face the possible cots size variance in test set, but it is not the case finally.</p> <p><a href="https://www.kaggle.com/imeintanis" target="_blank">@imeintanis</a> interesting and congratulations 😄</p> <p>although I have model that can get 0.7+ within 45min, but I did not choose it. 😅</p> <p>I have learned a lot from your sharing, thank you.</p> <p>haha, thanks. 😃</p> <p>tf.keras, thanks.</p> <p>I did not try other method, so I can not confirm whether it has advantage. </p> <p>Thank you 😃</p> <p>base = keras_efficientnet_v2.EfficientNetV2B0(input_shape=image_size, drop_connect_rate=0, pretrained="imagenet21k")</p> <p>x = base.get_layer("stack_5_block0_sortcut_swish").output<br> x = DilatedSpatialPyramidPooling(x)</p> <p>input_a = UpSampling2D(size=(2, 2), interpolation="bilinear")(x)<br> input_b = base.get_layer('stack_3_block0_sortcut_swish').output<br> input_b = convolution_block(input_b, num_filters=48, kernel_size=1)</p> <p>features = Concatenate(axis=-1)([input_a, input_b])</p> <p>x = convolution_block(features)<br> x = convolution_block(x)<br> heatmap = Conv2D(num_classes, 1, padding="same", activation='sigmoid', name='hm', dtype=tf.float32)(x)</p> <p>x = Conv2D(256, 3, padding="same", activation='swish')(features)<br> x = Conv2D(256, 3, padding="same", activation='swish')(x)<br> reg = Conv2D(4, 1, padding="same", activation='linear', name='re', dtype=tf.float32)(x)</p> <p>model = keras.Model(inputs=base.input, outputs={'hm':heatmap, 're':reg})</p> <p>oh, I see your update. offset head also is 1/8.<br> I post the model <a href="https://www.kaggle.com/c/tensorflow-great-barrier-reef/discussion/307626#1691065" target="_blank">here</a></p> <p><code>x = Conv2D(256, 3)(features)</code><br> <code>x = Conv2D(256, 3)(x)</code><br> <code>reg = Conv2D(4, 1)(x)</code><br> offset + wh together 😄</p> <p>I train on some crop sizes to full frame randomly. For example, [high, width, batch_size] in [[448, 640, 20], [640, 896, 10], [720, 1280, 6]].<br> Training takes about 1 day on single 2080ti.</p> <p>as usual I use 1 fold.<br> my val set have 4 sequences: [26651, 59337, 53708, 45015]<br> use all cots frames and half none-cots frames for saving time.<br> select checkpoints by local F2 score and public LB score.</p> <p>No, I did not try it.</p> <p>reference CenterNet paper to gen the mask</p> <p>Thanks, it's all luck. 😅</p> <p>I don't know whether I should call it centernet, but objects as points model just like segmentation + regression head.</p> <p>Thanks. Nothing special, just heavily augmentation, perspective transform, rotation, flip, mixup, random crop and batch size every step. 
Focal loss for heatmap, Huber loss for regression, and Adam optimizer.</p> <p>Thanks, I like objects as points method because it is easy to apply post processing.</p> <p>Thank you 😃</p> <p>No, just features -&gt; conv -&gt; conv -&gt; heatmap</p> <p>random selection according to mean and total cots counts, then decide by my mood. 😄</p> <p>I don't like data inconsistency, but you know, it is kaggle style.😅</p> <p>thank you 😄</p> <p>Since overfitting strategy failed, let's just talk about what I've done one month ago. 😅</p> <p>I use CenterNet with DeepLabV3+ architecture and EfficientNetV2 backbone, start from this <a href="https://keras.io/examples/vision/deeplabv3_plus/" target="_blank">example</a>.</p> <ul> <li>change backbone to EfficientNetV2 B0~XL</li> <li>change output heatmap size to 1/8 input, and add regression head</li> <li>training on 1280x720, inference on 1792x1008(1.4x)</li> <li>blend two can get private LB score 0.712, and 0.722 if inference on 1.6x</li> </ul> <p>Actually, I am pleased to survive in huge shake up. 😃</p> <p>Thanks everyone and congratulation to all winners.</p>
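<p>Since the heads are trained as in "objects as points", here is a minimal sketch of building the center heatmap target: a Gaussian peak is splatted at each downsampled object center. The fixed sigma and the 1/8 stride below are simplifying assumptions, not the exact target generation used:</p> <pre><code>import numpy as np

def draw_gaussian(heatmap, cx, cy, sigma):
    """Splat a Gaussian peak (max value 1) centered at (cx, cy) onto the heatmap in place."""
    h, w = heatmap.shape
    ys, xs = np.ogrid[:h, :w]
    g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
    np.maximum(heatmap, g, out=heatmap)  # keep the max where peaks overlap

def build_heatmap(boxes, img_h, img_w, stride=8, sigma=2.0):
    """boxes: list of [x1, y1, x2, y2] in input-image pixels; returns the 1/stride heatmap target."""
    hm = np.zeros((img_h // stride, img_w // stride), dtype=np.float32)
    for x1, y1, x2, y2 in boxes:
        cx, cy = int((x1 + x2) / 2 / stride), int((y1 + y2) / 2 / stride)
        draw_gaussian(hm, cx, cy, sigma)
    return hm
</code></pre>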
TensorFlow - Help Protect the Great Barrier Reef
5th place solution - Poisson blending, detection and tracking
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow - Help Protect the Great Barrier Reef <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>hey,<br> If we paste a generated COTS to other background, it's will help our models as an augmentation method.<br> I had planed to use other COTS from internet to generate samples, but was not sure whether an image or a video is permitted to use or not and I am very busy during this competition, so I did not use any of them.</p> <p>Hey <a href="https://www.kaggle.com/init27" target="_blank">@init27</a>, StyleGAN2, I tried many models in <a href="https://www.kaggle.com/c/generative-dog-images" target="_blank">https://www.kaggle.com/c/generative-dog-images</a> competition and followed progresses after that. The main reason I entered this competition is to check whether GAN and image blending technologies can help solve this kind of problems. </p> <p>Congrats to all the winners, and thanks to organizers.</p> <p>The following ideas and methods helped me survive from this challenging competition.<br> 1.Copy COTS box and paste into background image, apply poisson blending.<br> 2.Several detection models training and inference with different image sizes.<br> 3.Appending boxes to detection results by finding homography matrix.<br> 4.Trust Local CV.</p> <h2>Data</h2> <p>As the dataset is relative small, if we have more samples we can benefit from them.<br> We can generate more star fishes and put them in different under-sea images.<br> <img src="https://i.imgur.com/eRzAF1s.png" alt="poisson.png"></p> <p>The simplest way is to crop the star fishes in train-set and paste it to other images, but we can find two obvious problems.<br> 1.The boundary and color is not real.<br> 2.The COTS is not on reasonable places.</p> <p>I solved these two problems by:</p> <p>1.Poisson blending<br> 2.Training a classification model to predict whether a box with COTS is real or fake.</p> <p>This idea helped me to improve the score.</p> <p>I did not stop my experiments, I tried to train GAN models to generate starfishes, although the generated samples were quite real, the score did not improve. <br> I think it’s limited by the count of unique star fishes are small, if we can use more COTS downloaded from internet, I guess we could improve the score. I am not sure whether an image is allowed to use or not, so I did not use any of them. <br> Other methods such as image harmonization did not bring improvement. 
<br> Anyway, despite quite a lot of time spent without too many gains in this competition, I quite enjoyed it.</p> <h2>Model training</h2> <p><strong>Validation Strategy</strong><br> Split 5 folds by sequence</p> <p><strong>Models</strong><br> YOLOv5-S6, YOLOv5-M6, YOLOv5-L6, YOLOX-L, YOLOR-P6 and HRNetV2P-W18</p> <p><strong>Training details</strong><br> YOLOv5: <br> Network: YOLOv5-S6, YOLOv5-M6, YOLOv5-L6<br> Training-size: 3600<br> Inference-Size: 4800<br> Optimizer: SGD<br> Scheduler: Warm Up + Linear LR + lr=0.01 + 15 epochs<br> Augmentation: hsv, translate, scale, flipud, fliplr, mosaic, mixup, water-augment, transpose</p> <p>YOLOX:<br> Network: YOLOX-L<br> Training-size: 1280<br> Inference-Size: 1600<br> Optimizer: SGD<br> Scheduler: yoloxwarmcos + 20 epochs<br> Augmentation: hsv, flip, degrees, translate, shear, mosaic, mixup, no_aug_epochs=5</p> <p>YOLOR:<br> Network: YOLOR-P6<br> Training-size: 2560<br> Inference-Size: 2560<br> Optimizer: SGD<br> Scheduler: Warm Up + Linear LR + lr=0.005 + 15 epochs<br> Augmentation: hsv, translate, scale, flipud, fliplr, mosaic, mixup, water-augment, transpose</p> <p>HRNet:<br> Network: HRNetV2P-W18<br> Training-size: 3600<br> Inference-Size: 3600<br> Optimizer: SGD<br> Scheduler: Warm Up + linear LR + lr=0.02 + 10 epochs<br> Augmentation: Resize, RandomFlip</p> <h2>Tracking</h2> <p>If the detection model has found a box in previous frames and we can predict the box in the current frame by the following way:<br> 1.Kalman Filter<br> 2.Optic Flow<br> 3.Finding homography matrix and then apply the matrix to get the box in the current frame.</p> <p>I find the third method is the best for this dataset.<br> We can get keypoint descriptors by using SuperPoint/SuperGlue and then find homography matrix.</p> <p><strong>The Tracking pipeline is:</strong><br> 1.If there is a box detected a frame t-10, then append a box using the homography matrix to frame t-9 unless there already a box is there(IOU greater than 0.4). Then repeat this procedure from t-9 until current frame.<br> 2.Apply DeepSort to get a track.<br> 3.Determining a box on a track is kept or not by the ratio of the model predicted boxes. 
If the ratio is too low, the track may be False Positive.</p> <h2>Ensemble</h2> <p>Calculating IOU between the boxes which predicted by different models in an image.<br> If max IOU with other boxes of a box is less than 0.55,then drop this box.<br> The remaining boxes are ensembled using WBF.</p> <h2>Results</h2> <p><img src="https://i.imgur.com/p2iHvpi.png" alt="result1.png"><br> <img src="https://i.imgur.com/6Hm3GRQ.png" alt="result2.png"></p> <ol> <li><p>The bbox in training set is real, so the target is 1, the poisson blending COTS boxes is fake, the target is 0.</p></li> <li><p>Any classification model is OK for this task, such resnet34.</p></li> <li><p>bbox, resize to 64x64.</p></li> <li><p>1.4k fake images</p></li> </ol> <p><a href="https://www.cs.jhu.edu/~misha/Fall07/Papers/Perez03.pdf" target="_blank">https://www.cs.jhu.edu/~misha/Fall07/Papers/Perez03.pdf</a><br> <a href="https://github.com/PPPW/poisson-image-editing" target="_blank">https://github.com/PPPW/poisson-image-editing</a></p> <p>I generated many many fake images by using copy&amp;paste then poisson blending, then selected 1.4k of them by the model.<br> Since the fake image is generated by pasting COTS to a background, the backgroud itself can be patch with size 64x64 or large, after we paste the COTS to it ,then resize to 64x64.</p> <p>The box containing real COTS as positive sample, the box containing fake/blending COTS as negative sample, training a binary classification model(Resnet34). Then use this model to predict a fake sample, if the score as positive sample is high, then use this fake sample.</p> <p>yes! Then I selected the generated COTS with different backgroud using the classification model.</p> <p>I think we should fill the box containing COTS with 255 to generate the mask. <br> btw, I did not use the lib indeed, I gave you as reference.</p> <p><a href="https://www.kaggle.com/w3579628328" target="_blank">@w3579628328</a> <br> yes, bigger bbox to include some background…</p>
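<p>A minimal sketch of the Poisson-blending paste using OpenCV's seamlessClone; the solution above used its own implementation (see the Pérez et al. paper and the repository linked before), so treat this as an equivalent off-the-shelf illustration with assumed variable names:</p> <pre><code>import cv2
import numpy as np

def paste_cots(background, cots_patch, top_left):
    """Paste a cropped COTS patch into a background frame with Poisson blending.

    background: HxWx3 uint8 BGR frame, cots_patch: hxwx3 uint8 BGR crop,
    top_left: (x, y) where the patch's top-left corner should land.
    """
    h, w = cots_patch.shape[:2]
    mask = 255 * np.ones((h, w), dtype=np.uint8)            # blend the whole patch
    center = (top_left[0] + w // 2, top_left[1] + h // 2)   # seamlessClone takes the patch center
    blended = cv2.seamlessClone(cots_patch, background, mask, center, cv2.NORMAL_CLONE)
    new_box = [top_left[0], top_left[1], top_left[0] + w, top_left[1] + h]
    return blended, new_box
</code></pre>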
TensorFlow - Help Protect the Great Barrier Reef
63rd place solution - Single Cascade R-CNN + Norfair
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow - Help Protect the Great Barrier Reef <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I would like to thank organizers and all participants.</p> <p>Detector: Cascade R-CNN, Res2Net-101, 2560x1440<br> Tracker: Norfair</p> <p>Training config:<br> <a href="https://github.com/shinya7y/UniverseNet/blob/master/configs/datasets/kaggle_gbr_cots/cascade_rcnn_r2_101_fpn_fp16_4x2_mixup_affine_hsv_1440_lr002_7e_cots_val2.py" target="_blank">https://github.com/shinya7y/UniverseNet/blob/master/configs/datasets/kaggle_gbr_cots/cascade_rcnn_r2_101_fpn_fp16_4x2_mixup_affine_hsv_1440_lr002_7e_cots_val2.py</a></p> <p>Inference notebook:<br> <a href="https://www.kaggle.com/shinya7y/great-barrier-reef-cascade-r-cnn-private-0-694" target="_blank">https://www.kaggle.com/shinya7y/great-barrier-reef-cascade-r-cnn-private-0-694</a><br> This notebook version: 0.694 (trained on video 0, 1)<br> Submission version: 0.685 (trained on video 0, 1, 2; I should have tuned hyperparameters 😢)</p>
TensorFlow - Help Protect the Great Barrier Reef
69th Place - WBF with Threshold
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow - Help Protect the Great Barrier Reef <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>UPDATE: This technique helped win 2nd place in Kaggle NLP competition <a href="https://www.kaggle.com/c/feedback-prize-2021/discussion/313389" target="_blank">here</a></p> <p>Ah, gotcha thanks. I just submitted my models using <code>IOU=0.9</code> and WBFT (with IOU=0.4) afterward, i will report back in 9 hours.</p> <p>I don't understand the following line, can you explain this more?</p> <blockquote> <p>we used weights as well to create "more" bboxes from one model</p> </blockquote> <p>Good idea (IOU=0.9), i will try this.</p> <blockquote> <p>Having threshold is really great way to controll FP prediction in multi model inference.</p> </blockquote> <p>Yes, during the comp my best LB was just single model (1 fold) so I didn't focus on ensemble. But with <code>WBFT</code> we can benefit from ensemble. Without <code>WBFT</code> then using multiple model ensemble adds too many low conf FP and hurts competition metric of F2</p> <p>UPDATE 2 (after comp ended): Instead of adding TTA, I trained a Yolov5M6 at 3072 with same settings as my Yolov5L6. Then I submit 5 folds out of 10 Yolov5M6 (without TTA) and 5 folds out of 10 Yolo5L6 (without TTA). When using <code>WBF_CONF</code>, the result is private LB 0.709 and 20th place! This confirms that <code>WBF_CONF</code> is a powerful idea. If I add even more models (with different backbones and use different image sizes), I assume the LB will keep climbing!</p> <p><a href="https://www.kaggle.com/lukaszborecki" target="_blank">@lukaszborecki</a> We should read the code in GitHub WBF. I'm not sure how it computes the new <code>conf</code> value when one model has multiple boxes (as result of using <code>model.conf=0.9</code>). For example, let's say we ensemble 4 models. And model 1 has 2 boxes (with conf 0.7 and 0.8) that overlap with IOU = 0.6. And model 2 has 1 box (with conf 0.6) that overlaps with those box with IOU = 0.5. And model 3 and 4 has zero boxes.</p> <p>How does WBF compute the new <code>conf</code>? For example, I do not think it does <code>(0.7 + 0.8 + 0.6)/3</code> because the denominator should be 4 for 4 models. So does WBF somehow combine the two boxes from model 1 first? And then do <code>(0.75 + 0.6 + 0 + 0)/4</code>? I'm not sure</p> <p>UPDATE: I updated post to say "WBFT decreases FP". It does not decrease FN. Using more than 1 model, i.e. ensembling a diversity of models is what decreases FN. But we must use WBFT (instead of WBF) so that the decrease in FN does not add too many FP.</p> <p><a href="https://www.kaggle.com/kmizunoster" target="_blank">@kmizunoster</a> thanks for pointing this out. I updated my discussion to say "too many FP". I incorrectly said "too many FN". Using WBFT decreases FP not FN. Using more than 1 model (i.e. ensembling) helps decrease FN but we must balance the addition of boxes with WBFT so that we don't have too many FP.</p> <p>Thanks Lukasz. Congratulations to you and your team!</p> <p>WBFT works great. I guess as an alternative some teams may have just increased <code>model.conf</code>. 
For example, if you want <code>WBF_CONF = 0.1</code>, you could probably use <code>model.conf = 0.5</code> and use 5 models in ensemble. Then the smallest <code>conf</code> you will get after WBF without WBFT is <code>0.5/5 = 0.1</code>. But using WBFT with <code>model.conf = 0.05</code> and <code>WBF_CONF = 0.1</code> should achieve a better CV LB.</p> <p>Hi Remek, congratulations to you and team on achieving 34th place Silver. Thanks for all your sharing.</p> <p>For CV, I used public notebook subsequences (not sequences) <a href="https://www.kaggle.com/julian3833/reef-a-cv-strategy-subsequences" target="_blank">here</a> and did stratified group K fold. The following code gives each fold the same number of annotated frames and keeps subsequences in their own fold:</p> <pre><code>from sklearn.model_selection import StratifiedGroupKFold sgkf = StratifiedGroupKFold(n_splits=10) for fold, (t_idx, v_idx) in enumerate( sgkf.split(df, df.has_annotations, df.subsequence_id) ): df.loc[v_idx,'fold'] = fold </code></pre> <p>In retrospect, since the test data was different videos, it may have been better to use 3 folds with each video in its own fold. Then after finding the best hyparameters, train with all 3 videos.</p> <p>For my experiments, i mainly ran the same <code>fold=5</code> over and over with different settings. I mainly tried different Yolo backbone, image size, batch size, number of epochs. I didn't have too much time to adjust augmentations nor try models other than Yolov5.</p> <p>Congratulations Editoxic. You did well.</p> <p>Hi, i used sheep's augmentations in his notebook <a href="https://www.kaggle.com/steamedsheep/yolov5-high-resolution-training/notebook" target="_blank">here</a>. Check out code cell 5. I experimented by changing the default anchors in <code>yolov5l6.yaml</code> and changed some augmentations but my final solution uses sheep's script as is and uses yolo default anchors. (I joined late and didn't have enough time to find better).</p> <p>The main things i changed were model backbone, image size, batch size, number of epochs, and how many frames without cots to add. I picked one fold and kept training that fold over and over with different settings.</p> <p>Thanks vad13irt!</p> <p>Thanks Aruna</p> <p>UPDATE 1 (after comp ended): Instead of inferring 10 folds (in 7 hours), I just submitted with 5 of the 10 folds and added TTA to each fold (which takes 9 hours). The public LB boost 0.018 and the private LB boost 0.011. The private rank boost from 75th place to 40th place!</p> <p>Yolo TTA is very powerful because it infers the image at 3 different image sizes which adds more diversity than adding more folds. TTA also does flips. So using TTA is better than using more of the original folds. From Yolo's page <a href="https://github.com/ultralytics/yolov5/issues/303" target="_blank">here</a></p> <blockquote> <p>Note that inference with TTA enabled will typically take about 2-3X the time of normal inference as the images are being left-right flipped and processed at 3 different resolutions, with the outputs merged before NMS. Part of the speed decrease is simply due to larger image sizes (832 vs 640), while part is due to the actual TTA operations.</p> </blockquote> <p>I trained for 10 epochs using cosine schedule with 1 epoch warm up. I used the epoch model weights with the best F2 score which was usually epoch 7. 
For CV, i used group KFold on subsequences <a href="https://www.kaggle.com/julian3833/reef-a-cv-strategy-subsequences" target="_blank">here</a> (which are different than sequences).</p> <p>It is the same idea as the following. If we set <code>model.conf = 0.01</code>, does that help or hurt? This hurts because the model predicts too many boxes.</p> <p>When we ensemble 10 models, if the majority of models predict the same box, then it is most likely a TP. But what happens when only 1 model predicts a box, and the other 9 models do not predict that box? (It is most likely FP).</p> <p>In this case, WBF takes the average of the first model's box conf with 9 zeros. So the resultant box conf becomes very small like <code>conf = 0.02</code> or something. The library WBF does not remove this box even though the resultant <code>conf</code> is very small. You must write your own code like mine above to remove boxes like this.</p> <h1>Single Model Yolov5L6</h1> <p>Thanks Kaggle for a fun competition. Thank you Kagglers for sharing many great discussions and notebooks! My final solution is a single 10-fold Yolo5L6 without TTA without tracking. Train 3072, infer 3072 img size.</p> <h1>WBF with Threshold</h1> <p>Just this morning, I realized that the awesome WBF GitHub repository <a href="https://github.com/ZFTurbo/Weighted-Boxes-Fusion" target="_blank">here</a> doesn't apply a threshold after fusing boxes. So with 10 hours until competition deadline, I updated my final submission and submitted again! It finished just in time and jumped me into Silver!</p> <p>If you have 10 fold models. And each model is inferred box with <code>conf = 0.1</code>. Then afterward when we apply WBF, it is possible that 3 models find a starfish with confidences <code>0.3, 0.2, 0.1</code> and the other 7 fold models do not find that box. Then the average confidence is <code>0.06 = (0.3 + 0.2 + 0.1 + 0 + 0 + 0 + 0 + 0 + 0 + 0)/10</code>. So we need to remove this box (if we don't wish to have boxes under <code>conf = 0.1</code>) as follows:</p> <pre><code>WBF_CONF = 0.1 boxes, scores, labels = weighted_boxes_fusion(ALL_MODELS) filtered boxes=[], filtered_scores=[] for k, box in enumerate(boxes): if scores[k]&lt;WBF_CONF: continue filtered_boxes.append(box) filtered_scores.append(scores[k]) </code></pre> <p>Using this code boosted my final submission Private LB 0.609 to <strong>Private LB 0.684</strong>! Wow. This is important to do if the competition metric is F2 because we are penalized with too many FP (so it's important to establish a conf threshold). However if the competition metric is <code>mAP</code> then it would not matter since with <code>mAP</code> we are not penalized for FP (explained <a href="https://www.kaggle.com/c/vinbigdata-chest-xray-abnormalities-detection/discussion/229637" target="_blank">here</a>)</p> <h1>Skip_Box_Thr is not WBF_CONF</h1> <p>Note that <code>weighted_boxes_fusion</code> has a parameter called <code>skip_box_thr</code>. This filters input boxes not output boxes. In general, this parameter makes no difference because we already use <code>model.conf = 0.1</code> which means that we only input boxes with <code>conf&gt;=0.1</code> into our WBF. There is no parameter for filtering output boxes. 
So when competition metric is <code>F1</code> or <code>F2</code>, we must write our own code like above.</p> <h1>Model Training - 10 Folds</h1> <p>For my object detection model, I trained 10 folds of <code>Yolov5L6</code> with <code>image size = 3072</code>, <code>batch_size = 8</code>, and <code>epochs = 10</code> using 4xV100 32GB GPU. Thanks Nvidia! Training took 2.5 hours per fold (i.e. 15 minutes per epoch). I tried a few image sizes and this achieved the best CV score.</p> <p>I used <a href="https://www.kaggle.com/steamedsheep" target="_blank">@steamedsheep</a> training script <a href="https://www.kaggle.com/steamedsheep/yolov5-high-resolution-training" target="_blank">here</a> and <a href="https://www.kaggle.com/hjulian3833" target="_blank">@hjulian3833</a> subsequence CV folds <a href="https://www.kaggle.com/julian3833/reef-a-cv-strategy-subsequences" target="_blank">here</a>. I used the best model each epoch as determined by Yolo F2 metric described <a href="https://www.kaggle.com/sanchitvj" target="_blank">here</a> and <a href="https://www.kaggle.com/c/tensorflow-great-barrier-reef/discussion/300638#1658343" target="_blank">here</a>. I trained using positive frames only.</p> <h1>Model Inference - 10 Folds</h1> <p>For inference, I inferred 10 fold models with <a href="https://www.kaggle.com/awsaf49" target="_blank">@awsaf49</a> notebook <a href="https://www.kaggle.com/awsaf49/great-barrier-reef-yolov5-infer" target="_blank">here</a>. I inferred at <code>image size = 3072</code> with <code>iou = 0.4</code> and <code>conf = 0.05</code>. I inferred at the same size as training since this achieved the best CV score. I ensembled fold models using <a href="https://www.kaggle.com/zfturbo" target="_blank">@zfturbo</a> awesome WBF GitHub library <a href="https://github.com/ZFTurbo/Weighted-Boxes-Fusion" target="_blank">here</a> and I added my own WBF threshold shown above at <code>WBF_CONF = 0.1</code> (since this was best <code>conf</code> for CV score).</p> <h1>No Tracking, No TTA</h1> <p>Using the public notebook tracking code did not improve my CV score, so I did not use it. In order to infer 10 folds of Yolov5L6 in 7 hours, I did not use TTA.</p> <h1>CV 0.732, Public LB 0.591, Private LB 0.684</h1> <p>To compute competition metric, I used <a href="https://www.kaggle.com/bamps53" target="_blank">@bamps53</a> notebook <a href="https://www.kaggle.com/bamps53/competition-metric-implementation" target="_blank">here</a>. My final single model achieved 10 fold CV 0.732, Public LB 0.591, and Private LB 0.684</p> <h1>Thank You</h1> <p>Thank you everyone for sharing helpful discussions and notebooks. My solution was possible because of everyone's generous sharing.</p> <h1>UPDATE 1 (after comp ended):</h1> <p>Instead of inferring 10 folds. I just submitted with 5 of the 10 folds and added TTA. The public LB boost 0.018 and the private LB boost 0.011. The private rank boost from 75th place to 40th place! </p> <h1>UPDATE 2 (after comp ended):</h1> <p>Instead of adding TTA, I trained a Yolov5M6 at 3072 with same settings as my Yolov5L6. Then I submit 5 folds out of 10 Yolov5M6 (without TTA) and 5 folds out of 10 Yolo5L6 (without TTA). When using <code>WBF_CONF</code>, the result is private LB 0.709 and 20th place! This confirms that <code>WBF_CONF</code> is a powerful idea. If I add even more models (with different backbones and use different image sizes), I assume the LB will keep climbing!</p> <p>Awesome work <a href="https://www.kaggle.com/markpeng" target="_blank">@markpeng</a> . 
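<p>For completeness, a runnable, self-contained sketch of the post-fusion confidence threshold described in the "WBF with Threshold" section, assuming the ZFTurbo <code>weighted_boxes_fusion</code> interface with boxes normalized to [0, 1]; the fold predictions passed in and the 0.1 threshold are placeholders:</p> <pre><code>from ensemble_boxes import weighted_boxes_fusion

WBF_CONF = 0.1  # threshold applied AFTER fusion

def fuse_and_filter(boxes_list, scores_list, labels_list, iou_thr=0.4):
    """Each *_list has one entry per fold model for a single frame."""
    boxes, scores, labels = weighted_boxes_fusion(
        boxes_list, scores_list, labels_list, iou_thr=iou_thr, skip_box_thr=0.0)
    keep = scores &gt;= WBF_CONF  # drop fused boxes supported by too few folds
    return boxes[keep], scores[keep], labels[keep]
</code></pre>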
Congrats on strong Silver solo finish! Using <code>RandomSizedBBoxSafeCrop</code> is a great idea. That let's you train with large images and have large batches. What does <code>erosion_rate</code> do?</p> <p>I think since you used a high <code>conf = 0.5</code> for each model, then if only 1 of your 7 models predicted a bbox, it would become <code>0.5 / 7 = 0.07</code> probability which is still acceptable. Alternatively, you could lower each model to <code>conf = 0.1</code> and use <code>WBF_CONF = 0.1</code> like the code in my post above.</p> <p>Note that <code>skip_box_thr=0.01</code> filters input boxes which in your case does nothing since each input box is already <code>&gt;=0.5</code>.</p> <p>Yes <a href="https://www.kaggle.com/awsaf49" target="_blank">@awsaf49</a> , I never realized this before either until just this morning. In previous competitions with <code>mAP</code> metric it didn't matter (like our SIIM-FISABIO-RSNA COVID-19 Detection comp) but in this competition with <code>F2</code> (or <code>F1</code>) it does matter.</p> <p>I discovered when I increased my inference from using 6 out of 10 folds to 9 out of 10 folds. I couldn't understand why the LB score decreased. Then all of a sudden, it occurred to me.</p> <p>Thanks <a href="https://www.kaggle.com/trushk" target="_blank">@trushk</a> . Congrats to you and your team for 6th place Gold. Fantastic!</p> <p>I train on just images with cots. I tried using <code>1:1</code> frames <code>without:with</code>. And <code>2:1</code> and <code>3:1</code>. But using <code>1:0</code> was best. </p> <p>Note that even when we train with 100% frames with cots, the model still learns background without cots. Many people don't realize this. For each training image, the model will place thousands of different sized rectangles on the train image. Then some rectangles will have cots and some will not. The model will learning to predict which boxes have cots and which boxes do not. So the model learns what ocean without cots looks like (even when we use all positive images).</p>
TensorFlow - Help Protect the Great Barrier Reef
6th place solution, yolov5m6/s6, trust CV, post classification.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow - Help Protect the Great Barrier Reef <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Yes. How to construct post-classification training samples is a problem. </p> <p>Thank you. You &amp; your teammate shared a lot of great ideas; we all benefited from them.</p> <p>Our team did a good job!</p> <p>Thank you! Congrats on the SOLO silver medal.</p> <p>Congratulations to all teams that trusted their CV; the private board shook hugely. Thanks to my teammates <a href="https://www.kaggle.com/khyeh0719" target="_blank">@khyeh0719</a> <a href="https://www.kaggle.com/trushk" target="_blank">@trushk</a>. I had a great journey in this competition.</p> <h2>Pipeline</h2> <p><img src="https://raw.githubusercontent.com/EvilPsyCHo/kaggle-detect-starfish/main/asset/arch.png" alt="pipeline"></p> <h2>Trust your CV</h2> <p>We split the data by video_id, calculated the F2-score on the whole OOF data, and found some interesting things.</p> <ol> <li>CV shows the model works great when the inference image resolution equals the training image resolution, but the LB needs x2~x2.5 inference resolution to get a nice score.</li> <li>Tracking improves the score by about 0.002 on both CV and LB.</li> <li>Post classification improves the score by about 0.006~0.01 on both CV and LB.</li> <li>Inferring with low conf and using a WBF ensemble gives us about a 0.02 boost.</li> </ol> <p>Finally, we chose our 4 subs with this strategy:</p> <ol> <li>best CV sub w/ normal resolution -&gt; LB 0.616 / Private 0.723</li> <li>best LB sub -&gt; LB 0.716 / Private 0.692 (big shake)</li> <li>best CV x1.5 resolution -&gt; LB 0.668 / Private 0.727</li> <li>best CV x2.0 resolution -&gt; LB 0.700 / Private 0.711</li> </ol> <h2>Best CV Model</h2> <ol> <li>post classification uses a Swin Transformer (a rough sketch follows after the table below)</li> <li>tracking uses the Norfair public setting</li> <li>WBF ensemble</li> <li>single models are as follows:</li> </ol> <table> <thead> <tr> <th>name</th> <th>train config</th> <th>infer config</th> </tr> </thead> <tbody> <tr> <td>s1</td> <td>yolov5m6_e5_bs1_lr01_img3000_public</td> <td>sz3600_nmsconf0.01_nmsiou0.2</td> </tr> <tr> <td>s3</td> <td>yolov5m6_e5_bs1_lr01_img3000_public</td> <td>sz2412_nmsconf0.01_nmsiou0.2</td> </tr> <tr> <td>s15</td> <td>yolov5s6_e5_bs2_lr01_img3000_public</td> <td>sz3000_nmsconf0.01_nmsiou0.2</td> </tr> <tr> <td>s61</td> <td>yolov5m6_e10_bs8_lr01_img3600_cp_nb8</td> <td>sz3600_conf0.01_iou0.2</td> </tr> <tr> <td>s62</td> <td>yolov5s6_e10_bs8_lr01_img3000_cp_nb8</td> <td>sz3000_conf0.01_iou0.2</td> </tr> <tr> <td>s63</td> <td>yolov5m6_e10_bs8_lr01_img3600_cp_nb8</td> <td>sz2412_conf0.01_iou0.2</td> </tr> </tbody> </table>
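<p>A rough sketch of the post-classification step (not the team's exact code): crop each detector box, score the crop with an image classifier fine-tuned on COTS / not-COTS crops, and blend the two confidences. The Swin checkpoint name, the 0.5/0.5 blend, and the thresholds are assumptions for illustration only.</p> <pre><code>import timm
import torch
import torchvision.transforms.functional as TF

# in practice this classifier head would be fine-tuned on box crops first
cls_model = timm.create_model("swin_base_patch4_window7_224",
                              pretrained=True, num_classes=2).eval()

@torch.no_grad()
def rescore_boxes(image, boxes, det_scores, blend=0.5, keep_thr=0.15):
    """image: CHW float tensor; boxes: [N, 4] xyxy pixel tensor; det_scores: [N] tensor."""
    kept_boxes, kept_scores = [], []
    for (x1, y1, x2, y2), det_s in zip(boxes.int().tolist(), det_scores.tolist()):
        crop = TF.resized_crop(image, y1, x1, max(y2 - y1, 1), max(x2 - x1, 1),
                               size=[224, 224])
        cls_s = cls_model(crop.unsqueeze(0)).softmax(-1)[0, 1].item()  # P(COTS)
        score = blend * det_s + (1 - blend) * cls_s
        if score &gt;= keep_thr:
            kept_boxes.append([x1, y1, x2, y2])
            kept_scores.append(score)
    return kept_boxes, kept_scores
</code></pre> <p>The point is that the classifier sees a tight, higher-resolution crop and can veto detector false positives; the write-up reports roughly a 0.006~0.01 F2 gain from this step.</p>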
TensorFlow - Help Protect the Great Barrier Reef
7th Place Solution - Cascade RCNN + Tracking (Public LB is not all you need.)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow - Help Protect the Great Barrier Reef <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We sincerely thank the TensorFlow organization for hosting this awesome competition. Please allow me to send my best regards to my teammates for their dedicated efforts. <br> This is the first detection competition for our team and it is a great honor for us to obtain the final score of Rank 7. <br> The overall solution is based on a highly customized Cascade RCNN with tracking as a post-process method. Moreover, good fortune and the robustness of the overall design played a vital role in this competition. </p> <h1>Summary</h1> <p>The main idea of our method is based on a stronger baseline <strong>Cascade RCNN model design</strong> with lots of customized features, a <strong>carefully-crafted data augmentation strategy</strong> and an <strong>object tracking post-procedure</strong>. Our method is a single model approach and it does not rely on any ensemble strategy. </p> <h2>Data Split. [Update]</h2> <p>At the beginning of the competition, we split the dataset randomly by video ids into a training set and a validation set. <br> We used the train-val split to select an appropriate model. After selecting an appropriate model design, we switched to the full trainset and performed the comparison on the Public LB. </p> <h2>Stronger Baseline of Cascade RCNN.</h2> <p>The team has designed a customized Cascade RCNN baseline. The majority of model improvements are listed as follows (an illustrative config fragment follows at the end of this post). </p> <ol> <li>Stronger Backbone: ResNet -&gt; ResNeXt -&gt; Res2Net -&gt; CBNet; </li> <li>Enhanced FPN: FPN -&gt; PAFPN; </li> <li>Customized Detection Heads: Cascade RCNN Head -&gt; Double Head Cascade RCNN Head;</li> <li>Loss function: Smooth L1 loss -&gt; IoU Based Loss;</li> </ol> <h2>Carefully-Crafted Data Augmentation Strategy.</h2> <ol> <li>Weak Aug: Flip, RandomBrightnessContrast, RGBShift, HueSaturationValue, Noise, CLAHE, Affine, Rotate</li> <li>Strong Aug: Copy-Paste, Mosaic, AutoAugmentation V1 policy, Mixup, Cutout</li> <li>MS Training and Testing: [0.8 * image_size, 1.2 * image_size]</li> </ol> <h2>Object Tracking as Post Process.</h2> <p>Inspired by <a href="https://www.kaggle.com/parapapapam" target="_blank">@parapapapam</a> (<a href="https://www.kaggle.com/parapapapam/yolox-inference-tracking-on-cots-lb-0-539)" target="_blank">https://www.kaggle.com/parapapapam/yolox-inference-tracking-on-cots-lb-0-539)</a>, we adopted Norfair tracking as a post-process and gained a performance boost. Thanks for the kind sharing.</p> <h2>Public LB is not all you need.</h2> <p>With the above-mentioned methodology, the proposed method suffered a significant performance drop on the public LB. The team was in deep desperation when our public LB rank dropped to around No. 1000. Thankfully, the proposed method demonstrated its robustness on the private LB. </p> <p>We learned a lot from the competition, and our method could have gotten an additional performance boost if we had had more time to explore model ensemble strategies. </p> <h2>Most Commonly Asked Questions:</h2> <p>Thank you to everyone who shows great respect to this competition. Our implementation of Cascade RCNN is built on [1]. 
Since Kaggle notebooks are not friendly to MMDetection codebases (all custom code turns into private_datasets… ) and due to some private code issues (company limitations), we will clean up the codebase, construct a public notebook, and share detailed configurations in the future. <br> [1] <a href="https://github.com/shinya7y/UniverseNet" target="_blank">https://github.com/shinya7y/UniverseNet</a></p> <ol> <li>It might be confusing due to the poor writing. We only use the offline validation set in the model selection procedure (train on the split trainset and validate on the split valset). When we switch to full dataset training, we only rely on the Public LB. </li> </ol> <ol> <li>As for PAFPN, you can refer to [1] for detailed implementations. </li> <li>The weak augmentations are image augmentation operations that always work on the majority of OD tasks. The strong augmentations are not stable and require lots of experiments.</li> <li>AutoAugmentation V1 policy: you can refer to [2] for the auto-augmentation policy. </li> </ol> <p>[1] <a href="https://github.com/open-mmlab/mmdetection/blob/master/configs/pafpn/faster_rcnn_r50_pafpn_1x_coco.py" target="_blank">https://github.com/open-mmlab/mmdetection/blob/master/configs/pafpn/faster_rcnn_r50_pafpn_1x_coco.py</a><br> [2] Learning Data Augmentation Strategies for Object Detection <a href="https://arxiv.org/pdf/1906.11172" target="_blank">https://arxiv.org/pdf/1906.11172</a></p> <ol> <li>Sorry, the record is missing. We are trying to find the detailed information on the CV score for the different video splits. </li> <li>We follow the 1x training schedule as in common mmdet configurations.</li> </ol> <p>We will clean the codebase and ask for public permission from the company to publish the solution as soon as possible. You can refer to <a href="https://github.com/shinya7y/UniverseNet" target="_blank">https://github.com/shinya7y/UniverseNet</a>, where the majority of features are implemented.<br> We found that stronger backbones did benefit the two-stage design, and the IoU-based loss function also works.</p> <p>We will clean the codebase and ask for public permission from the company to publish the solution as soon as possible. You can refer to <a href="https://github.com/shinya7y/UniverseNet" target="_blank">https://github.com/shinya7y/UniverseNet</a>, where the majority of features are implemented. <br> E.g. stronger backbone, enhanced FPN, and the loss function for bbox regression. <br> We did offline validation on the train-val split to select an appropriate model. (Unfortunately, after dropping to rank #200, we followed popular notebooks and did not improve our two-stage model.)</p> <p>Your repo was a great help. I wish to provide more implementations for your codebase in the near future.</p>
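<p>An illustrative MMDetection-style config fragment showing the kind of swaps described above (PAFPN in place of FPN, and multi-scale "range" resizing around the base frame size). This is a sketch under assumed sizes and keys, not the team's released configuration.</p> <pre><code># assumes MMDetection 2.x config conventions; sizes are illustrative
_base_ = "cascade_rcnn_r50_fpn_1x_coco.py"

model = dict(
    neck=dict(
        type="PAFPN",                      # enhanced FPN
        in_channels=[256, 512, 1024, 2048],
        out_channels=256,
        num_outs=5,
    ),
)

# multi-scale training around a 1280x720 frame: roughly [0.8x, 1.2x]
train_pipeline = [
    dict(type="LoadImageFromFile"),
    dict(type="LoadAnnotations", with_bbox=True),
    dict(type="Resize",
         img_scale=[(1024, 576), (1536, 864)],
         multiscale_mode="range",
         keep_ratio=True),
    dict(type="RandomFlip", flip_ratio=0.5),
    dict(type="Normalize",
         mean=[123.675, 116.28, 103.53],
         std=[58.395, 57.12, 57.375],
         to_rgb=True),
    dict(type="Pad", size_divisor=32),
    dict(type="DefaultFormatBundle"),
    dict(type="Collect", keys=["img", "gt_bboxes", "gt_labels"]),
]
</code></pre> <p>The backbone, double-head, and IoU-loss changes would be further overrides in the same style; the UniverseNet repo referenced above contains working implementations.</p>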
TensorFlow - Help Protect the Great Barrier Reef
8th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow - Help Protect the Great Barrier Reef <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks! I used multiscale training from 2560 to 3584.</p> <p>Congrats to all the winners, and thanks to the hosts for an interesting competition.</p> <h1>Training</h1> <p>data: annotated images<br> train: video_id 0, 1 (final submission is all data)<br> val: video_id 2 (including background images)</p> <h1>Models</h1> <h2>yolox-s</h2> <p>training size: 2560~3584<br> augmentation: default parameter<br> inference size: 2560, 3072, 3584, 4096<br> tta: WBF conf 0.05<br> epoch: 12<br> cv: 0.76</p> <h2>Cascade-RCNN</h2> <p>backbone: convnext base<br> training size: 2048~2816<br> augmentation: RandomGamma, CLAHE, RandomBrightnessContrast, ShiftScaleRotate, Blur,<br> MotionBlur, GaussNoise (an albumentations sketch follows at the end of this post)<br> inference size: 2816<br> epoch: 4<br> cv: 0.78</p> <h1>ensemble</h1> <p>WBF conf 0.2<br> cv: 0.795<br> public: 0.611<br> private: 0.726</p> <p>Only bounding boxes were used. </p> <p>I used the official ConvNeXt implementation; the code is here. <a href="https://github.com/facebookresearch/ConvNeXt/tree/main/object_detection" target="_blank">https://github.com/facebookresearch/ConvNeXt/tree/main/object_detection</a></p>
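<p>A minimal albumentations sketch of the Cascade-RCNN augmentation list above; the probabilities and parameter ranges are guesses for illustration, not the author's settings.</p> <pre><code>import albumentations as A

train_aug = A.Compose(
    [
        A.RandomGamma(p=0.3),
        A.CLAHE(p=0.3),
        A.RandomBrightnessContrast(p=0.5),
        A.ShiftScaleRotate(shift_limit=0.0625, scale_limit=0.1,
                           rotate_limit=15, p=0.5),
        A.Blur(blur_limit=3, p=0.2),
        A.MotionBlur(blur_limit=3, p=0.2),
        A.GaussNoise(p=0.2),
    ],
    # keeps boxes consistent with the geometric transforms
    bbox_params=A.BboxParams(format="pascal_voc",
                             label_fields=["labels"],
                             min_visibility=0.3),
)

# usage: out = train_aug(image=img, bboxes=boxes_xyxy, labels=class_ids)
</code></pre>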
TensorFlow - Help Protect the Great Barrier Reef
9th place solution: Tiled Training + Seq-NMS
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow - Help Protect the Great Barrier Reef <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Much appreciation for organizing the competition, and thanks to all the people who committed to this project.</p> <p>I will share my solution with you.</p> <h1>Pipeline</h1> <h2>Preprocessing</h2> <ul> <li>Tiled Training<ul> <li>Split the original image into 320x320 sized tiles, with neighbouring tiles overlapping</li> <li>The stride is chosen as (sx, sy) = (240, 200), which gives us 5x3 tiles for each image (a small tiling sketch follows at the end of this post). This setting is according to [6].</li></ul></li> <li>Relabeling<ul> <li>I noticed a lot of inconsistent labels and missing labels in the train dataset, so I thought relabeling would improve the model's performance.</li> <li>I relabeled by combining the original labels and pseudo labels generated by a trained model:<ul> <li>NMS(original(with 0.501) + pseudo label)</li> <li>threshold(conf&gt;0.5)</li></ul></li></ul></li> </ul> <h2>Fold Split</h2> <ul> <li>Split the sequences into continuous chunks of 400 frames, then allocate chunks to folds so that the sum of COTS is almost the same in each fold. The allocating algorithm is a greedy algorithm similar to [4]. I know it's a leaky split, but I thought balancing the target counts was more important. That is because ensembling models with varying performance produces poor predictions when using WBF, according to the paper [5].</li> </ul> <h2>Data Loader</h2> <h3>Sampler</h3> <ul> <li>Class Balanced Sampling<ul> <li>I controlled the foreground/background ratio by implementing a class-balanced sampler inspired by [1].<ul> <li>pos/neg ratio = 1:0.3</li> <li>1 epoch = 1.3 * pos images</li></ul></li></ul></li> </ul> <h3>Augmentation</h3> <ul> <li>Heavy Distortion<ul> <li>Scale Shift: 0.3-1.7</li> <li>Mosaic, Mixup, perspective transform, color shift etc.</li></ul></li> <li>Perspective Transform with Ellipse Mask<ul> <li>In the perspective transform, I used the ellipse mask shared by <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>. It firmly reduced box loss when used with strong distortion.</li></ul></li> </ul> <h2>Training</h2> <ul> <li>input image size: 640px (x2 of original size)</li> <li>model: yolov5l[2]</li> <li>epochs: 45-60</li> <li>machine: Tesla T4, 16GB (EC2 instance <code>g4dn.xlarge</code>)</li> </ul> <h2>Inference</h2> <ul> <li>image size: 2560px (x2 of original size)<ul> <li>Note: even though it was trained with tiles, the YOLOv5 model accurately detects the object when given the entire image.</li></ul></li> <li>TTA<ul> <li>(scale, transform) = (0.7, None), (1.0, hflip), (1.3, None) [^1]</li></ul></li> <li>PostProcessing<ul> <li>Clear out low-confidence, near-edge boxes<ul> <li>Since I trained the model with tiles, it is much more likely to predict boxes around the edges compared to training with entire images. This causes a bad effect when using Seq-NMS, because the model continues to predict FPs around the edges after the target object has framed out.</li></ul></li> <li>Seq-NMS[3]<ul> <li>The original Seq-NMS algorithm is not designed for real-time processing, so I used the past 20 frames with an exponential decay factor of 0.9. 
The metric is <code>max</code>.</li></ul></li></ul></li> </ul> <p>[^1]: the original YOLOv5 TTA algorithm is kind of tricky, since the scales are set to <code>(1.0, 0.83, 0.67)</code>. The mean scale is smaller than the original size. In the official documentation, it says we should scale the input image by x1.3 when using TTA, but this process is slower. So I fixed the scales to <code>(0.7, 1.0, 1.3)</code>, so that the mean value is equal to 1.0.</p> <h1>Experiment Result</h1> <p>Figures shown in the table are <strong>private LB</strong> scores.</p> <table> <thead> <tr> <th>Id</th> <th>Label</th> <th>Split</th> <th>model</th> <th>ensemble</th> <th>TTA</th> <th>w/o Seq-NMS</th> <th>w Seq-NMS</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>original</td> <td>all data</td> <td>yolov5l</td> <td>none</td> <td>scale(0.7, 1, 1.3)</td> <td>0.713</td> <td>0.722</td> </tr> <tr> <td>1</td> <td>original + pseudo label</td> <td>all data</td> <td>yolov5l</td> <td>none</td> <td>scale(0.7, 1, 1.3)</td> <td>0.723</td> <td>0.724</td> </tr> <tr> <td>2</td> <td>original</td> <td>frame chunks 10 fold</td> <td>yolov5l</td> <td>WBF(fold5-9)</td> <td>None</td> <td>0.726</td> <td><strong>0.737</strong> (late sub.)[7]</td> </tr> </tbody> </table> <h1>Discussion</h1> <h2>Seq-NMS</h2> <p>Seq-NMS gives me a gain of <strong>2-5%</strong> when inferring on the most difficult validation sets (many bubbles and blurry images). So I showed this algorithm is robust to background noise.</p> <p>On the contrary, the effect of Seq-NMS is not the same on the LB scores. Seq-NMS gains <strong>1.1%</strong> when used with model 2, which ties the score of the 2nd team. However, it gives only a 0.1% gain when used with model 1.</p> <h1>Reference</h1> <ul> <li>[1] <a href="https://github.com/ufoym/imbalanced-dataset-sampler" target="_blank">https://github.com/ufoym/imbalanced-dataset-sampler</a></li> <li>[2] <a href="https://github.com/ultralytics/yolov5" target="_blank">https://github.com/ultralytics/yolov5</a></li> <li>[3] <a href="https://arxiv.org/abs/1602.08465" target="_blank">https://arxiv.org/abs/1602.08465</a></li> <li>[4] <a href="https://www.kaggle.com/tatamikenn/balanced-fold-splitting-algorithm" target="_blank">https://www.kaggle.com/tatamikenn/balanced-fold-splitting-algorithm</a></li> <li>[5] <a href="https://arxiv.org/abs/1910.13302" target="_blank">https://arxiv.org/abs/1910.13302</a></li> <li>[6] <a href="https://openaccess.thecvf.com/content_CVPRW_2019/html/UAVision/Unel_The_Power_of_Tiling_for_Small_Object_Detection_CVPRW_2019_paper.html" target="_blank">https://openaccess.thecvf.com/content_CVPRW_2019/html/UAVision/Unel_The_Power_of_Tiling_for_Small_Object_Detection_CVPRW_2019_paper.html</a></li> <li>[7] <a href="https://www.kaggle.com/code/tatamikenn/infer-l-640-5x3-heavyx2-wbf-fold-5-9" target="_blank">https://www.kaggle.com/code/tatamikenn/infer-l-640-5x3-heavyx2-wbf-fold-5-9</a></li> </ul> <h1>Update Note</h1> <ul> <li>2022/02/16: updated experiment result: added the result without relabeled data to show the effect of relabeling.</li> </ul> <p>For me, I just customized some open source implementations, so I think it's not as skillful as you thought. But thanks for saying so.</p> <p>I also published the Seq-NMS code.<br> <a href="https://www.kaggle.com/tatamikenn/example-code-of-seq-nms" target="_blank">https://www.kaggle.com/tatamikenn/example-code-of-seq-nms</a></p> <p>Of course this trick is only applicable if the target shape is close to an ellipse.<br> If the target shapes are close to rectangles (e.g.
bus, car, etc.), the ellipse mask will generate over-tightened boxes after rotation.</p> <p>Like the picture below:<br> <a href="https://ibb.co/Z6W2x8x"><img src="https://i.ibb.co/kgQmX8X/Screen-Shot-2022-02-27-at-23-30-11.png" alt="Screen-Shot-2022-02-27-at-23-30-11"></a></p> <p>For a quick explanation, most of the target (i.e. COTS) shapes are approximately ellipses inscribed in the boxes.</p> <p>YOLOv5's default mask is a rectangle shape.<br> This loosens the boxes' tightness under heavy geometric distortions (e.g. rotations).<br> On the contrary, with an ellipse-shaped mask, the tightness of the boxes stays almost the same even under heavy distortions.</p> <p>This is the link to his/her source code:<br> <a href="https://www.kaggle.com/c/tensorflow-great-barrier-reef/discussion/300405" target="_blank">https://www.kaggle.com/c/tensorflow-great-barrier-reef/discussion/300405</a></p>
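<p>A small sketch of the 5x3 tiling scheme described above: 320x320 tiles with a (240, 200) stride over the 1280x720 frames, so neighbouring tiles overlap. Clipping the box labels to each tile is omitted for brevity.</p> <pre><code>def make_tiles(img_w=1280, img_h=720, tile=320, stride_x=240, stride_y=200):
    """Return (x1, y1, x2, y2) pixel coordinates of overlapping tiles."""
    tiles = []
    for y0 in range(0, img_h - tile + 1, stride_y):       # 0, 200, 400
        for x0 in range(0, img_w - tile + 1, stride_x):   # 0, 240, ..., 960
            tiles.append((x0, y0, x0 + tile, y0 + tile))
    return tiles

tiles = make_tiles()
assert len(tiles) == 5 * 3   # 5 columns x 3 rows, as in the write-up
</code></pre>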
TensorFlow - Help Protect the Great Barrier Reef
Lost my silver medal and my first lesson on trusting public LB 😥
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow - Help Protect the Great Barrier Reef <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><img src="https://drive.google.com/file/d/1LSvpkQHzLCNZRbFoMmS_MvkOeXpYlcY-/view?usp=sharing" alt="img"></p> <p>I'm a newbie to Kaggle. I recently tried a lot of submits in order to get my first medal, but it turned out my method was wrong. </p> <ul> <li>I chose my best LB instead of my CV, because I only tested my model at training time.</li> <li>I didn't test my ensembles to get a CV score.<br> Result: I got kicked out of the medal zone even though I had some kernels go up to the silver zone.<br> After high-scoring kernels got published, I went on a roller coaster of emotions because of the rapid changes in the LB. Now I've learned my lesson about building a proper CV for validation.</li> </ul>