Title of Competition | Title of Writeup | User | Writeup
---|---|---|---|
Right Whale Recognition | 1st place solution - deepsense.io | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Right Whale Recognition
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>87%</p>
<p>Instead of producing a real number in [0; 256] you can divide this interval into, say, 20 different bins. The first one would be [0; 256 / 20), the second one [256 / 20, 2 * 256 / 20), and so on. Having these disjoint intervals, you can treat the problem as multiclass classification - instead of predicting the coordinate accurately, you just determine the range that it falls into (i.e. one of the bins). When using a neural network, adding a softmax layer and using cross-entropy loss would be the way to go.</p>
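<p>A minimal sketch of this discretization, assuming a coordinate in [0, 256) and 20 bins (the bin edges and the PyTorch loss below are illustrative, not the team's actual code):</p>
<pre><code>import numpy as np
import torch
import torch.nn as nn

N_BINS, MAX_COORD = 20, 256

def coord_to_bin(coord):
    # Map a coordinate in [0, 256) to one of 20 disjoint bins
    return min(int(coord // (MAX_COORD / N_BINS)), N_BINS - 1)

# Network head outputs N_BINS logits; CrossEntropyLoss applies softmax internally
logits = torch.randn(8, N_BINS)                                   # fake batch of predictions
targets = torch.tensor([coord_to_bin(c) for c in np.random.uniform(0, 256, 8)])
loss = nn.CrossEntropyLoss()(logits, targets)
</code></pre>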
<p>We're working on it ;)</p>
<p>Here's a blog post outlining our approach: <a href="http://deepsense.io/deep-learning-right-whale-recognition-kaggle/">http://deepsense.io/deep-learning-right-whale-recognition-kaggle/</a>.</p>
<p>Source code (and some trained models) can be found at <a href="https://www.dropbox.com/s/rohrc1btslxwxzr/deepsense-whales.zip?dl=1">https://www.dropbox.com/s/rohrc1btslxwxzr/deepsense-whales.zip?dl=1</a>.</p> |
Right Whale Recognition | 2nd Place Solution Report | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Right Whale Recognition
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi everyone,</p>
<p>Here's my blog post describing my approach: <a href="http://felixlaumon.github.io/2015/01/08/kaggle-right-whale.html">http://felixlaumon.github.io/2015/01/08/kaggle-right-whale.html</a></p>
<p>Source code can be found at <a href="https://github.com/felixlaumon/kaggle-right-whale">https://github.com/felixlaumon/kaggle-right-whale</a></p>
<p>Felix</p> |
Riiid Answer Correctness Prediction | #8 solution: Ensemble of 15 same NN models | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi all,<br>
I learned a lot from others' solutions! In this post, I would like to share some insights from my solution.</p>
<p>In summary, my best single NN model achieved a 0.813/0.815 public/private score; by ensembling 5 folds with 3 snapshots per fold, the final 15 NN models achieved 0.814/0.816.</p>
<p>With 15 NN models, online inference took <strong>less than 4 hours</strong>, so this pipeline could ensemble at least 30 models.</p>
<h2>Dataset split</h2>
<p>I split the train and valid set by several steps:</p>
<ol>
<li>Calculate all unique users in dataset.</li>
<li>Put 5% of users entirely in the valid set and 45% of users entirely in the train set.</li>
<li>For the remaining 50% of users, randomly split each user's data into train and valid by time.</li>
</ol>
<p>Thus we have users that appear only in the train set or only in the valid set, as well as many users who appear in both. This split gives less than a 0.001 score difference compared with the LB.</p>
<p>By changing the random seed, we can get different folds.</p>
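<p>A minimal sketch of this split, assuming a pandas DataFrame with <code>user_id</code> and <code>timestamp</code> columns sorted by time within each user; the 80/20 per-user time split is an assumed ratio, since the write-up does not state it:</p>
<pre><code>import numpy as np

def split_users(train, seed=42):
    rng = np.random.RandomState(seed)
    users = rng.permutation(train["user_id"].unique())
    n = len(users)
    valid_only = set(users[: int(0.05 * n)])                 # 5% of users fully in valid
    train_only = set(users[int(0.05 * n): int(0.50 * n)])    # 45% of users fully in train
    is_valid = train["user_id"].isin(valid_only).to_numpy()
    is_train = train["user_id"].isin(train_only).to_numpy()
    both = ~(is_valid | is_train)                            # remaining 50% appear in both
    # For those users, the last part of their history (by time) goes to valid
    rank = train.groupby("user_id").cumcount()
    size = train.groupby("user_id")["user_id"].transform("size")
    late = (rank >= 0.8 * size).to_numpy()
    valid_mask = is_valid | (both & late)
    return train[~valid_mask], train[valid_mask]
</code></pre>
<p>Changing <code>seed</code> reshuffles which users land in each group, giving the different folds mentioned above.</p>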
<h2>Feature engineering</h2>
<p>Since there is plenty of detailed FE in other posts, I will just share some important points.</p>
<h3>Basic features</h3>
<ol>
<li><p>Evaluate user ability</p>
<ul>
<li>We can evaluate a user's ability from their historical actions, including correctness, elapsed time, lag time, and so on.</li>
<li>These features can be calculated not only at the content level, but also over the same part, same tags, same content, and so on.</li>
<li>These features can also be extended over time; for example, we can make a feature for the user's correctness in the last 60 seconds.</li></ul></li>
<li><p>Evaluate content difficulty</p>
<ul>
<li>We can evaluate content difficulty by calculating its global accuracy, standard deviation, average elapsed time, and so on.</li></ul></li>
<li><p>Evaluate User x Content features</p>
<p>Even if a content is difficult, the user may still be skilled enough to solve it correctly. Thus we also need to describe how the user is likely to perform on this particular content.</p>
<ul>
<li><p>user_acc_diff: If a content has a low global accuracy but the user answered it correctly, the user is probably above the average of all users. We can use <code>logloss(content_global_average_acc, user_answer)</code> to evaluate the difference (see the sketch below).</p></li>
<li><p>user_elapsed_diff: like user_acc_diff, we can also evaluate the user's elapsed-time difference over their historical contents.</p></li></ul></li>
</ol>
<p>We can dig out many features from the above three fields; for example, the user's average lag time over their history (all contents / same-part contents / same-tag contents) is also a useful family of features.</p>
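<p>A minimal sketch of the user_acc_diff idea, with column names assumed for illustration and the content accuracy computed on the full data for brevity (in practice it would be computed without future leakage):</p>
<pre><code>import numpy as np

def add_user_acc_diff(df, eps=1e-6):
    # Global accuracy per content
    content_acc = df.groupby("content_id")["answered_correctly"].transform("mean")
    p = content_acc.clip(eps, 1 - eps)
    y = df["answered_correctly"]
    # Per-row log loss between the content's global accuracy and the user's answer:
    # large when the user beats a hard question (or misses an easy one)
    row_logloss = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    # Running mean over each user's past rows only (shifted to exclude the current row)
    grp = row_logloss.groupby(df["user_id"], group_keys=False)
    df["user_acc_diff"] = grp.apply(lambda s: s.shift().expanding().mean())
    return df
</code></pre>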
<h3>Other features/tricks</h3>
<ol>
<li>Time related features: last_content_timestamp_diff, last_lag_time, and their statistical summaries over the history.</li>
<li>Abnormal users: If a user answers every content in less than 4 seconds and all their chosen answers are the same (e.g., C), then whenever the correct answer of the next content is C, we can expect them to answer it correctly.</li>
<li>Learned lectures for a specific content: If there is a content-lecture-content pattern in the user's history and the two contents are the same, the user probably studied a specific lecture for this content, so the probability of answering it correctly the second time is high.</li>
<li>Wrong answer ratio: There might be a pattern like "select C by default for hard contents". Calculating the distribution of the user's choices on their incorrectly answered contents tells us whether the user could luckily guess the current content even without knowing the correct answer.</li>
</ol>
<p>Features such as the current content id, current timestamp and current part are also used. In the end I have 120 features, which reach public 0.806 with a single LGB model on a single fold.</p>
<p>P.S. Marking content id as a categorical feature in LightGBM boosted my score by about 0.003.</p>
<p>There are also several useful hints for improving speed and saving memory:</p>
<ol>
<li>Do not use pandas to calculate features; move the computation to numpy or plain Python.</li>
<li>Use a class to store user information, and save each user's information in an individual file.</li>
<li>In the inference phase, we only need to read information for users who appear in the test set; the total number of users in the test set is less than 10,000. This is the key to reducing memory. If memory is still not enough, an LRU cache can be used to evict unused users (see the sketch after this list).</li>
<li>In my experiments, storing information as numpy arrays cost more disk space than plain Python variables.</li>
</ol>
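<p>A minimal sketch of the per-user state storage with an LRU cache, assuming one pickle file per user (the paths and the <code>UserState</code> container are illustrative, not the author's actual code):</p>
<pre><code>import pickle
from functools import lru_cache
from pathlib import Path

STATE_DIR = Path("user_states")          # one pickle file per user_id

class UserState:
    # Illustrative container for a user's running statistics / history
    def __init__(self):
        self.answer_count = 0
        self.correct_sum = 0
        self.history = []

@lru_cache(maxsize=100_000)              # evicts least-recently-used users
def load_user(user_id: int) -> UserState:
    path = STATE_DIR / f"{user_id}.pkl"
    if path.exists():
        with open(path, "rb") as f:
            return pickle.load(f)
    return UserState()                   # new (unseen) user
</code></pre>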
<h2>NN</h2>
<p>Since I noticed very late that the key to the top was an NN model, I did not have much time to analyze NN models, let alone design specific features or structures for them. To save time, I used the LightGBM features as the NN input (a time axis is added and the sequence length is 128).</p>
<h4>Robust Standard Normalization</h4>
<p>There are many outliers in the LightGBM features, so plain standard normalization hardly gives the desired results. I used a robust standard normalization to normalize all the features:</p>
<pre><code>import numpy as np

def robust_normalization(column):
    # Use the median as a robust center estimate
    cur_mean = np.nanmedian(column)
    # Clip range: the 2.5th and 97.5th percentiles
    cur_qmin, cur_qmax = np.nanpercentile(column, [2.5, 97.5])
    # Std computed only on values inside the clip range, so outliers do not inflate it
    cur_std = np.nanstd(column[(column >= cur_qmin) & (column <= cur_qmax)])
    column = np.clip(column, a_min=cur_qmin, a_max=cur_qmax)
    column = (column - cur_mean) / cur_std
    return column
</code></pre>
<h4>Models</h4>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1458993%2F4ae543902ffae52e87dfaf0484477c7d%2FWeChat%20Image_20210111165953.png?generation=1610355676642083&alt=media" alt=""></p>
<p>My NN model is very simple: just a transformer encoder with an FC classifier. The encoder has 4 transformer layers and an embedding dim of 128, with no other modifications. The model file is only 7 MB, which is why I can ensemble many models in the inference phase.</p>
<p>The model achieves 0.813/0.815 as a single model on a single fold, and 0.814/0.816 with a 5-fold ensemble. Since there was plenty of inference time left, I used a 3-snapshot ensemble on each fold, which boosted the score a bit further. </p>
<p>My LGB and NN use the same feature group, so I only got a 0.0002 boost from the LGB once my NN scored higher than 0.810; that is why I dropped the LGB model in the later submissions.</p> |
Riiid Answer Correctness Prediction | 11th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Very little, maybe ~0.001, but it would have cost places on the leaderboard. Intuitively it made a lot of sense to me, as I was confused about how the model got that information through the LSTM… when it does a forward pass on the LSTM, it does not know what the question asked at the end is (it is only uni-directional). </p>
<p>Thanks to my teammate <a href="https://www.kaggle.com/jusco11" target="_blank">Akihiko</a> for competing with me on this, there was great learning from the community for both of us. And thanks to our hosts for a wonderful challenge. Congrats all who competed. </p>
<p>The solution was heavily inspired by Bestfitting's <a href="https://www.kaggle.com/c/talkingdata-adtracking-fraud-detection/discussion/56262" target="_blank">TalkingData solution</a>, a competition which we noticed a lot of the people here also took part in :) </p>
<p>We used around 35 features including the raw ones, in a 2-layer LSTM model, each layer uni-directional, trained on sequence length 256 and inferred on 512. <br>
Batch size for training was 2048, with a hidden layer size of 512. <br>
Example below. </p>
<p>First layer,</p>
<ul>
<li>Used features below in <code>embcatq</code>, no continuous features. </li>
<li>No label in first layer, like in the SAINT paper.</li>
<li>Added the difference of some of the embedding to the final embedding. This gives the model info on how similar each historical question was to the question in the sample. </li>
</ul>
<p>Second layer, </p>
<ul>
<li>Outputs of first layer and included continuous features. </li>
<li>Added embedding for interaction of question and chosen answer. </li>
<li>Continuous features were generated using <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a>'s great notebook - this is why I would like to be able to give multiple upvotes.</li>
<li>One important feature was the answer ratio: the percentage of students who picked the same answer as the chosen answer (see the sketch after this list). </li>
</ul>
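<p>A minimal sketch of such an answer-ratio feature, assuming columns named <code>content_id</code> and <code>user_answer</code> and computing the ratios on the full data for simplicity (in practice they would be built without future leakage):</p>
<pre><code>import pandas as pd

def add_answer_ratio(df):
    # Share of all students who picked each (question, answer) option
    counts = df.groupby(["content_id", "user_answer"]).size()
    totals = df.groupby("content_id").size()
    ratio = counts.div(totals, level="content_id").rename("answer_ratio").reset_index()
    # Attach to each row: how popular was the answer this student chose?
    return df.merge(ratio, on=["content_id", "user_answer"], how="left")
</code></pre>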
<p>How to handle the histories in memory was a problem, but there was plenty of space for them on the GPU, so we loaded them first to numpy and then to a torch tensor on the GPU (the history features took around 6GB); the other objects were then loaded to RAM. <br>
Attention did not work for us - it looked promising on validation though - we should have persisted with it. We just took the final hidden state from the LSTM as output. <br>
The model below got ~0.811 public; by making some changes to the architecture and bagging four models, this lifted to 0.813 public, 0.816 private. </p>
<pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

# Note: SpatialDropout and args (hyperparameters) are defined elsewhere in the original code.

class LearnNet(nn.Module):
    def __init__(self, modcols, contcols, padvals, extracols,
                 dropout=0.2, hidden=args.hidden):
        super(LearnNet, self).__init__()
        self.dropout = nn.Dropout(dropout)
        self.modcols = modcols + extracols
        self.contcols = contcols
        self.emb_content_id = nn.Embedding(13526, 32)
        self.emb_content_id_prior = nn.Embedding(13526 * 3, 32)
        self.emb_bundle_id = nn.Embedding(13526, 32)
        self.emb_part = nn.Embedding(9, 4)
        self.emb_tag = nn.Embedding(190, 8)
        self.emb_lpart = nn.Embedding(9, 4)
        self.emb_prior = nn.Embedding(3, 2)
        self.emb_ltag = nn.Embedding(190, 16)
        self.emb_lag_time = nn.Embedding(301, 16)
        self.emb_elapsed_time = nn.Embedding(301, 16)
        self.emb_cont_user_answer = nn.Embedding(13526 * 4, 5)
        self.tag_idx = torch.tensor(['tag' in i for i in self.modcols])
        self.cont_wts = nn.Parameter(torch.ones(len(self.contcols)))
        self.cont_wts.requires_grad = True
        self.cont_idx = [self.modcols.index(c) for c in self.contcols]
        self.embedding_dropout = SpatialDropout(dropout)
        self.diffsize = self.emb_content_id.embedding_dim + self.emb_part.embedding_dim + \
            self.emb_bundle_id.embedding_dim + self.emb_tag.embedding_dim * 7
        # First LSTM takes question embeddings only (no continuous features -- see forward)
        IN_UNITSQ = self.diffsize * 2 + \
            self.emb_lpart.embedding_dim + self.emb_ltag.embedding_dim + \
            self.emb_prior.embedding_dim + self.emb_content_id_prior.embedding_dim
        IN_UNITSQA = (self.emb_lag_time.embedding_dim + self.emb_elapsed_time.embedding_dim +
                      self.emb_cont_user_answer.embedding_dim) + len(self.contcols)
        LSTM_UNITS = hidden
        self.seqnet1 = nn.LSTM(IN_UNITSQ, LSTM_UNITS, bidirectional=False, batch_first=True)
        self.seqnet2 = nn.LSTM(IN_UNITSQA + LSTM_UNITS, LSTM_UNITS, bidirectional=False, batch_first=True)
        self.linear1 = nn.Linear(LSTM_UNITS * 2 + len(self.contcols), LSTM_UNITS // 2)
        self.bn0 = nn.BatchNorm1d(num_features=len(self.contcols))
        self.bn1 = nn.BatchNorm1d(num_features=LSTM_UNITS * 2 + len(self.contcols))
        self.bn2 = nn.BatchNorm1d(num_features=LSTM_UNITS // 2)
        self.linear_out = nn.Linear(LSTM_UNITS // 2, 1)

    def forward(self, x, m=None):
        ## Continuous
        contmat = x[:, :, self.cont_idx]
        contmat = self.bn0(contmat.permute(0, 2, 1)).permute(0, 2, 1)
        contmat = contmat * self.cont_wts
        content_id_prior = x[:, :, self.modcols.index('content_id')] * 3 + \
            x[:, :, self.modcols.index('prior_question_had_explanation')]
        embcatq = torch.cat([
            self.emb_content_id(x[:, :, self.modcols.index('content_id')].long()),
            self.emb_part(x[:, :, self.modcols.index('part')].long()),
            self.emb_bundle_id(x[:, :, self.modcols.index('bundle_id')].long()),
            self.emb_tag(x[:, :, self.tag_idx].long()).view(x.shape[0], x.shape[1], -1),
            self.emb_prior(x[:, :, self.modcols.index('prior_question_had_explanation')].long()),
            self.emb_lpart(x[:, :, self.modcols.index('lecture_part')].long()),
            self.emb_ltag(x[:, :, self.modcols.index('lecture_tag')].long()),
            self.emb_content_id_prior(content_id_prior.long()),
        ], 2)
        embcatqdiff = embcatq[:, :, :self.diffsize] - embcatq[:, -1, :self.diffsize].unsqueeze(1)
        # Categorical embeddings
        embcatqa = torch.cat([
            self.emb_cont_user_answer(x[:, :, self.modcols.index('content_user_answer')].long()),
            self.emb_lag_time(x[:, :, self.modcols.index('lag_time_cat')].long()),
            self.emb_elapsed_time(x[:, :, self.modcols.index('elapsed_time_cat')].long())
        ], 2)
        # embcatqadiff = embcatqa - embcatqa[:, -1].unsqueeze(1)
        embcatq = self.embedding_dropout(embcatq)
        embcatqa = self.embedding_dropout(embcatqa)
        embcatqdiff = self.embedding_dropout(embcatqdiff)
        # Weighted sum of tags - hopefully good weights are learnt
        xinpq = torch.cat([embcatq, embcatqdiff], 2)
        hiddenq, _ = self.seqnet1(xinpq)
        xinpqa = torch.cat([embcatqa, contmat, hiddenq], 2)
        hiddenqa, _ = self.seqnet2(xinpqa)
        # Take last hidden unit
        hidden = torch.cat([hiddenqa[:, -1, :], hiddenq[:, -1, :], contmat[:, -1]], 1)
        hidden = self.dropout(self.bn1(hidden))
        hidden = F.relu(self.linear1(hidden))
        hidden = self.dropout(self.bn2(hidden))
        out = self.linear_out(hidden).flatten()
        return out
</code></pre> |
Riiid Answer Correctness Prediction | 126th place solution - LGBM with B-IRT features (LB 0.795) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h4>Overview:</h4>
<p>My solution is a single LightGBM model with a total of 48 features.</p>
<h4>Best Features:</h4>
<p>Unfortunately, I didn't keep track of exactly how much each feature affected my model's CV, but from empirical evidence I noticed the best features to be those based on:</p>
<ul>
<li>Elo rating system</li>
<li>boolean determining whether or not a student has already seen a question</li>
<li>previous timestamps</li>
<li>Question difficulty based on the parameters of a Bayesian IRT model.</li>
</ul>
<p>A total list of the features is shown below (in no particular order):</p>
<pre><code>analysis_columns = [
'answered_correctly',
'timestamp',
'prior_question_elapsed_time',
'prior_question_had_explanation',
'task_container_id',
'bundle_id',
'has_seen_question',
'user_correct_ratio',
'content_correct_ratio',
'correct_streak',
'incorrect_streak',
'user_total_question_count',
'part_correct_ratio',
'time_since_last_timestamp',
'time_since_last_lecture',
'tag_correct_ratio',
'previous_question_same_bundle',
'previous_correct_and_same_bundle',
'previous_lecture_type',
'previous_lecture_id',
'user_part_correct_ratio',
'user_part_avg_elapsed_time',
'user_part_timestamp_diff',
'user_tag_correct_ratio',
'user_tag_avg_elapsed_time',
'user_tag_timestamp_diff',
'question_history',
'lecture_count',
'questions_since_last_lec',
'hardest_question_correct_ratio',
'easiest_question_correct_ratio',
'smallest_pqet',
'largest_pqet',
'user_rating',
'content_rating',
'highest_elo',
'lowest_elo',
'elo_user_content',
'elo_content_user',
'tag',
'irt_difficulty',
'irt_discrimination',
'timestamp_n1',
'timestamp_n2',
'timestamp_n3',
'tag_difficulty',
'tag_discrimination',
'pqhe_true_correct_ratio',
'pqhe_false_correct_ratio',
]
</code></pre>
<p>Feature importance graph (Note: the names somewhat vary from above):<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3785972%2F3078d4cb7724c798c7091f7dbb48f66e%2F.png?generation=1610411000057392&alt=media" alt=""></p>
<h4>Things I tried that didn't work:</h4>
<p><strong>Factorization Machine:</strong><br>
As others have stated, the goal of this task can be, to some extent, compared to that of a recommendation system where the recommendation score can be interpreted as a student's ability to answer some question. Although not without criticism, Riiid! has also used or referenced Matrix Factorization in their own work (see <a href="https://educationaldatamining.org/files/conferences/EDM2020/papers/paper_91.pdf" target="_blank">[1]</a>, <a href="https://arxiv.org/pdf/2005.05021.pdf" target="_blank">[2]</a>, <a href="https://arxiv.org/pdf/2012.05031.pdf" target="_blank">[3]</a>).</p>
<p>The <a href="https://ieeexplore.ieee.org/document/5694074" target="_blank">Factorization Machine</a> algorithm is quite simple and is as follows:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3785972%2F137e8b1e4d9cb771c1c526f665801e65%2Fsagemaker-factorization-1.png?generation=1610412377437882&alt=media" alt=""></p>
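<p>For reference, a minimal numpy sketch of the FM prediction shown above, using the standard O(kn) reformulation of the pairwise term (illustrative code, not the implementation attempted in the competition):</p>
<pre><code>import numpy as np

def fm_predict(x, w0, w, V):
    # x: (n,) features, w0: bias, w: (n,) linear weights, V: (n, k) latent factors
    linear = w0 + w @ x
    # Pairwise term: 0.5 * sum_f [ (sum_i V_if x_i)^2 - sum_i V_if^2 x_i^2 ]
    s = V.T @ x                  # (k,)
    s2 = (V ** 2).T @ (x ** 2)   # (k,)
    return linear + 0.5 * np.sum(s ** 2 - s2)
</code></pre>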
<p>Unfortunately, I was unable to get a successful submission using existing Factorization Machine tools, as well as my own implementation. Instead, I tried incorporating the latent factors from the <em>V</em> interaction matrix (shown in the formula above), but they didn't have any noticeable positive effect on the outcome of the model.</p>
<p><strong>Bayesian Knowledge Tracing (BKT):</strong><br>
Bayesian Knowledge Tracing is the precursor to Deep Knowledge Tracing. The particular model I was interested in was the individualized variation that includes contextualization of guessing and slipping estimates. My goal was to train a model offline and use the <em>guess</em> and <em>slip</em> parameters in the LGBM model. Unfortunately, these also showed very little positive effect on the model's overall CV. This may be due to the poor performance of the trained BKT model as well.</p>
<p><strong>Neural Halflife Regression:</strong><br>
Using Tensorflow, I also intended to implement forgetting of a question's tag into my model based on Duolingo's <a href="https://research.duolingo.com/papers/settles.acl16.pdf" target="_blank">Halflife regression</a> model, inspired by <a href="https://link.springer.com/chapter/10.1007/978-3-030-52240-7_65" target="_blank">this</a> paper.</p>
<p>The formula used is also quite simple and is as follows:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3785972%2Fb8e08c0ede4d0a3244dad4a882ea8cba%2F2021-01-12%20100012.png?generation=1610413465690518&alt=media" alt=""><br>
, where <em>h</em> and <em>C</em> are encoders representing a learner's strength and a question's complexity, respectively.</p>
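<p>For intuition, a minimal sketch of the core half-life regression recall model from the Duolingo paper; the neural variant replaces the hand-crafted linear model with the <em>h</em> and <em>C</em> encoders (illustrative code, not the competition implementation):</p>
<pre><code>import numpy as np

def recall_probability(delta_days, theta, features):
    # Half-life regression: estimated half-life h = 2^(theta . x),
    # probability of recall after a gap of delta days: p = 2^(-delta / h)
    halflife = 2.0 ** np.dot(theta, features)
    return 2.0 ** (-delta_days / halflife)
</code></pre>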
<p>However, in the short amount of time I had left, I was unable to ensemble the model with the LGBM model to test whether or not it may have had a positive effect on the outcome of the model.</p>
<h4>Conclusion</h4>
<p>Overall this was a great experience, I learned a lot and had a lot of fun. I hope to get more comfortable with TensorFlow and PyTorch so I can have a more competitive edge in future competitions.</p>
<p>All of the code I used for this project I have uploaded to the following github repository: <a href="https://github.com/jwnz/Riiid-AIEd-Challenge-2020" target="_blank">https://github.com/jwnz/Riiid-AIEd-Challenge-2020</a></p> |
Riiid Answer Correctness Prediction | 12th place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>NN details will be posted later by <a href="https://www.kaggle.com/sakami" target="_blank">@sakami</a> </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F843848%2F94885ef1fcce200915129d79e234b4ea%2Friiid-solution.png?generation=1610081122634658&alt=media" alt="riiid-stack"></p>
<p>First of all, I want to thank both the hosts(kaggle and riiid) for hosting this competition.<br>
It was a really challenging competition in many ways, and really tested my limits. <br>
The competition was also very clean (albeit the privateLB bug unrelated to this competition). A very clean and logical train/test split/correlation is what makes this competition shine.</p>
<hr>
<h1>Overall solution:</h1>
<p>For our single models, the NN scored around 0.806-0.810 (private LB) and the LGBM 0.804.<br>
We stacked these single models with NN and LGBM to achieve a score of 0.816.<br>
I suspect our single models were mediocre in terms of score, but the stacking got us to gold.</p>
<p>Regarding our 5place drop on the privateLB, we kind of expected it. We knew that one of our models(SAINT) had a bad private score because of the private score leak, but we couldn't quite fix it. Even after the competition has ended, we still have no clue as to why the model suffers in the privateLB.</p>
<hr>
<h1>Validation:</h1>
<p>We used tito-split throughout most of the competition period.<br>
We mostly experienced good Valid-LB correlation, which I believe is the same for others. The single models used a 95M/5M split, and the stack used the 5M for training/validation.<br>
We also tried user-split as our final submission for diversity, but this had no impact on our final score.</p>
<hr>
<h1>LGBM details:</h1>
<h4>Resource management</h4>
<p>(When generating features for submission)<br>
I used h5py for memory-hungry features like (user x content features, user x tag features).<br>
The h5py file uses user_id as the key, and reads the whole user-feature block when hitting a new user while predicting.<br>
This creates a good balance between memory and runtime. Since there are not many unique users in the test set (compared to train), the time spent on file IO is not too long, and huge amounts of memory are saved.<br>
All the other features were pickled and loaded to memory in the beginning.</p>
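<p>A minimal sketch of this pattern, assuming the per-user feature histories are already available in a dict (<code>user_feature_dict</code>, the file name, and the array layout are illustrative):</p>
<pre><code>import h5py
import numpy as np

# Offline: write each user's feature history as its own dataset, keyed by user_id
with h5py.File("user_features.h5", "w") as f:
    for user_id, feats in user_feature_dict.items():   # feats: (n_rows, n_feats) array
        f.create_dataset(str(user_id), data=np.asarray(feats, dtype=np.float32))

# Online: open once, then lazily pull a user's whole block the first time they appear
h5 = h5py.File("user_features.h5", "r")
user_cache = {}

def get_user_features(user_id):
    if user_id not in user_cache:
        key = str(user_id)
        user_cache[user_id] = h5[key][()] if key in h5 else None
    return user_cache[user_id]
</code></pre>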
<h4>Features:</h4>
<p>Question features:<br>
These were made by the full training-set(100M). <br>
Obviously this is leaky, but since every single question has >1000 counts, the leak is tolerable.<br>
Example features: mean question accuracy, mean rating of the users who did not answer the question correctly.</p>
<p>User features:<br>
How good the user is, especially related to parts, tags, contents.<br>
Example features: user x contents mean question ac</p>
<p>Timestamp features: <br>
This was kind of a surprise to me. Not only was the timestamp diff between t and t-1 a good feature; the diffs between t-1 and t-2, up to t-9 and t-10, also improved my model.<br>
Example features: user timestamp diff from the last lecture</p>
<p>Rating features:<br>
Elo features from this <a href="https://www.kaggle.com/stevemju/riiid-simple-elo-rating" target="_blank">notebook</a> (thank you very much). Trueskill did not improve my model.<br>
With only mean accuracy, the model cannot determine whether the user is taking on hard questions or easy ones, so rating both questions and users makes a lot of sense (see the sketch below).</p>
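<p>A minimal sketch of the Elo-style update behind such rating features (the K-factor and the 400 scale are the usual chess defaults, used here purely for illustration):</p>
<pre><code>def elo_update(user_rating, question_rating, answered_correctly, k=32.0):
    # Expected probability that the user "beats" the question
    expected = 1.0 / (1.0 + 10.0 ** ((question_rating - user_rating) / 400.0))
    user_rating += k * (answered_correctly - expected)
    question_rating -= k * (answered_correctly - expected)
    return user_rating, question_rating, expected   # `expected` itself is a useful feature
</code></pre>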
<p>SVD features:<br>
LGBM is bad at exploiting high-cardinality categorical columns (compared to an NN), so I took the question embedding layer of the NN and used its 20-dimensional SVD as features.</p>
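<p>A minimal sketch of this idea, assuming a trained PyTorch embedding layer for questions (the attribute name <code>emb_content_id</code> and the <code>features</code> table are illustrative):</p>
<pre><code>import pandas as pd
from sklearn.decomposition import TruncatedSVD

# Question embedding matrix from the trained NN: (n_questions, emb_dim)
emb = model.emb_content_id.weight.detach().cpu().numpy()

svd = TruncatedSVD(n_components=20, random_state=0)
emb_svd = svd.fit_transform(emb)                      # (n_questions, 20)

q_svd = pd.DataFrame(emb_svd, columns=[f"q_svd_{i}" for i in range(20)])
q_svd["content_id"] = range(len(q_svd))

# Merge into the LightGBM feature table on content_id
features = features.merge(q_svd, on="content_id", how="left")
</code></pre>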
<h4>Feature selection:</h4>
<p>I used about 70-80 features for my model.<br>
Since we were stacking a lot of models, I didn't want to use too much resource with my LGBM(runtime, memory), so I picked features which had a lot of impact, and made my model contribute to the stacking-model. There were a lot of features which didn't improve my model much, and all of them were thrown away. </p>
<h4>Hyper-param:</h4>
<p>I didn't change this too much, but increasing num_leaves from 127 to 1023 improved my score by 0.003, which was a surprise. This happened after I added lots of timestamp features, so there might be a very complicated interaction underlying the timestamps.</p>
<h4>Machine:</h4>
<p>I used GCP, 64coreCPU 416GBmemory. Even with this monster machine, I ran out of memory a lot when generating the full features (which I avoided by processing in chunks).<br>
I believe this instance cost me around $1000 over the competition (this is all paid by my company, and we are hiring btw). A lot of this cost happened because I was lazy (never used a preemptible instance), and I believe you could still be competitive with a $100 budget, even on this HUGE data, so don't be discouraged by the cost if you are a Kaggle beginner (I would recommend a competition with smaller data though :) )</p>
<hr>
<h1>Stacking details:</h1>
<p>Stacking is always difficult. It often leads to overfitting.<br>
In this competition, there was a very good Valid-LB correlation, so I guessed(correctly) that stacking could work.<br>
I wanted to make sure this layer works, so I tried to do everything conservative.</p>
<h4>Validation:</h4>
<p>3M/2M train/valid setup. For the final subs, I made multiple models with a time-series-split (negligible gain).</p>
<h4>Input Models:</h4>
<p>Multiple models from Sakami (SAINT-based and AKT-based), which had different window sizes. We couldn't fit in all the models, so we chose 3 for our final sub.<br>
Owruby had another SAINT model which both added diversity and improved the stack a lot. Lyaka had a SSAKT model which also helped slightly.<br>
I also added my LGBM model, which surprisingly improved the stack, even with the features added.</p>
<h4>Input Features:</h4>
<p>LGBM stack model: Hand selected 15 features from my single LGBM model. There was literally no gain from the other 65.<br>
NN stack model: Selected features + the last layers from singleNNs. This was done by Lyaka.</p>
<h4>Stacking models:</h4>
<p>LGBM stack model: num_leaves was reduced 1023->127. <br>
NN stack model: Lyaka did both MLP and a Transformer. Both had similar scores, and we used MLP as the final sub.</p>
<h4>Final output:</h4>
<p>Mean of LGBM and NN stack model</p>
<hr>
<h1>Final thoughts:</h1>
<p>Although we couldn't quite achieve the goal we wanted (beating mamas), I think we tried our best and did our best.<br>
Everyone contributed to the final stack, everyone did a ton of work, and we had great teamwork to make the stacking happen. I am really happy with the team we had, like I always had throughout my kaggle history. Thank you all.</p>
<p>You can check <a href="https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/209683" target="_blank">our NN solution</a>, or ask <a href="https://www.kaggle.com/sakami" target="_blank">@sakami</a> </p>
<p>Additional resources:<br>
<a href="https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/209683" target="_blank">NN solution</a><br>
<a href="https://github.com/backonhighway/gm_riiid" target="_blank">My code</a> (As is, and unorganized)<br>
<a href="https://github.com/sakami0000/kaggle_riiid" target="_blank">NN code</a></p> |
Riiid Answer Correctness Prediction | 149th solution write up (LGBM + Transformer ensemble) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Also available at <a href="https://github.com/yoonseok312/riiid-answer-correctness-prediction" target="_blank">https://github.com/yoonseok312/riiid-answer-correctness-prediction</a> with codes.</p>
<p>This is a solution write up for our model, which is an ensemble between a single Light Gradient Boosted Machine model, and a single Encoder-Decoder based Transformer model. It was our first time competing in a Kaggle competition and none of us had previous AI/ML/Stats experience so we learned a lot throughout this competition. Should you have any questions, feel free to contact me at [email protected].</p>
<h1>LightGBM</h1>
<h2>Features</h2>
<p>ts_delta: the gap between timestamp of current content with previous content of the same user.<br>
task_container_id<br>
prior_question_elapsed_time: in second, rounded<br>
prior_question_had_explanation<br>
part<br>
num_tag: number of tags in that question<br>
u_chance: the average correctness of the user until current time<br>
u_attempts: number of contents the user has done<br>
u_attempt_c: number of times the user interacted with the specific content in the past (only counting from >1 interactions due to memory)<br>
c_chance: the average correctness of the question until current time<br>
c_attempts: number of encounters of that question (all users)<br>
u_part_chance: the average correctness of the user doing the same part as the question<br>
u_part_attempts: number of questions of the same part the user has done<br>
u_skill_chance: the average correctness of the user doing the same skill as the question (part < 5: listening, part >= 5: reading)<br>
u_skill_attempts: number of questions of the same skill the user has done<br>
t_chance: the average correctness of the user on questions with a specific tag until current time<br>
t_attempts: the user's number of encounters with that tag<br>
total_explained: number of times explanation was provided to the user until current time (all contents)<br>
10_recent_correctness: user correctness of the most recent questions (up to 10)<br>
10_recent_mean_gap: mean ts_delta of the most recent questions (up to 10)<br>
bundle_elapsed: the mean elapsed time of the bundle, up until the abs time.<br>
mean_elapsed: the mean elapsed time of the user until now.<br>
prev_t1: tag of the last question<br>
prev_cor: correctness of the last question<br>
trueskill_possibility: probability of the user 'beating' the question (getting the question correct) based on TrueSkill<br>
mu: mu value (mean of trueskill ratings) of user<br>
sigma: sigma value (standard deviation of trueskill ratings) of user<br>
Columns with NaN values were filled with -1.</p>
<h2>Cross Validation and train strategy</h2>
<ol>
<li>Define an absolute time for the whole database: abs_time = user_id//50 + timestamp//1000 (see the sketch after this list).</li>
<li>Sort by abs_time. All features mentioned above were engineered so that we do not take data from the future (higher abs_time) into account.</li>
<li>Drop the first 25% of the data. This early portion is noisy, since at that point hardly anyone had studied many of the questions yet.</li>
<li>Take last 25% of data as validation set.</li>
<li>Train the model with remaining 50% of the data.</li>
</ol>
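<p>A minimal sketch of this strategy, assuming the data is loaded into a pandas DataFrame <code>df</code>:</p>
<pre><code>df["abs_time"] = df["user_id"] // 50 + df["timestamp"] // 1000
df = df.sort_values("abs_time").reset_index(drop=True)

n = len(df)
df = df.iloc[n // 4:]                               # drop the noisy first 25%
cut = int(len(df) * 2 / 3)                          # middle 50% of the original data for training
train_df, valid_df = df.iloc[:cut], df.iloc[cut:]   # last 25% for validation
</code></pre>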
<h2>Single Model AUC</h2>
<p>We used fewer than 30 features, but considering that most of the single LGBM models above 0.79 AUC used 40+ features, we did a decent job of focusing on important features.<br>
LB score: AUC 0.789<br>
Number of epochs: 6650 (around 15 hours of training in total)</p>
<h1>Transformer</h1>
<h2>Encoder</h2>
<p>Added below layers with positional encoding.</p>
<p>1) Exercise Related<br>
min_delta: minute difference between this question and the previous one. Capped at 1443 (1 day)<br>
day_delta: day difference between this question and the previous one. Capped at 30<br>
month_delta: month difference between this question and the previous one. Capped at 9<br>
tid: task container id<br>
is_with: whether the question is presented with another question (usually sharing the same task container)<br>
c_part: part, one-hot encoded, also denoting the skill (listening, reading, part 1, 2, …)<br>
tag1…6: tags of the question (t1 to t6 are the tags of one question, t1 being the most important). The above embeddings or Dense layers are concatenated.</p>
<p>2) Content id (cid)<br>
Dense layer</p>
<h2>Decoder</h2>
<p>Added below layers with positional encoding.</p>
<p>1) Response Related<br>
prev_answered_correct: correctness of previous answer.<br>
prior_elapsed: prior elapsed time<br>
prior_explained: whether the prior question had an explanation. The above embeddings or Dense layers are concatenated.</p>
<p>2) Answered Correctly<br>
Concatenate Lecture related Embeddings/Dense<br>
num_lect: number of lectures the user has seen<br>
lec_type: one-hot encoding of the most recent lecture type (llecty1, 2…)<br>
lec_h_past: time since the most recent lecture. The above embeddings or Dense layers are concatenated.</p>
<h2>Parameters</h2>
<p>WINDOW_SIZE: 100<br>
EMBED_DIM: 256<br>
NUM_HEADS: 16</p>
<h2>Cross Validation and train strategy</h2>
<p>Use first 80% of data as train set and last 20% as validation set.</p>
<h2>Single Model AUC</h2>
<p>AUC 0.786<br>
The SAINT model has plenty of room for improvement, but as one epoch took more than 10 hours to train, we decided to focus on improving the LGBM.</p>
<h1>Inference</h1>
<h2>1. Ensembling two models</h2>
<p>Ensembled a single LGBM model and a Transformer model in 0.55 (LGBM) / 0.45 ratio.<br>
AUC: 0.793</p>
<h2>2. Ensembling three models</h2>
<p>Ensembled two LGBM models and a Transformer model. The 2nd LGBM was the same as the first LGBM but without the Trueskill-related features. When 2 of the three models predicted that the user is likely to answer correctly, we used the max value among the 3 predictions; when 2 of the three models predicted that the user is likely to answer wrongly, we used the min value among the 3 predictions. For the remaining cases, we mixed the three models in a 0.4 (Transformer) / 0.45 (first LGBM) / 0.15 (second LGBM) ratio (a sketch of this logic follows below).<br>
AUC: 0.793 (slightly higher than the 1st Inference)</p>
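<p>A minimal sketch of that combination rule; the 0.5 threshold and reading "2 models out of three" as exactly two models agreeing are assumptions, since the write-up does not pin them down:</p>
<pre><code>import numpy as np

def ensemble(p_trans, p_lgb1, p_lgb2, thr=0.5):
    preds = np.stack([p_trans, p_lgb1, p_lgb2])       # (3, n_rows)
    votes = (preds > thr).sum(axis=0)                 # how many models say "correct"
    blended = 0.4 * p_trans + 0.45 * p_lgb1 + 0.15 * p_lgb2
    out = np.where(votes == 2, preds.max(axis=0),     # two say correct -> take the max
          np.where(votes == 1, preds.min(axis=0),     # two say wrong -> take the min
                   blended))                          # otherwise use the weighted mix
    return out
</code></pre>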
<h1>Training Environment</h1>
<p>Our biggest mistake was thinking that all of the feature engineering, training, and inference had to be done in the Kaggle environment. We were only using the Kaggle environment until 2 weeks before the competition ended, and from then on we started to use Google Colab with a GPU and 25GB of RAM. Still, there were several times when Colab took the GPU away from us and didn't give it back for several hours, as we were constantly using their GPUs.</p>
<p>Thanks to my teammates <a href="https://www.kaggle.com/shhrkre" target="_blank">@shhrkre</a>, <a href="https://www.kaggle.com/ysgong" target="_blank">@ysgong</a>, <a href="https://www.kaggle.com/sanmaruum" target="_blank">@sanmaruum</a>, <a href="https://www.kaggle.com/kuraji" target="_blank">@kuraji</a>.</p> |
Riiid Answer Correctness Prediction | 158th Solution LGBM + FTRL, Private LB: 0793 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>My notebook is now public. You can check them out <a href="https://www.kaggle.com/doctorkael/riiid-lgb-ftrl-data-gen-and-training-logic" target="_blank">here</a>!</p>
<h2>Huge thanks to:</h2>
<p>First and foremost I would like to thank the coordinators of RIIID for conducting this awesome competition and all the Kaggle staff who kept constant vigilance for bugs and issues. Despite data leaks and other such issues, the Kaggle team handled them promptly and efficiently. The competition API was one of the most frustrating and yet necessary aspects of a competition such as this one. </p>
<p>The list of people I would like to thank is numerous and large, a few notable mentions but not limited to are:</p>
<ol>
<li><a href="https://www.kaggle.com/shoheiazuma/riiid-lgbm-starter:" target="_blank">https://www.kaggle.com/shoheiazuma/riiid-lgbm-starter:</a> The dataset was huge and intimidating, <a href="https://www.kaggle.com/shoheiazuma" target="_blank">@shoheiazuma</a> notebook gave the idea to train on just the last 24 records. I improvised on this idea a lot. Using final 30M records for training felt like I was missing out some users. Training on final 250 records of each user gave me comparable results to training on the entire dataset. </li>
<li><a href="https://www.kaggle.com/vopani" target="_blank">@vopani</a>'s classic notebooks: <a href="https://www.kaggle.com/rohanrao/tutorial-on-reading-large-datasets" target="_blank">this</a> and <a href="https://www.kaggle.com/rohanrao/riiid-ftrl-ftw" target="_blank">this</a>: These tutorials Introduced me to my now favorite feather format. It also introduced me to DataTable FTRL and Dask. </li>
<li><a href="https://www.kaggle.com/adityaecdrid" target="_blank">@adityaecdrid</a> discussions on repeated user questions and his ideas to use Bit Arrays. He contributed a ton of ideas and intrigued me to come with cleverer features. </li>
<li><a href="https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/196009:" target="_blank">https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/196009:</a> Ditch pandas! All my user state updations were done in a loop. Easier on the pipeline and easier for my brain too :)</li>
<li><a href="https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/196942:" target="_blank">https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/196942:</a> A kaggle GM <a href="https://www.kaggle.com/silogram" target="_blank">@silogram</a> using LGBM to hit high ranks inspired me not to give up so soon when my score stuck at local optimas.</li>
<li><a href="https://www.kaggle.com/anuragtr" target="_blank">@anuragtr</a> discussion <a href="https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/198245" target="_blank">here</a> on how to prevent ram spikes when starting LGB training.</li>
</ol>
<h2>The Model Idea and difficulties faced:</h2>
<p>So what did <em>I</em> do? My model idea is quite simple: <strong>8 bagged LGBM models fed into an FTRL model</strong>. The Kaggle CPU environment had a memory restriction of 16GB, and even concatenating all the features together threw that horrible OOM error. I created features piece by piece and concatenated them into three different chunks to create the training data. </p>
<p>Now we have 100M rows containing 38 features. Training an LGB on this much data would still throw an OOM, so I decided to do bagging (which would still throw an OOM error if you didn't slowly convert the data into float numpy arrays yourself using loops). Bagging is not the option you'd go for if you wish to make more accurate predictions, but a discussion mentioned that training on a larger dataset gave a score boost of ~0.01, and so I did. </p>
<p>Training on the first chunk (25M) gave a CV of 0.791. After 7 such chunks, I got a CV of 0.792. Frustrated by seeing my LB ranks drop, and to improve the predictive power a bit, I decided to add another model to the mix: an FTRL. </p>
<p>I was also motivated to use FTRL for two reasons: </p>
<ol>
<li>I only had two days left before the competition ended & FTRL was super fast!</li>
<li>The FTRL model is best suited for applications where the predictions need to be dynamic and evolving. </li>
</ol>
<p>FTRL by itself didn't perform very well on my CVs, so I decided to feed my LGB model's predictions to it as a feature to help it make its own predictions. If the FTRL sees the LGB making false predictions during inference, it can improve or alter its predictions accordingly. Doing this improved my LB from 0.792 to 0.793. I have no prior experience with FTRL models; perhaps with better features the FTRL might have worked even better.</p>
<h2>Some of my personal takeaways:</h2>
<ul>
<li><em>Don't shy away from newer unfamiliar concepts!</em> With such a huge time frame given, I could have learned and implemented a transformer model. Instead I chose to stick with what I knew best: brute-forcing my way with hand-coded features. That said, this approach taught me how to make the most out of the limited resources given.</li>
<li><em>Numba is awesome!</em> Tons of code speed-up, all with a simple decorator! Numba, although fast, may throw an OOM if used blindly. My notebook (soon to be published) demonstrates how I leveraged this awesome library (a small sketch follows this list).</li>
<li><em>LB ranks don't matter!</em> To be honest, it was painful to see my LB rank drop from 60 all the way down to 158. But in the end, I managed to accept it and even laugh about it! See <a href="https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/208356" target="_blank">this</a> hilarious meme thread.</li>
</ul>
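<p>A small, illustrative example of the kind of loop-based feature that Numba accelerates (not the exact features used here): a leakage-free running accuracy per user, compiled with <code>@njit</code>:</p>
<pre><code>import numpy as np
from numba import njit

@njit
def running_user_accuracy(user_idx, correct):
    # user_idx: user ids remapped to 0..n_users-1, rows sorted by time
    n_users = user_idx.max() + 1
    counts = np.zeros(n_users, dtype=np.int64)
    sums = np.zeros(n_users, dtype=np.int64)
    out = np.full(len(user_idx), np.nan)
    for i in range(len(user_idx)):
        u = user_idx[i]
        if counts[u] > 0:
            out[i] = sums[u] / counts[u]   # accuracy strictly before the current row
        counts[u] += 1
        sums[u] += correct[i]
    return out
</code></pre>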
<hr>
<p>I enjoyed participating in this competition. It gave me so much more than I imagined it would. To all the winners and participants of this competition: <em>My heartiest congratulations and a Happy new year!</em></p>
<hr> |
Riiid Answer Correctness Prediction | 16th Place - Single Model | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://www.kaggle.com/yihdarshieh" target="_blank">@yihdarshieh</a> I'm maintaining the following structure in memory. I was inspired by different threads in the forum and benchmarked multiple solutions before keeping this one. A per-user, per-content attempts dictionary costs too much memory if stored as integers (even <code>np.int8</code>), because it could be up to 393k x 13k. To lower memory usage I'm using 3 bitarrays (indexed by content_id) to encode 2x2x2=8 attempt values. As we're using attempts as categories we have 0-7 values, and all attempts beyond 7 fall into the last category, which means 7 or higher. </p>
<p>We also need to store per user history for:</p>
<ul>
<li>Questions</li>
<li>Answers</li>
<li>Lags</li>
<li>Elapsed time</li>
<li>Attempts</li>
<li>Part (optional)</li>
<li>Had explanation (optional)</li>
</ul>
<p>The attempts history per user is computed on the fly from the <code>cseen</code> bits, which only store the running total (that's the trick).</p>
<p>Our different models (LGBM, Transformer) can share this dictionary.</p>
<p><code>per_user_dict = defaultdict(CustomDictNP)</code></p>
<pre><code>import numpy as np
from bitarray import bitarray

def bit_array():
    b = bitarray(13530, endian='little')  # slightly larger than the total number of questions, in case new ones appear in the test set
    b.setall(False)
    return b

class CustomDictNP:
    def __init__(self):
        # Count/sum for further average
        self.qc = 0     # answers count
        self.qs = 0     # answers sum (correct)
        self.s = 0      # session id
        self.sc = 1     # session count
        self.ss = 0     # total sessions
        self.qes = 0.0  # total elapsed time
        # History
        self.ha = np.array([], dtype=np.bool)      # answers
        self.hp = np.array([], dtype=np.int8)      # parts
        self.hq = np.array([], dtype=np.int16)     # questions
        self.he = np.array([], dtype=np.float32)   # elapsed time
        self.hx = np.array([], dtype=np.bool)      # prior question had explanation
        self.hl = np.array([], dtype=np.float32)   # lag
        self.hat = np.array([], dtype=np.int8)     # attempts
        # Last timestamps
        self.q = 0  # Questions
        self.timestamp_u = []
        self.timestamp_u_correct = []
        # content_id seen: 3 bit-planes encode an attempt count from 0 to 7
        self.cseen0 = bit_array()
        self.cseen1 = bit_array()
        self.cseen2 = bit_array()

def add_attempt(custom, cid):
    new_attempt = get_attempts(custom, cid) + 1
    if new_attempt < 8:
        if new_attempt == 1:
            custom.cseen0[cid] = True
            custom.cseen1[cid] = False
            custom.cseen2[cid] = False
        elif new_attempt == 2:
            custom.cseen0[cid] = False
            custom.cseen1[cid] = True
            custom.cseen2[cid] = False
        elif new_attempt == 3:
            custom.cseen0[cid] = True
            custom.cseen1[cid] = True
            custom.cseen2[cid] = False
        elif new_attempt == 4:
            custom.cseen0[cid] = False
            custom.cseen1[cid] = False
            custom.cseen2[cid] = True
        elif new_attempt == 5:
            custom.cseen0[cid] = True
            custom.cseen1[cid] = False
            custom.cseen2[cid] = True
        elif new_attempt == 6:
            custom.cseen0[cid] = False
            custom.cseen1[cid] = True
            custom.cseen2[cid] = True
        else:
            custom.cseen0[cid] = True
            custom.cseen1[cid] = True
            custom.cseen2[cid] = True

def get_attempts(custom, cid):
    # Decode the 3 bit-planes back into an integer attempt count (0-7)
    return custom.cseen0[cid] + 2 * custom.cseen1[cid] + 4 * custom.cseen2[cid]
</code></pre>
<p><a href="https://www.kaggle.com/abdurrehman245" target="_blank">@abdurrehman245</a> The real reason was to limit memory usage for inference. We limit attempts from 0 to 7, everything above 7 falls in the last category which is indeed 7 and beyond.</p>
<p>We're going to upload the full pytorch model on GitHub with the weights.</p>
<p><a href="https://www.kaggle.com/returnofsputnik" target="_blank">@returnofsputnik</a> As you've noticed, the main difference is that I'm not using exactly the same features. I'm not using <code>attempts</code> at all. The configuration to get LB=0.795 with Pytorch model is below, it requires <code>seq_len=256</code>. We've noticed that <code>seq_len</code> is a quite important to get a better model. The training procedure is also important, I'm not using the smart window described by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>. I've spent some time to fix the <code>NaN</code> loss issue with Pytorch <code>MultiHeadAttention</code> + padding masks. Root cause was right padding instead of left padding. I've stopped to try improving it once we got better results with <a href="https://www.kaggle.com/rafiko1" target="_blank">@rafiko1</a> TensorFlow model which uses <code>seq_len=512</code>. Other difference is <code>sigmoid</code> vs <code>softmax</code> but I don't think it's really important.</p>
<pre><code>class raw_conf:
    pad_mode = "token"
    pad_right = False
    flatten = True
    sampler = None  # "prob"  # Option to give long user's sequence higher probability
    seq_len = 256  # 100
    embedding_dim = 256
    exercices_id_size = 13523
    exercices_part_size = 7
    response_size = 2
    elapsed_time_cat = True
    elapsed_time_size = 73  # Categories after binning
    lag_time_cat = True
    lag_time_size = 366  # Categories after binning
    explanation_size = 2
    # Model
    nhead = 8
    num_encoder_layers = 4
    num_decoder_layers = 4
    dim_feedforward = 2048
    dropout = 0.1
    activation = None
    num_classes = 1
    loss = MaskedBCEWithLogitsLoss(num_classes)
    post_activation = "sigmoid"
    optimizer = "Noam"  # "Adam"
    scheduler = "Cosine" if optimizer == "Adam" else None
    lr = 0.0001
    min_lr = 0.00005
    beta1 = 0.9
    BATCH_SIZE = 128
</code></pre>
<p>The best model is with TensorFlow (<code>seq_len=512</code>), Public LB=0.812, Pytorch (<code>seq_len=256</code>) one is only Public LB=0.795.</p>
<p>We've tried it but results were close (little bit lower) than without.</p>
<p><a href="https://www.kaggle.com/returnofsputnik" target="_blank">@returnofsputnik</a> We've uploaded Pytorch full code here:<br>
<a href="https://github.com/rafiko1/Riiid-sharing/tree/main/pytorch" target="_blank">https://github.com/rafiko1/Riiid-sharing/tree/main/pytorch</a><br>
It's MODEL-PT-v14</p>
<p>Hi all,</p>
<p>Here is an insight of our 2 solutions that both scored public 0.812/private 0.815 and that reached 16th gold place.</p>
<p>This competition was about both ML and engineering optimization to make everything work in 9h with 13GB RAM/16GB GPU. We spent almost 30% of our time on optimization to keep the last 512 interactions per user, the per-content attempts, and everything else required for our features in memory.</p>
<p>We would like to thank Kaggle and RIIID organizers for this great competition! Congratulations to the top teams and all competitors for their motivation all along the challenge.<br>
I would like to thank my teammates <a href="https://www.kaggle.com/rafiko1" target="_blank">@rafiko1</a>, <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> <a href="https://www.kaggle.com/titericz" target="_blank">@titericz</a> and <a href="https://www.kaggle.com/matthiasanderer" target="_blank">@matthiasanderer</a>. You've been amazing, I've learnt a lot from you. I really enjoyed this competition.</p>
<h2>Solution 1: Single transformer model</h2>
<p>The SAINT+ model is described here <a href="https://arxiv.org/pdf/2010.12042.pdf" target="_blank">https://arxiv.org/pdf/2010.12042.pdf</a><br>
The code for our SAINT+ adaptation is available here <a href="https://github.com/rafiko1/Riiid-sharing" target="_blank">https://github.com/rafiko1/Riiid-sharing</a>.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F698363%2Fa47293277b9e989ab5c81c269e6187a8%2Fdoc_saint.png?generation=1610121546239700&alt=media" alt=""></p>
<p>Our single model SAINT++ achieved, CV: 0.812 Public LB: 0.812, Private LB: 0.815</p>
<p>We trained with 95% of users first, then fine-tuned with all data using the smart window technique (see the diagram below for SAKT). The model is simple in terms of features: it only contains the four features of SAINT+ (pictured above), with one additional feature - the number of attempts of a user on a specific content (hence SAINT++):</p>
<ul>
<li>Content id</li>
<li>Lag time</li>
<li>Prior question elapsed time</li>
<li>Previous responses</li>
<li>Number of attempts </li>
</ul>
<p>The greatest improvement in features compared to SAINT+ came from grouping lag time into seconds, unlike minutes as done in the paper. <br>
Then, we went bigger and bigger on the architecture and burned some GPU power 🔥. We increased the parameters of the model, most importantly the sequence length and the number of layers. The final parameters of the model are as follows: </p>
<ol>
<li>Input Sequence length: 512</li>
<li>Encoding layers: 4</li>
<li>Decoding Layers: 4</li>
<li>Embedding size: 288</li>
<li>Dense Layer: 768</li>
<li>heads: 8</li>
<li>Dropout: 0.20</li>
</ol>
<p>We used the Noam learning rate scheduler: with initial warmup and exponential decrease down to 2e-5. </p>
<p>The final improvement came from our <strong><em>recursive trick</em></strong> during inference. Here, we rounded predictions that came from the same bundle to <strong><em>0 or 1</em></strong> - as their true responses are not known yet at that point in time. The rounded predictions are then fed back to the model to predict the next response within the same bundle. This trick boosts CV/LB by +0.0025, but requires a batch size of 1, so we couldn't ensemble multiple transformer models.</p>
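<p>A minimal sketch of the recursive bundle trick; <code>predict_one</code> and <code>append_interaction</code> are hypothetical helpers shown only to illustrate the control flow:</p>
<pre><code>import numpy as np

def predict_bundle(model, history, bundle_questions, thr=0.5):
    preds = []
    for q in bundle_questions:
        p = predict_one(model, history, next_question=q)   # hypothetical model call
        preds.append(p)
        # The true response is not known yet, so feed back the rounded prediction
        history = append_interaction(history, q, response=int(p >= thr))
    return np.array(preds)
</code></pre>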
<h3>Solution 2: Ensemble of transformer, modified SAKT and LGB</h3>
<ul>
<li><p>LightGBM model scored CV=0.793, public LB=0.792 with 44 features. <br>
Our main features: <br>
Question correctness per content and per user, tags 1 and 2, part, elapsed time, had explanation, number of attempts, multiple lags, running average (answer), multiple rolling means/median (answers, lags, elapsed time) + weighted mean, mean after/before 30 interactions, multiple momentums (lag, answers), per part correctness, per session (8 hours split) running average. Only 3 categories: part, tags1, tags2. Train/valid split from <a href="https://www.kaggle.com/its7171/cv-strategy" target="_blank">Tito</a>. </p></li>
<li><p>Pytorch SAKT modified scored CV=0.786, LB=0.789 with additional features. Training procedure with a smart window. </p></li>
<li><p>When a user’s sequence length is larger than model input, i.e. N>W, then using random crops gives +0.002 CV versus tiled crops. And using smart random crops gives +0.003 CV versus tiled crops. Basic random crops have a low probability of selecting the early or late questions from a user’s sequence whereas smart random crops have an equally likely probability of selecting all questions from a user.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F698363%2Fa1239fda69769669432879b2b1a4ae39%2Fdoc_window.png?generation=1610121604322940&alt=media" alt=""></p></li>
<li><p>TensorFlow Transformer model alone scored CV=0.811, LB=0.811</p>
<p>Same as solution#1 but with sequence length = 256</p></li>
</ul>
<h3>What did not work:</h3>
<ul>
<li>TabNet</li>
<li>Features with lectures for LGB. It worked on CV but not on LB (might be an issue in inference).</li>
<li>Post-processing using the absolute position of the question, a.k.a. the question sequence number. Plotting mean(answered_correctly) vs question number looked like the image below. We can see that the first 30 questions have a different distribution compared with the rest. It also looks like there are subsequent batches of 30 questions (visible if we zoom into the plot below). PP using that information worked in CV, improving it by around 0.0009, but didn't work on the LB. <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F698363%2Fe4b8abca67c11258a438304b8fc67db5%2Fdoc_pp.png?generation=1610121577542081&alt=media" alt=""></li>
</ul>
<h3>What worked partially:</h3>
<p>But was not applicable for us within the 9h runtime limit:</p>
<ul>
<li>More than 3 models ensemble</li>
<li>Level 2 model (XGB) could boost by +0.001</li>
</ul>
<h3>Lessons learnt:</h3>
<ul>
<li>Start inference Kernel as soon as possible when you need to deal with an API.</li>
<li>Try to simulate API locally to understand how data will be handled. <a href="https://www.kaggle.com/its7171/time-series-api-iter-test-emulator" target="_blank">Tito</a>’s simulator was perfect for that purpose.</li>
<li>Push your inference (with the simulator) to the limits to debug it, it will avoid the frustrating “submission scoring error”. </li>
<li>Team-up at some point, your teammates always have good ideas.</li>
</ul>
<p>One additional word to Kaggle <a href="https://www.kaggle.com/sohier" target="_blank">@sohier</a>: I loved your API and the way it hides private data; it’s more realistic, as in real-world usage/production, and it avoided chaotic blending. Congratulations for that. However, even if I guess you want to prevent probing, you should find a solution to provide better error feedback. If that is not possible (the more error codes, the more probing), then you need to provide a simulator and guidelines allowing competitors to troubleshoot locally.</p>
<p>Thanks! Our best single model + features is with TensorFlow. <a href="https://www.kaggle.com/rafiko1" target="_blank">@rafiko1</a> if you get a chance to share it …</p>
<p>In the other <a href="https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/195632" target="_blank">thread</a>, it was a Pytorch model (with simple features) that only reached public LB 0.795. Here it is if it can help. </p>
<pre><code># Model
# Note: PositionalEncoding and the column-name constants (CONTENT_ID, TARGET, PART, LAG,
# PRIOR_QUESTION_ELAPSED_TIME, PRIOR_QUESTION_HAD_EXPLANATION) are defined elsewhere
# in the training pipeline (not shown here).
import torch
import torch.nn as nn


class RIIIDModel(nn.Module):
    def __init__(self, cfg, verbose=False):
        super().__init__()
        self.response_size = cfg.response_size
        self.lag_time_size = cfg.lag_time_size
        self.elapsed_time_size = cfg.elapsed_time_size
        self.explanation_size = cfg.explanation_size
        self.attempt_size = cfg.attempt_size
        self.seq_len = cfg.seq_len
        self.embedding_dim = cfg.embedding_dim
        self.elapsed_time_cat = cfg.elapsed_time_cat
        self.lag_time_cat = cfg.lag_time_cat
        self.num_classes = cfg.num_classes
        self.verbose = verbose
        self.pad_mode = cfg.pad_mode
        self.pos_encoder1 = None
        self.pos_encoder2 = None
        additional_token_dim = 1 if self.pad_mode == "token" else 0
        # Exercices embeddings
        self.exercices_id_embedding = nn.Embedding(cfg.exercices_id_size + additional_token_dim, self.embedding_dim)
        self.exercices_part_embedding = nn.Embedding(cfg.exercices_part_size + additional_token_dim, self.embedding_dim) if cfg.exercices_part_size is not None else None
        # Response embeddings
        self.response_embedding = nn.Embedding(cfg.response_size + 1 + additional_token_dim, self.embedding_dim)  # +1 to include start token
        if self.elapsed_time_cat is True:
            self.elapsed_time_embedding = nn.Embedding(cfg.elapsed_time_size + 1 + additional_token_dim, self.embedding_dim) if cfg.elapsed_time_size is not None else None  # +1 to include start token
        else:
            self.elapsed_time_embedding = nn.Linear(1, self.embedding_dim, bias=False) if cfg.elapsed_time_size is not None else None  # Continuous embedding
        if self.lag_time_cat is True:
            self.lag_time_embedding = nn.Embedding(cfg.lag_time_size + 1 + additional_token_dim, self.embedding_dim) if cfg.lag_time_size is not None else None  # +1 to include start token
        else:
            self.lag_time_embedding = nn.Linear(1, self.embedding_dim, bias=False) if cfg.lag_time_size is not None else None  # Continuous embedding
        self.explanation_embedding = nn.Embedding(cfg.explanation_size + 1 + additional_token_dim, self.embedding_dim) if cfg.explanation_size is not None else None  # +1 to include start token
        input_features_dim = self.embedding_dim
        # Position encoder (relative or absolute position of the tokens in the sequence)
        if cfg.position_encoding_enabled is True:
            self.pos_encoder1 = PositionalEncoding(input_features_dim, cfg.dropout)
            self.pos_encoder2 = self.pos_encoder1
        # Transformer with default encoder/decoder
        self.transformer = nn.Transformer(d_model=input_features_dim,
                                          nhead=cfg.nhead,
                                          num_encoder_layers=cfg.num_encoder_layers,
                                          num_decoder_layers=cfg.num_decoder_layers,
                                          dim_feedforward=cfg.dim_feedforward,
                                          dropout=cfg.dropout,
                                          activation='relu',
                                          custom_encoder=None,
                                          custom_decoder=None)
        # Decoder
        self.fc = nn.Linear(input_features_dim, self.num_classes)

    # If a BoolTensor is provided, the positions with the value of True will be ignored
    # while the positions with the value of False will be unchanged, e.g.:
    # tensor([[False,  True,  True,  True],
    #         [False, False,  True,  True],
    #         [False, False, False,  True],
    #         [False, False, False, False]])
    def generate_mask(self, size, diagonal=1):
        return torch.triu(torch.ones(size, size) == 1, diagonal=diagonal)

    def forward(self, data, src_mask=None, tgt_mask=None, mem_mask=None, src_key_padding_mask=None, tgt_key_padding_mask=None, memory_key_padding_mask=None):
        # Each input is (BS, seq_len)
        # Content
        data_content_id = data[CONTENT_ID].long()
        # Answers
        data_response = data[TARGET].long()
        # Optional features
        data_part = data[PART].long() if self.exercices_part_embedding is not None else None
        if self.elapsed_time_cat is True:
            data_elapsed_time = data[PRIOR_QUESTION_ELAPSED_TIME].long() if self.elapsed_time_embedding is not None else None
        else:
            data_elapsed_time = data[PRIOR_QUESTION_ELAPSED_TIME].float().unsqueeze(2) if self.elapsed_time_embedding is not None else None
        if self.lag_time_cat is True:
            data_lag_time = data[LAG].long() if self.lag_time_embedding is not None else None
        else:
            data_lag_time = data[LAG].float().unsqueeze(2) if self.lag_time_embedding is not None else None
        data_explanation = data[PRIOR_QUESTION_HAD_EXPLANATION].long() if self.explanation_embedding is not None else None
        # Start token(s)
        # Add start token to correctness
        data_response = torch.roll(data_response, shifts=(0, 1), dims=(0, 1))  # Shift right the sequence
        data_response[:, 0] = self.response_size  # Start token (2)
        # Add start token to lag time
        if data_lag_time is not None:
            data_lag_time = torch.roll(data_lag_time, shifts=(0, 1), dims=(0, 1))  # Shift right the sequence
            data_lag_time[:, 0] = self.lag_time_size  # Start token
        # Add start token to elapsed time
        if data_elapsed_time is not None:
            data_elapsed_time = torch.roll(data_elapsed_time, shifts=(0, 1), dims=(0, 1))  # Shift right the sequence
            if self.elapsed_time_cat is True:
                data_elapsed_time[:, 0] = self.elapsed_time_size  # Start token
            else:
                data_elapsed_time[:, 0] = 0.0
        # Add start token to explanation
        if data_explanation is not None:
            data_explanation = torch.roll(data_explanation, shifts=(0, 1), dims=(0, 1))  # Shift right the sequence
            data_explanation[:, 0] = self.explanation_size  # Start token
        # Questions, Part, Elapsed time, Lag embeddings
        x_content_id = self.exercices_id_embedding(data_content_id)  # (BS, seq_len, embedding_dim)
        x_exercices_part = self.exercices_part_embedding(data_part) if self.exercices_part_embedding is not None else None  # (BS, seq_len, embedding_dim)
        x_elapsed_time = self.elapsed_time_embedding(data_elapsed_time) if self.elapsed_time_embedding is not None else None  # (BS, seq_len, embedding_dim)
        x_lag_time = self.lag_time_embedding(data_lag_time) if self.lag_time_embedding is not None else None  # (BS, seq_len, embedding_dim)
        x_explanation = self.explanation_embedding(data_explanation) if self.explanation_embedding is not None else None  # (BS, seq_len, embedding_dim)
        # Response embeddings
        x_correctness = self.response_embedding(data_response)  # (BS, seq_len, embedding_dim)
        x_position = None
        # No attempt or task embeddings are defined in this version of the model,
        # so the corresponding terms below are simply skipped
        x_attempt = None
        x_exercices_task = None
        # Ei (sum of embeddings)
        x_exercices = x_content_id
        if x_exercices_part is not None:
            x_exercices = x_exercices + x_exercices_part  # (BS, seq_len, embedding_dim)
        x_position_exercices = self.pos_encoder1(x_exercices) if self.pos_encoder1 is not None else x_exercices  # (BS, seq_len, embedding_dim)
        # Ri (sum of embeddings) [S, R1, Rk-1], S is start token
        x_responses = x_correctness
        if x_lag_time is not None:
            x_responses = x_responses + x_lag_time
        if x_elapsed_time is not None:
            x_responses = x_responses + x_elapsed_time  # (BS, seq_len, embedding_dim)
        if x_explanation is not None:
            x_responses = x_responses + x_explanation  # (BS, seq_len, embedding_dim)
        if x_attempt is not None:
            x_responses = x_responses + x_attempt  # (BS, seq_len, embedding_dim)
        if x_exercices_task is not None:
            x_responses = x_responses + x_exercices_task  # (BS, seq_len, embedding_dim)
        x_position_responses = self.pos_encoder2(x_responses) if self.pos_encoder2 is not None else x_responses  # (BS, seq_len, embedding_dim)
        # Transformer src: (S,N,E), tgt:(T,N,E), src_mask:(S,S), tgt_mask:(T,T)
        # where S is the source sequence length, T is the target sequence length, N is the batch size, E is the feature number
        # output: (T,N,E)
        # src_key_padding_mask: (N,S), tgt_key_padding_mask: (N,T), memory_key_padding_mask: (N,S)
        x_position_exercices = x_position_exercices.transpose(1, 0)  # (seq_len, BS, embedding_dim)
        x_position_responses = x_position_responses.transpose(1, 0)  # (seq_len, BS, embedding_dim)
        x_transformer = self.transformer(src=x_position_exercices, tgt=x_position_responses, src_mask=src_mask, tgt_mask=tgt_mask, memory_mask=mem_mask,
                                         src_key_padding_mask=src_key_padding_mask, tgt_key_padding_mask=tgt_key_padding_mask, memory_key_padding_mask=memory_key_padding_mask)  # (seq_len, BS, embedding_dim)
        x_transformer = x_transformer.transpose(1, 0)  # (BS, seq_len, embedding_dim)
        output = self.fc(x_transformer)
        output = output.squeeze(dim=2)
        return output
</code></pre>
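<p>As a quick sanity check, here is a minimal smoke test for the model above on random tensors. The <code>cfg</code> values and the column-name constants are placeholders invented for this sketch (the real config and data pipeline are not shown), and positional encoding is disabled so that the external <code>PositionalEncoding</code> class is not needed:</p>
<pre><code>from types import SimpleNamespace
import torch

# hypothetical column-name constants; the original pipeline defines its own
CONTENT_ID, TARGET, PART = "content_id", "answered_correctly", "part"
LAG = "lag"
PRIOR_QUESTION_ELAPSED_TIME = "prior_question_elapsed_time"
PRIOR_QUESTION_HAD_EXPLANATION = "prior_question_had_explanation"

cfg = SimpleNamespace(
    response_size=2, lag_time_size=300, elapsed_time_size=300, explanation_size=2,
    attempt_size=None, seq_len=100, embedding_dim=64, elapsed_time_cat=True,
    lag_time_cat=True, num_classes=1, pad_mode="none", exercices_id_size=13523,
    exercices_part_size=7, position_encoding_enabled=False, nhead=4,
    num_encoder_layers=2, num_decoder_layers=2, dim_feedforward=256, dropout=0.1,
)
model = RIIIDModel(cfg)

bs, seq_len = 8, cfg.seq_len
data = {
    CONTENT_ID: torch.randint(0, cfg.exercices_id_size, (bs, seq_len)),
    TARGET: torch.randint(0, 2, (bs, seq_len)),
    PART: torch.randint(0, cfg.exercices_part_size, (bs, seq_len)),
    LAG: torch.randint(0, cfg.lag_time_size, (bs, seq_len)),
    PRIOR_QUESTION_ELAPSED_TIME: torch.randint(0, cfg.elapsed_time_size, (bs, seq_len)),
    PRIOR_QUESTION_HAD_EXPLANATION: torch.randint(0, 2, (bs, seq_len)),
}
causal = model.generate_mask(seq_len)   # (seq_len, seq_len) bool mask, True = blocked
out = model(data, src_mask=causal, tgt_mask=causal, mem_mask=causal)
print(out.shape)                        # torch.Size([8, 100])
</code></pre>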
<p><a href="https://www.kaggle.com/tikutiku" target="_blank">@tikutiku</a> Thank you! </p> |
Riiid Answer Correctness Prediction | 17th place solution - 4 features model | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Well who is learning, me or the model? :)</p>
<p>Thanks, you were pushing me to the limit, sorry I couldn't keep up :) I was really tired the last 2-3 weeks (not that it's an excuse) and could make almost no progress.</p>
<p>Of course I struggled: out of my first 13 submissions, 4 were successful :) All three of my models preprocess the data in the same way, so I had to do it only once!</p>
<p>I preprocessed the data locally into python dictionary with the following function, and uploaded it as a kaggle dataset. During inference, I was simply updating the inner dictionaries.</p>
<pre><code>import torch

def csv_to_dict(df):
    """maps the training data from csv to dictionary"""
    # separate question events
    questions_df = df[df["content_type_id"] == False]
    # fill nans
    questions_df["prior_question_elapsed_time"] = questions_df["prior_question_elapsed_time"].fillna(301000)
    # questions history container
    questions_container = dict(tuple(questions_df.groupby(["user_id"])))
    # df -> tensors
    for user_id in questions_container.keys():
        tmp_df = questions_container[user_id]
        questions_container[user_id] = {
            "content_id": torch.LongTensor(tmp_df["content_id"].values),
            "timestamp": torch.LongTensor(tmp_df["timestamp"].values),
            "prior_question_elapsed_time": torch.LongTensor(tmp_df["prior_question_elapsed_time"].values),
            "answered_correctly": torch.ShortTensor(tmp_df["answered_correctly"].values),
        }
    return questions_container
</code></pre>
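<p>The update step during inference is not shown above; a hedged sketch of what "updating the inner dictionaries" could look like is simply concatenating the newly observed rows of a user onto the stored tensors:</p>
<pre><code>import torch

def update_user(questions_container, user_id, new_rows):
    """new_rows: dict with the same four keys as above, holding 1-D tensors
    for the user's newly observed question events."""
    if user_id not in questions_container:
        questions_container[user_id] = new_rows
        return
    user = questions_container[user_id]
    for key in ("content_id", "timestamp", "prior_question_elapsed_time", "answered_correctly"):
        user[key] = torch.cat([user[key], new_rows[key]])
</code></pre>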
<p>No, it's a vector representation: <a href="https://pytorch.org/docs/stable/generated/torch.nn.Embedding.html" target="_blank">Embedding</a></p>
<p>Part and tags improved my score on 10% of the data, but with 100% I found that the transformer learns them by itself :)<br>
Thanks!</p>
<p>My solution is based on a stack of the 3 models:</p>
<p>Edit: <a href="https://github.com/NikolaBacic/riiid" target="_blank">Code</a></p>
<p>Transformer(Encoder) - Validation: 0.8100<br>
LSTM - Validation: 0.8062<br>
GRU - Validation: 0.8060</p>
<p>LightGBM for stack: Validation: 0.8119 LB: 0.814x</p>
<p>I validated on new users (2.5M rows). <a href="https://www.kaggle.com/its7171/cv-strategy" target="_blank">This one</a> is great, but it was computationally expensive for me. </p>
<p>All three models used 4 features (embeddings):</p>
<ul>
<li>question id embedding</li>
<li>response of the previous question (1-correct 0-incorrect)</li>
<li><strong>ln(lag+1) * minute_embedding : taking a ln(x+1) of the lag initialy improved my score by ~0.01. I'd be very interested to hear whether it'd improve your models too</strong></li>
<li>ln(prior_elapsed_time+1) * minute_embedding</li>
</ul>
<p>Input is sum of these 4 embeddings.</p>
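<p>A hedged sketch of one way to read that description (names and sizes are illustrative): a learned vector per time feature is scaled by ln(x+1) of the time in minutes and added to the summed categorical embeddings.</p>
<pre><code>import torch
import torch.nn as nn

class InputEmbedding(nn.Module):
    def __init__(self, n_questions, d_model):
        super().__init__()
        self.question_emb = nn.Embedding(n_questions, d_model)
        self.prev_response_emb = nn.Embedding(2, d_model)            # 0 = wrong, 1 = correct
        self.lag_minute_vec = nn.Parameter(torch.randn(d_model) * 0.02)
        self.elapsed_minute_vec = nn.Parameter(torch.randn(d_model) * 0.02)

    def forward(self, question_id, prev_response, lag_minutes, elapsed_minutes):
        # all inputs: (batch, seq_len); the two time inputs are floats in minutes
        x = self.question_emb(question_id) + self.prev_response_emb(prev_response)
        x = x + torch.log1p(lag_minutes).unsqueeze(-1) * self.lag_minute_vec
        x = x + torch.log1p(elapsed_minutes).unsqueeze(-1) * self.elapsed_minute_vec
        return x                                                     # (batch, seq_len, d_model)
</code></pre>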
<p><strong>Parameters</strong></p>
<p>Shareable parameters (all three models):<br>
max_quest = 300 (window size)<br>
slide = 150<br>
Adam optimizer<br>
cosine lr scheduler<br>
BCE loss<br>
<strong>xavier_uniform_ weight initialization (0.004 improvement over PyTorch's default one)</strong></p>
<p>I used Optuna for hyperparameter search on 20% of the data. It was my first time using it, and it's a great tool!</p>
<p>Transformer hyperparameters:<br>
nhead = 8<br>
head_dim = 60<br>
dim_feedforward = 2048<br>
num_encoder_layers = 8<br>
epochs = 6<br>
batch_size = 64<br>
lr = 0.00019809259513409007<br>
warmup_steps = 150*5</p>
<p>LSTM hyperparameters:<br>
input_size_lstm = 384<br>
hidden_size_lstm = 768<br>
num_layers_lstm = 4<br>
epochs = 4<br>
batch_size = 64<br>
lr = 0.0007019926812886481<br>
warmup_steps = 100</p>
<p>GRU hyperparameters:<br>
input_size_gru = 320<br>
hidden_size_gru = 512<br>
num_layers_gru = 3<br>
epochs = 4<br>
batch_size = 64<br>
lr = 0.0008419253431185227<br>
warmup_steps = 80</p>
<p>What didn't work:<br>
-lectures<br>
-questions metadata<br>
-position encoding: both learnable and hard-coded<br>
-predicting user_answer instead of correctness<br>
-gradient clipping<br>
-label smoothing<br>
-dropout<br>
-…</p>
<p>gg</p>
<blockquote>
<p>-Response of the previous question is for quesiton_id or for task_container_id ?</p>
</blockquote>
<p>It's for question_id. Yes, there is some leakage, but when I tried to fix that I got some funny results. Maybe my implementation wasn't right :? </p>
<blockquote>
<p>-Did you try to sum mutiple previous responses ? Like a rolling window</p>
</blockquote>
<p>I didn't. I think that transformer can pick that up, right?</p>
<p>This is related to the sampling strategy. <code>max_quest</code> is maximum number of questions processed at a time i.e. maximum input sequence length.<br>
Now, some users might have <code>> max_quest</code> questions in their histories so you need to make multiple samples from one user. This is when <code>slide</code> comes into play. The simplest way to handle this is to set <code>slide = max_quest</code> and get samples from 0-300, 300-600, 600-900…<br>
But this is suboptimal because questions close to the left hand side of the sample, like 300:350, or 600-650 can't see recent history. So I set <code>slide = 150</code> and make samples like this:</p>
<p>sample1: 0-300 -> calculate loss on all positions<br>
sample2: 150-450 -> calculate loss on 300-450 positions<br>
sample3: 300-600 -> calculate loss on 450-600 positions<br>
etc.</p>
<p>Hopefully this is clear enough.</p>
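<p>In code, the sampling described above could look like this sketch; the loss is computed only from <code>loss_from</code> onwards for every sample except the first:</p>
<pre><code>def make_samples(n_interactions, max_quest=300, slide=150):
    """Return (start, end, loss_from) index tuples for one user."""
    samples, start = [], 0
    while True:
        end = min(start + max_quest, n_interactions)
        loss_from = 0 if start == 0 else start + slide
        samples.append((start, end, loss_from))
        if end == n_interactions:
            break
        start += slide
    return samples

print(make_samples(700))
# [(0, 300, 0), (150, 450, 300), (300, 600, 450), (450, 700, 600)]
</code></pre>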
<p>Yes, but the embedding is not shared. Not that it makes much of a difference; I remember trying that and it gave me a negligibly worse result.</p>
Riiid Answer Correctness Prediction | 18th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>thanks for your care,we found the bug is from changing some features from float64 to float32 by mistake in inference notebook,if it is normal we still can not get gold medal,no regret! </p>
<p>The reason we used CatBoost is just speed; I think LightGBM could get a similar score and similar feature gains.</p>
<p>Congrats to all medal teams and new Grandmasters, Masters and Experts. Thanks to the organizers and Kaggle for such a good competition; it shows that a Kaggle competition is not just a game but can also be a useful machine learning project.</p>
<h1>Team</h1>
<ul>
<li>At first we were three separate teams: tomoyo and me, ethan and qyxs, and wrb0312. Before wrb0312 joined us we focused on feature engineering and optimization; I think we made many good features, but neural networks dominated this competition. wrb0312 did a great job even though he used a transformer for the first time. After wrb0312 joined there were only ten days left, so we focused on ensembling our models for inference and also improved the transformer a lot. Our team members are from China and Japan; it was very interesting to see us communicate in a mix of Chinese, Japanese and English. Great job everyone!</li>
</ul>
<h1>Optimization</h1>
<ul>
<li><p>For GBM features, rather than using many dictionaries to store feature data, we developed a numba-based framework to speed up the feature engineering process and online calculation. First, the data are sorted by ['user_id', 'timestamp', 'content_id'] and split into per-user arrays. Then we created features on each array via self-designed rolling or cumulative functions. This gave us a very flexible way to create and test features. On 10M rows, the feature engineering process needs only 5 minutes to finish.</p></li>
<li><p>Some examples are listed below. </p></li>
</ul>
<pre><code>from tqdm import tqdm
from numba import jit,njit
from joblib import Parallel, delayed
from tqdm import tqdm
import gc
from multiprocessing import Process, Manager,Pool
from functools import partial
from numba import prange
import numpy as np
import pandas as pd
from numba import types
from numba.typed import Dict
import functools, time
from numba.typed import List


def timeit(f):
    def wrap(*args, **kwargs):
        time1 = time.time()
        ret = f(*args, **kwargs)
        time2 = time.time()
        print('{:s} function took {:.3f} s'.format(f.__name__, np.round(time2-time1, 2)))
        return ret
    return wrap


def rolling_feat_group(train, col_used):
    a = train[col_used].values
    ind = np.lexsort((a[:,2],a[:,1],a[:,0]))
    a = a[ind]
    g = np.split(a, np.unique(a[:, 0], return_index=True)[1][1:])
    return g, ind, col_used


@jit(nopython = True, fastmath = True)
def rolling_cal(arr, step, window = 5, shift_ = 1):
    m = 2
    arr_ = np.concatenate((np.full((window, ), np.nan), arr))
    ret = np.zeros((arr.shape[0], m))
    beg = window
    for i in step:
        tmp = arr_[beg-window:beg]
        ret[beg - window:(beg - window + i), 0] = np.nanmean(tmp)
        ret[beg - window:(beg - window + i), 1] = np.nansum(tmp)
        beg += i
    return ret


@jit(nopython = True, fastmath = True)
def rolling_time_cal(arr, window = 5, shift_ = 1):
    m = 1
    arr_ = np.concatenate((np.full((window, ), np.nan), arr))
    ret = np.zeros((arr.shape[0], m))
    for i in range(0,arr.shape[0], 1):
        tmp = arr_[i:i+window+1]
        ret[i, 0] = np.nanmean(tmp)
    return ret


def rolling_cal_wrap(tmp_g, shift_period):
    m = 2
    tmp_res = []
    step = np.unique(tmp_g[:, 1], return_counts=True)[1]
    for window_size in shift_period:
        tmp = rolling_cal(tmp_g[:, 2], step, window_size)
        tmp_res.append(tmp)
    tmp_res = np.concatenate(tmp_res, axis = 1)
    return tmp_res


def rolling_time_cal_wrap(tmp_g, shift_period):
    m = 2
    tmp_res = []
    for window_size in shift_period:
        tmp = rolling_time_cal(tmp_g[:, 2], window_size)
        tmp_res.append(tmp)
    tmp_res = np.concatenate(tmp_res, axis = 1)
    return tmp_res


def rolling_feat_cal(tmp_g, name_dict, global_period):
    answer_idx = name_dict.index('answered_correctly')
    prior_idx = name_dict.index('prior_question_elapsed_time')
    item_mean_idx = name_dict.index('item_mean')
    task_set_idx = name_dict.index('task_set_distance')
    tmp_res1 = rolling_cal_wrap(tmp_g[:,[0,1, answer_idx]], global_period)
    tmp_res2 = rolling_time_cal_wrap(tmp_g[:,[0,1, prior_idx]], global_period)
    tmp_res3 = rolling_time_cal_wrap(tmp_g[:,[0,1, item_mean_idx]], global_period)
    tmp_res4 = rolling_time_cal_wrap(tmp_g[:,[0,1, task_set_idx]], global_period)
    tmp_res = np.concatenate([tmp_res1, tmp_res2, tmp_res3, tmp_res4], axis = 1)
    return tmp_res
</code></pre>
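<p>For readers who want to see how these helpers fit together, here is a hedged usage sketch. The column list is an assumption made for illustration (<code>item_mean</code> and <code>task_set_distance</code> are features derived earlier in the pipeline), and the per-group loop can be wrapped in the imported <code>Parallel</code>/<code>delayed</code> for speed:</p>
<pre><code># hypothetical wiring of the helpers above
col_used = ['user_id', 'task_container_id', 'answered_correctly',
            'prior_question_elapsed_time', 'item_mean', 'task_set_distance']
global_period = [5, 10, 30]                      # rolling window sizes

g, ind, names = rolling_feat_group(train, col_used)
res = [rolling_feat_cal(tmp_g, names, global_period) for tmp_g in g]
feats = np.concatenate(res, axis=0)              # rows follow the sorted order
feats = feats[np.argsort(ind)]                   # map back to the original row order
</code></pre>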
<ul>
<li>If anyone is interested in how to create features via the numba framework, Tomoyo published his full GBM pipeline on GitHub (<a href="https://github.com/ZiwenYeee/Riiid-numba-framework" target="_blank">https://github.com/ZiwenYeee/Riiid-numba-framework</a>)</li>
</ul>
<h1>Catboost(LB 0.807)</h1>
<h3>summary</h3>
<ul>
<li>We created 183 features for the final CatBoost model, including some original features, global statistics (item-based), cumulative and rolling statistics (user-based), TF-IDF + SVD (based on each question's user list), word2vec (based on each user's question list and wrong/correct tag lists), time deltas from many perspectives, and last-same-part group features.</li>
</ul>
<h3>gbm benchmark</h3>
<ul>
<li>We compared LightGBM, XGBoost and CatBoost; CatBoost was the best in training and inference speed and memory consumption. When training on the full data, LightGBM needed over 100 hours on my AMD Ryzen Threadripper 3970X, and XGBoost always ran out of memory, even using Dask with 4 RTX 3090s.</li>
</ul>
<h3>strong features and interesting finding by qyxs</h3>
<ul>
<li>1. Historical difficulty statistics of the questions a user answered correctly/incorrectly, a boost of almost 0.003</li>
</ul>
<pre><code>tmp_df = for_question_df.groupby('content_id')['answered_correctly'].agg([['corr_ratio', 'mean']]).reset_index()
tmp_fe = for_question_df[for_question_df['answered_correctly']==0].merge(tmp_df, on='content_id').groupby('user_id')['corr_ratio'].agg(['min', 'max', 'mean', 'std']).reset_index()
for_train = for_train.merge(tmp_fe, on='user_id', how='left')
</code></pre>
<ul>
<li>2. Focus on the user's records that connect the current part with their last run of the same part, and generate features including answer correct ratio, time diff, frequency etc., a boost of almost 0.002</li>
</ul>
<pre><code>for_question_df['rank_part'] = for_question_df.groupby(['user_id', 'part'])['timestamp'].rank(method='first')
for_question_df['rank_user'] = for_question_df.groupby(['user_id'])['timestamp'].rank(method='first')
for_question_df['rank_diff'] = for_question_df['rank_user'] - for_question_df['rank_part']
for_question_df['part_times'] = for_question_df.groupby(['user_id', 'part'])['rank_diff'].rank(method='dense')
for_question_df['rank_diff'] = for_question_df.groupby(['user_id', 'part'])['rank_diff'].rank(method='dense', ascending=False)
last_part = for_question_df[for_question_df['rank_diff']==1]
part_times = for_question_df.groupby(['user_id', 'part'])['part_times'].agg([['part_times', 'max']]).reset_index()
last_part_df = last_part.groupby(['user_id', 'part'])['answered_correctly'].agg([['last_continue_part_ratio', 'mean'], ['last_continue_part_cnt', 'count']]).reset_index()
last_part_time = last_part.groupby(['user_id', 'part'])['timestamp'].agg([['last_continue_part_time_start', 'min'], ['last_continue_part_time_end', 'max']]).reset_index()
last_part_df = last_part_df.merge(last_part_time, on=['user_id', 'part'], how='left')
last_part_df = last_part_df.merge(part_times, on=['user_id', 'part'], how='left')
last_part_df['part_time_diff'] = last_part_df['last_continue_part_time_end'] - last_part_df['last_continue_part_time_start']
last_part_df['part_time_freq'] = last_part_df['last_continue_part_cnt']/last_part_df['part_time_diff']
for_train = for_train.merge(last_part_df, on=['user_id', 'part'], how='left')
for_train['last_continue_part_time_start'] = for_train['timestamp'] - for_train['last_continue_part_time_start']
for_train['last_continue_part_time_end'] = for_train['timestamp'] - for_train['last_continue_part_time_end']
</code></pre>
<ul>
<li>3. The answered-correctly ratio of each question under different user abilities (split into 11 bins), a boost of almost 0.001</li>
</ul>
<pre><code>for_question_df['user_ability'] = for_question_df.groupby('user_id')['answered_correctly'].transform('mean').round(1)
tmp_df = for_question_df.pivot_table(index='content_id', columns='user_ability', values='answered_correctly', aggfunc='mean').reset_index()
tmp_df.columns = ['content_id'] + [f'c_mean_{i}_ratio' for i in range(11)]
for_train = for_train.merge(tmp_df, on='content_id', how='left')
</code></pre>
<ul>
<li>Some interesting points:<ul>
<li>1. Users tend to watch a lecture after giving wrong answers, so we could generate some features from this. LB was not improved, maybe because of how the lecture info appears in the next group.</li>
<li>2. Content_ids such as 0-195, 7851-7984 etc. are all in the same part and continuous with each other, so we could build a new bundle to generate features.</li></ul></li>
</ul>
<h3>strong features and interesting finding by ethan</h3>
<ul>
<li>1. user's behavior in last 1,5,…,60 minutes, 0.001 boost</li>
</ul>
<pre><code>for w in [1, 5, 10, 15, 30, 45, 60]:
    print(w)
    tmp = q_logs[q_logs['timestamp']>=(q_logs['end_time']-w*60*1000)].copy()
    group_df = tmp.groupby(['user_id'])['content_id'].agg([['user_content_nunique_in_last{}mins'.format(w), 'nunique']]).reset_index()
    train = train.merge(group_df, on=['user_id'], how='left')
    group_df = tmp.groupby(['user_id'])['part'].agg([['user_part_nunique_in_last{}mins'.format(w), 'nunique']]).reset_index()
    train = train.merge(group_df, on=['user_id'], how='left')
    group_df = tmp.groupby(['user_id'])['answered_correctly'].agg([['user_correct_raito_in_last{}mins'.format(w), 'mean']]).reset_index()
    train = train.merge(group_df, on=['user_id'], how='left')
</code></pre>
<ul>
<li>2. "User ability" statistics for each question, separately by "answered_correctly" (0/1), 0.002 boost</li>
</ul>
<pre><code>cc = q_logs.groupby(['user_id'])['answered_correctly'].agg([['corr_ratio', 'mean']]).reset_index()
gg = q_logs[['user_id', 'content_id', 'answered_correctly']].merge(cc, on=['user_id'], how='left')
group_df1 = gg[gg['answered_correctly']==1].groupby(['content_id'])['corr_ratio'].agg([['question_correct_user_ablility_min', 'min'],
['question_correct_user_ablility_max', 'max'],
['question_correct_user_ablility_mean', 'mean'],
['question_correct_user_ablility_skew', 'skew'],
['question_correct_user_ablility_med', 'median'],
['question_correct_user_ablility_std', 'std']]).reset_index()
group_df2 = gg[gg['answered_correctly']==0].groupby(['content_id'])['corr_ratio'].agg([['question_wrong_user_ablility_min','min'],
['question_wrong_user_ablility_max','max'],
['question_wrong_user_ablility_mean','mean'],
['question_wrong_user_ablility_skew','skew'],
['question_wrong_user_ablility_med','median'],
['question_wrong_user_ablility_std','std']]).reset_index()
</code></pre>
<ul>
<li>3. "Lag time" statistics for each question, separately by "answered_correctly" (0/1), i.e. the distribution of users' preparation time when answering this question correctly, about 0.001 boost</li>
</ul>
<pre><code>user_task_timestamp = q_logs[['user_id', 'task_container_id', 'timestamp']].drop_duplicates()
user_task_timestamp['lag_time'] = user_task_timestamp['timestamp'] - user_task_timestamp.groupby(['user_id'])['timestamp'].shift(1)
tmp = q_logs[['user_id', 'task_container_id', 'content_id', 'answered_correctly']].merge(user_task_timestamp.drop(['timestamp'], axis=1), on=['user_id', 'task_container_id'], how='left')
group_df = tmp[tmp['answered_correctly']==1].groupby(['content_id'])['lag_time'].agg([['c_lag_time_mean', 'mean'],
['c_lag_time_std', 'std'],
['c_lag_time_max', 'max'],
['c_lag_time_min', 'min'],
['c_lag_time_median', 'median']]).reset_index()
train = train.merge(group_df, on=['content_id'], how='left')
group_df = tmp[tmp['answered_correctly']==0].groupby(['content_id'])['lag_time'].agg([['w_lag_time_mean', 'mean'],
['w_lag_time_std', 'std'],
['w_lag_time_max', 'max'],
['w_lag_time_min', 'min'],
['w_lag_time_median', 'median']]).reset_index()
train = train.merge(group_df, on=['content_id'], how='left')
</code></pre>
<h3>feature list</h3>
<pre><code>['content_id',
'prior_question_elapsed_time',
'prior_question_had_explanation',
'correct_answer',
'user_count',
'user_sum',
'user_mean',
'item_count',
'item_sum',
'item_mean',
'answer_ratio_0',
'answer_ratio_1',
'answer_ratio_2',
'bundle_id',
'part',
'le_tag',
'question_correct_user_ablility_mean',
'question_correct_user_ablility_median',
'question_wrong_user_ablility_mean',
'question_wrong_user_ablility_median',
'word2vec_0',
'word2vec_1',
'word2vec_2',
'word2vec_3',
'word2vec_4',
'svd_0',
'svd_1',
'svd_2',
'svd_3',
'svd_4',
'tags_w2v_correct_mean_0',
'tags_w2v_wrong_mean_0',
'tags_w2v_correct_mean_1',
'tags_w2v_wrong_mean_1',
'tags_w2v_correct_mean_2',
'tags_w2v_wrong_mean_2',
'tags_w2v_correct_mean_3',
'tags_w2v_wrong_mean_3',
'tags_w2v_correct_mean_4',
'tags_w2v_wrong_mean_4',
'real_time_wrong_mean',
'real_time_wrong_median',
'real_time_correct_mean',
'real_time_correct_median',
'task_set_distance_wrong_mean',
'task_set_distance_wrong_median',
'task_set_distance_correct_mean',
'task_set_distance_correct_median',
'mean_0_ratio',
'mean_1_ratio',
'mean_3_ratio',
'mean_4_ratio',
'mean_5_ratio',
'mean_6_ratio',
'mean_7_ratio',
'mean_8_ratio',
'mean_9_ratio',
'mean_10_ratio',
'user_d1',
'user_d2',
'task_set_distance',
'user_diff_mean',
'user_diff_std',
'user_diff_min',
'user_diff_max',
'task_set_item_mean',
'task_set_item_min',
'task_set_item_max',
'task_set_distance2',
'task_distance_shift',
'task_set_distance_diff',
'task_distance_diff_shift',
'container_mean_1',
'container_mean_5',
'container_std_5',
'container_mean_10',
'container_std_10',
'container_mean_20',
'container_std_20',
'container_mean_30',
'container_std_30',
'container_mean_40',
'container_std_40',
'prior_question_elapsed_time_mean_1',
'prior_question_elapsed_time_mean_5',
'prior_question_elapsed_time_mean_10',
'prior_question_elapsed_time_mean_20',
'prior_question_elapsed_time_mean_30',
'prior_question_elapsed_time_mean_40',
'item_mean_mean_30',
'item_mean_mean_40',
'task_set_distance_mean_1',
'task_set_distance_mean_5',
'task_set_distance_mean_10',
'task_set_distance_mean_20',
'task_set_distance_mean_30',
'begin_time_diff',
'end_time_diff',
'part_time_diff_mean',
'part_session_mean',
'part_session_sum',
'part_session_count',
'full_group0_item_mean_mean',
'full_group0_item_mean_median',
'full_group0_task_set_distance_median',
'full_group0_timestamp_mean',
'full_group0_timestamp_median',
'full_group1_item_mean_mean',
'full_group1_item_mean_median',
'full_group1_task_set_distance_median',
'full_group1_timestamp_median',
'part_sum',
'part_count',
'part_mean',
'part_sum_global_ratio',
'part_sum_1',
'part_sum_5',
'part_mean_5',
'part_sum_10',
'part_mean_10',
'cum_answer0_mean_item_mean',
'cum_answer0_median_item_mean',
'cum_answer0_median_task_set_distance',
'cum_answer1_mean_item_mean',
'cum_answer1_median_item_mean',
'cum_answer1_mean_task_set_distance',
'cum_answer1_median_task_set_distance',
'cum_answer0_time_diff',
'cum_answer1_time_diff',
'global_task_set_shift1',
'global_task_set_shift2',
'global_task_set_shift4',
'global_task_set_shift5',
'cum_answer0_mean_wrong_time_diff',
'cum_answer0_median_wrong_time_diff',
'cum_answer1_mean_right_time_diff',
'content_correct_mean',
'content_correct_sum',
'content_correct_count',
'hard_answer0_time',
'hard_answer1_time',
'full_bundle_item_mean_mean',
'full_bundle_item_mean_median',
'full_bundle_task_set_distance_mean',
'full_bundle_task_set_distance_median',
'full_bundle_timestamp_mean',
'full_bundle_timestamp_median',
'bundle_sum',
'bundle_mean',
'bundle_count',
'user_trend_mean',
'user_trend_median',
'user_trend_roll_user_ans_sum',
'user_trend_roll_user_ans_mean',
'user_trend_roll_user_ans_count',
'user_trend_roll_item_ans_mean',
'user_trend_roll_item_ans_count',
'div_ratio1',
'div_ratio2',
'div_ratio3',
'new_Feat0',
'new_Feat1',
'new_Feat2',
'new_Feat3',
'part_time_wrong_div',
'part_time_right_div',
'diff_lag_median_div',
'diff_item_median_div',
'diff_time_median_div',
'diff_item_mean_div',
'diff_task_set_mean_div',
'diff_timestamp_mean_div',
'last_20_frequent_answer',
'last_20_frequent_answer_count',
'last_20_frequent_answer_mean',
'last_20_frequent_answer_sum',
'last_user_same_answer_tf',
'last_item_same_answer_tf',
'last_right_time_diff',
'last_wrong_time_diff',
'last_5_part_time_div',
'last_10_part_time_div',
'last_20_part_time_div']
</code></pre>
<h1>Transformer(LB 0.808)</h1>
<ul>
<li><p>Used questions only. Lectures didn't improve our validation score.</p></li>
<li><p>Training window size 800 (the bigger, the better); inference window size 300 (the bigger, the better, but time-consuming).</p></li>
<li><p>Optimizer : Adam with lr = 8e-4, beta1 = 0.9, beta2 = 0.999 with warmup steps to 4000.</p>
<ul>
<li>Without warmup steps, didn't converge.</li></ul></li>
<li><p>Number of layers = 4, dimension of the model = 256, dimension of the FFN = 2048.</p></li>
<li><p>Batch size=80</p>
<ul>
<li>With smaller size, didn't converge.</li></ul></li>
<li><p>Dropout = 0</p></li>
<li><p>Postion encoding : Axial Positional Embedding</p>
<ul>
<li><a href="https://arxiv.org/abs/1912.12180" target="_blank">https://arxiv.org/abs/1912.12180</a></li>
<li><a href="https://github.com/lucidrains/axial-positional-embedding" target="_blank">https://github.com/lucidrains/axial-positional-embedding</a></li>
<li>This encoding improved the score significantly (a from-scratch sketch of the idea follows after this list).</li></ul></li>
<li><p>Augmentation : Mixup-Transformer</p>
<ul>
<li><a href="https://arxiv.org/abs/2010.02394" target="_blank">https://arxiv.org/abs/2010.02394</a></li></ul></li>
<li><p>Inputs</p>
<ul>
<li>question_id</li>
<li>part</li>
<li>prior_question_elapsed_time / 1000</li>
<li>lagtime<ul>
<li>log1p((timestamp_t - timestamp_(t-1)) /1000 / 60)</li>
<li>This feature improved the score significantly</li></ul></li>
<li>answered_correctly</li>
<li>GBT feats (imortance top N)</li></ul></li>
<li><p>model image<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F715257%2F6743aa3ef1403f4084dd4fa3d8372702%2FScreen%20Shot%202021-01-08%20at%2011.34.11.png?generation=1610073332748004&alt=media" alt=""></p></li>
</ul>
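<p>The axial positional embedding bullet above links to the paper and a reference implementation; as a sketch of the idea itself (not the linked package's API), the position index can be factorized into a (row, column) pair so that two much smaller embedding tables, summed together, cover the whole window:</p>
<pre><code>import torch
import torch.nn as nn

class AxialPositionalEncoding(nn.Module):
    def __init__(self, d_model, rows, cols):
        super().__init__()
        self.cols = cols
        self.row_emb = nn.Parameter(torch.randn(rows, d_model) * 0.02)
        self.col_emb = nn.Parameter(torch.randn(cols, d_model) * 0.02)

    def forward(self, x):  # x: (batch, seq_len, d_model); seq_len must not exceed rows * cols
        pos = torch.arange(x.size(1), device=x.device)
        return x + self.row_emb[pos // self.cols] + self.col_emb[pos % self.cols]

# e.g. a training window of up to 800 interactions factorized as 20 x 40
pos_enc = AxialPositionalEncoding(d_model=256, rows=20, cols=40)
</code></pre>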
<h1>Ensemble(LB 0.812)</h1>
<ul>
<li>Finally we used one CatBoost and two transformers for the ensemble due to the inference time limitation. Unfortunately our inference notebook had a bug, although we had improved our model by about 0.0006 on the last day.</li>
</ul> |
Riiid Answer Correctness Prediction | 20th solution, transformer encoder only model (SERT) with code and notebook | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://www.kaggle.com/neilgibbons" target="_blank">@neilgibbons</a> no problem! Glad it was of help.</p>
<p>Thanks. Mostly a 2x3090 machine with a 24 core threadripper</p>
<p>You're welcome! Glad it is useful </p>
<p>That's a good question <a href="https://www.kaggle.com/abdurrehman245" target="_blank">@abdurrehman245</a> . Because of the autoregressive mask, the items in the sequence can only see items that come before. For instance, the 1st question can only see itself, and the 2nd can only see itself and the 1st. Therefore, the early questions have less context compared to the later questions. If I let all positions have equal weight, the model will overfit to earlier positions with little context. Another consideration is that during inference, I'm always using the last position to make predictions, so it doesnt make sense to give equal weighting to everything. Using this loss weight gave me 0.002 boost.</p>
<p>As for question_cmnts.csv, here's the notebook <a href="https://www.kaggle.com/spacelx/2020-r3id-clustering-question-tags" target="_blank">https://www.kaggle.com/spacelx/2020-r3id-clustering-question-tags</a></p>
<p>I always prefer less complexity. To me I thought the decoder was not necessary, so I got rid of it. Most of my changes were also just to reduce complexity (e.g. learnable pos encoding -> fixed)</p>
<p>Thx and nice suggestion. I will try to add that information</p>
<p>Thanks gold was within reach but alas, mistakes were made</p>
<p>Thanks to the organizers for hosting a comp with very little shakeup. This is always nice to see. I wanna thank <a href="https://www.kaggle.com/wangsg" target="_blank">@wangsg</a> and <a href="https://www.kaggle.com/leadbest" target="_blank">@leadbest</a> for the starter kernels. Without those I would not have known what to do in this difficult comp.</p>
<h1>Architecture</h1>
<p>I use the transformer-encoder-only SERT (Single-directional Encoder Representation from Transformers); to make predictions there is just one linear layer after the last encoder layer. This is probably a mistake.</p>
<h1>Embeddings</h1>
<ol>
<li>Question_id</li>
<li>Prior question correctness</li>
<li>Timestamp difference between bundles</li>
<li>Prior question elapsed time</li>
<li>Prior question explanation</li>
<li>Tag cluster thanks to <a href="https://www.kaggle.com/spacelx" target="_blank">@spacelx</a> </li>
<li>Tag vector</li>
<li>Fixed pos encoding, same as in Attention is All You Need</li>
</ol>
<h1>Key modifications</h1>
<ol>
<li>encoder only, this is kind of stupid and a mistake. I think this cost me a few places</li>
<li>fixed pos encoding, which allows retraining the model with longer sequences</li>
<li>layer norm and dropout after embedding layers</li>
<li>Loss weight favoring later positions, np.arange(0,1,1/seq_length)*loss</li>
</ol>
<h1>Mistakes I couldn't resolve</h1>
<ol>
<li>Intra-bundle leakage: I made a nice task mask implementation, but it only made my score worse, probably because of the lack of a decoder. I ended up using just an autoregressive mask.</li>
<li>The first few positions during inference and training may have a wrong timestamp difference since I simply do t[1:]-t[:-1]</li>
</ol>
<h1>Code and submission kernel</h1>
<p><a href="https://github.com/Shujun-He/Riiid-Answer-Correctness-Prediction-20th-solution" target="_blank">https://github.com/Shujun-He/Riiid-Answer-Correctness-Prediction-20th-solution</a></p>
<p><a href="https://www.kaggle.com/shujun717/fork-of-tag-encoding-with-loss-weight?scriptVersionId=51272583" target="_blank">https://www.kaggle.com/shujun717/fork-of-tag-encoding-with-loss-weight?scriptVersionId=51272583</a></p>
<h2>Feel free to ask questions. This is a very brief write-up</h2>
<p>Yes, so there are 188 tags in total, so for each question I use a tag vector of size 188 of 0's and 1's. For instance, if a question has tags 1 and 140, the 1st and 140th elements would be 1's and the rest would be 0's, and then a linear layer transforms it to the embedding dim</p>
Riiid Answer Correctness Prediction | 21st Place Solution - Ensemble of 3 Saint++ Models | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you!</p>
<p>Yes the ensemble was a simple average of all models, all models were all too close individually to weight one over another.</p>
<p>The problem with Noam was that because it's a linear increase that needs to be parametrised (you have to decide number of warmup steps) - the network really learns nothing at the beginning. Then the exponential decay hits immediately after reaching your target lr. While this might be good to warmup very deep Transformers, we found that using a simple lr and starting directly at the target lr was much faster. </p>
<p>I don't remember the exact difference, but probably something like 1.5x faster to <strong>not</strong> use Noam and no loss/auc difference. It would obviously depend on how you parametrize the Noam scheme. The default and advertised Noam scheme is 4k warm up steps but that should also depend on your batch size as well.</p>
<p>Thanks for noticing! The parenthesis was wrongly included in the url - fixed it! </p>
<p><code>val_2500000_lec_True.npy</code> and <code>val_2500000_lec_False.npy</code> are just numpy arrays that describe the generated val set - its just an array of length 2.5mil of row ids that belong to val. </p>
<p>You don't actually need them since it will get generated randomly when you run the code.</p>
<p><code>user_weights.npy</code> is not used anymore.</p>
<p>If you want to reproduce, it will be a little trickier than just running the notebook. You'll have to use the python <code>main.py</code> script. The notebook is unfortunately out of date.</p>
<p><code>padding_idx</code> is used because we pass batches of variable lengths sequences to the network, so for instance, if we have two sequences (one length 6 and one length 3):</p>
<p>[[1, 2, 3, 4, 5, 6],<br>
[1, 2, 3, 0, 0, 0]]</p>
<p>Then when we pass them through embedding we don't care about the embedding of the padding token so we can just set the output vector to 0 directly.</p>
<p><a href="https://www.kaggle.com/abdurrehman245" target="_blank">@abdurrehman245</a> you can find the generation of some of the <code>.npy</code> files in the <code>preprocessing.py</code> file. Some are also included in the repo itself in the <code>data</code> folder. </p>
<p>Apologies if some are missing, if you let me know in particular which one you are referring to I can rectify it. </p>
<p>There is one I know is missing: <code>questions_lectures_pct.npy</code>, which is a content based feature one, but it is not actually needed to reproduce our results since we did not use it in the end.</p>
<p>Finetuning was done by simply resuming our best checkpoint (trained on 90% of the data) with a window size double what it had seen (100 -> 200), on the remaining 10% of the training set. This made training faster (since we could train on smaller windows) and models with bigger windows tended to be a little bit more robust, but it did not greatly improve our AUC. </p>
<p>Thanks! </p>
<p>Well the problem is that I never got any of the alternative attentions to even converge. I used public implementations of them so I'm quite certain it wasn't from a bug. I spent about a week on testing a lot of them (I didn't try the Performer architecture). I think they are much harder to train for some reason on this dataset, this could be due to me not being patient enough in training or having some wrong hyperparameter/initialization scheme. </p>
<p>Hi all!<br>
After seeing everyone share their solution we have decided to also share ours in the same spirit.</p>
<p>We ran a modified SAINT+ that <strong>included lectures, tags and a couple of aggregate features.</strong></p>
<p>All our code is available on github ( <a href="https://github.com/gautierdag/riiid" target="_blank">https://github.com/gautierdag/riiid</a> ). We used Pytorch/Pytorch Lightning and Hydra.</p>
<p>Our single model public LB was 0.808, and we were able to push that to LB 0.810 (Final LB 0.813) through ensembling.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F583548%2Fa19776338ca1ba57bd1260bf8e0ec678%2Friiid_model.png?generation=1610188697948445&alt=media" alt=""></p>
<h2>Modifications to original SAINT:</h2>
<ul>
<li><p><strong>Lectures</strong>: lectures were included and had a special trainable embedding vector instead of the answer embedding. The loss was not masked for lecture steps. We tested both with and without, and the difference wasn't crazy. The big benefit they offered was that they made everything much simpler to handle since we did not need to filter anything.</p></li>
<li><p><strong>Tags</strong>: Since questions had a maximum of six tags, we passed sequences of length 6 to a tag embedding layer and summed the output. Therefore our tag embeddings were learned for each tag but then summed across to obtain an embedding for all the tags in a question/lecture. The padding simply returned a 0 vector (see the short sketch after this list).</p></li>
<li><p><strong>Agg Feats</strong>: This is what gave us a healthy boost of 0.002 on public LB. By including 12 aggregate features in the decoder we were able to use some of our learnings in previous LGBM implementations. </p></li>
</ul>
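<p>A short sketch of the tag handling described above (sizes are illustrative): tag id 0 is reserved for padding, so questions or lectures with fewer than six tags contribute a zero vector for the missing slots before the sum.</p>
<pre><code>import torch
import torch.nn as nn

n_tags, d_model = 188, 64
tag_emb = nn.Embedding(n_tags + 1, d_model, padding_idx=0)   # index 0 -> zero vector

# tags: (batch, seq_len, 6) ints, 0 where an item has fewer than 6 tags
tags = torch.randint(0, n_tags + 1, (2, 100, 6))
x_tags = tag_emb(tags).sum(dim=2)                            # (batch, seq_len, d_model)
</code></pre>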
<h4>Agg feats</h4>
<p>We used the following aggregate features:</p>
<ul>
<li>attempts: number of previous attempts on current question (clamped to 5 and normalized to 1)</li>
<li>mean_content_mean: average of all the average question accuracy (population wise) seen until this step by the user</li>
<li>mean_user_mean: average of user's accuracy until this step </li>
<li>parts_mean: seven dimensional (the average of the user's accuracy on each part)</li>
<li>m_user_session_mean: the average accuracy in the current user's session </li>
<li>session_number: the current session number (how many sessions up till now)</li>
</ul>
<p>Note: session is just a way of dividing up the questions if a difference in time between questions is greater than two hours.</p>
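<p>A minimal pandas sketch of that session split, assuming <code>df</code> is the interaction log sorted by user and timestamp (timestamps are in milliseconds):</p>
<pre><code>TWO_HOURS_MS = 2 * 60 * 60 * 1000
gap = df.groupby("user_id")["timestamp"].diff().gt(TWO_HOURS_MS)
df["session_number"] = gap.groupby(df["user_id"]).cumsum()   # 0, 1, 2, ... per user
</code></pre>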
<h4>Float vs Categorical</h4>
<p>We switched to use a float representation of answers in the last few days. This seemed to have little effect on our LB auc unfortunately, but had an effect locally. The idea was that since we auto-regress on answers we would propagate the uncertainty that the model displayed into future predictions.</p>
<p>All aggs and time features are floats that are ran through a linear layer (without bias). They were all normalized to either [-1,1] or [0, 1].</p>
<p>All other embeddings are categorical.</p>
<h2>Training:</h2>
<p>What made a <strong>big</strong> difference early on was our sampling methodology. Unlike most approaches in public kernels, we did not take a user-centric approach to sample our training examples. Instead, we simply sampled based on row_id and would load in the previous window_size history for every row.</p>
<p>So for instance if we sampled row 55. Then we would find the user id that row 55 corresponds to, and load in their history up until that row. The size of the window would then be the min(window_size, len(history_until_row_55)).</p>
<p>We used Adam with 0.001 learning rate, early stopping and lr decay based on val_auc during training.</p>
<h2>Validation</h2>
<p>For validation, we kept a holdout set of 2.5mil (like the test set) and generated randomly with a certain proportion guaranteed of new users. We used only 250,000 rows to actually validate on during training.</p>
<p>For every row in validation, we would pick a random number of inference steps between 1 and 10. We would then infer over these steps, not allowing the model to see the true answers and having to use its own.</p>
<h2>Inference</h2>
<p>During inference, when predicting multiple questions for a single user, we fed back the previous prediction of our model as answers in the model. This auto-regression was helpful in propagating uncertainty and helped the auc.</p>
<p>I saw a writeup that said that this was not possible to do because it constrains you to batch size = 1. That is wrong, you can actually do this for a whole batch in parallel and it's a little tricky with indexes but it is doable. You simply have to keep track of each sequence lengths and how many steps you are predicting for each.</p>
<p>Unfortunately since some of our aggs are also based on user answers, these do not get updated until the next batch because they are calculated on CPU and not updated in that inference loop.</p>
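<p>To illustrate the index bookkeeping, here is a schematic and heavily simplified sketch of batched auto-regressive inference. The model interface (padded question/answer tensors in, per-position probabilities out) and the 0.5 placeholder for an unknown answer are assumptions made for this sketch:</p>
<pre><code>import torch

@torch.no_grad()
def autoregress(model, hist_q, hist_a, lengths, new_q, unknown=0.5):
    """hist_q: (B, L) long, hist_a: (B, L) float, padded with room for the new steps.
    lengths[i]: current history length of user i; new_q[i]: questions to predict for user i."""
    preds = [[] for _ in new_q]
    for t in range(max(len(q) for q in new_q)):
        active = [i for i, q in enumerate(new_q) if len(q) > t]
        for i in active:                        # append the next question of each active user
            hist_q[i, lengths[i]] = new_q[i][t]
            hist_a[i, lengths[i]] = unknown     # true answer not known yet
        p = model(hist_q, hist_a)               # (B, L) probabilities
        for i in active:
            prob = p[i, lengths[i]].item()
            preds[i].append(prob)
            hist_a[i, lengths[i]] = prob        # feed the prediction back
            lengths[i] += 1
    return preds
</code></pre>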
<h2>Parameters</h2>
<p>We ensemble three models.</p>
<p>First model:</p>
<ul>
<li>64 emb_dim</li>
<li>4/6 encoder/decoder layers</li>
<li>256 feed forward in Transformer</li>
<li>4 Heads</li>
<li>100 window size</li>
</ul>
<p>Second model:</p>
<ul>
<li>64 emb_dim</li>
<li>4/6 encoder/decoder layers</li>
<li>256 feed forward in Transformer</li>
<li>4 Heads</li>
<li>200 window size (expanded by finetuning on 200)</li>
</ul>
<p>Third model:</p>
<ul>
<li>256 emb_dim</li>
<li>2/2 encoder/decoder layers</li>
<li>512 feed forward in Transformer</li>
<li>4 Heads</li>
<li>256 window size</li>
</ul>
<h2>Hardware</h2>
<p>We rented three machines but we could have probably gotten farther with using a larger single machine:</p>
<ul>
<li>3 X 1 Tesla V100(16gb VRAM)</li>
</ul>
<h2>Other things we tried</h2>
<ul>
<li>Noam lr schedule (super slow to converge)</li>
<li>Linear attention / Reformer / Linformer (spent way too much time on this)</li>
<li>Local Attention</li>
<li>Additional Aggs in output layer and a myriad of other aggs</li>
<li>Concatenating Embeds instead of summing</li>
<li>A lot of different configurations of num_layers / dims / .. etc</li>
<li>Custom attention based on bundle </li>
<li>Ensembling with LGBM</li>
<li>Predicting time taken to answer question as well as answer in Loss</li>
<li>K Beam decoding (predicting in parallel K possible paths of answers and taking the one that maximized joint probability of sequence)</li>
<li>Running average of agg features (different windows or averaging over time)</li>
<li>Causal 1D convolutions before the Transformer</li>
<li>Increasing window size during training</li>
<li>…</li>
</ul>
<p>Finally, thanks to <a href="https://www.kaggle.com/eeeedev" target="_blank">@eeeedev</a> for the adventure!</p> |
Riiid Answer Correctness Prediction | 22nd Place Solution: Just Encoder Modules | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks mate. I've added the link in post now.</p>
<p><a href="https://www.kaggle.com/abdurrafae/22nd-solution-just-encoder-blocks" target="_blank">https://www.kaggle.com/abdurrafae/22nd-solution-just-encoder-blocks</a></p>
<p>Thanks and likewise.<br>
I'll add the link to the notebook in this post in 10-15 mins. Just cleaning it up a bit. Haven't added comments in it though</p>
<p>Thanks for all the discussion and support from fellow Kagglers. I've learned a lot in this competition and tried a lot of things. Here is a simple list of the things that I believe worked well. I'll try to cover the whole model but may make updates to clarify and add explanations later. </p>
<p><strong><a href="https://www.kaggle.com/abdurrafae/22nd-solution-just-encoder-blocks" target="_blank">Link to notebook </a></strong></p>
<p><strong>Architecture</strong><br>
My architecture was just 3 Encoder modules stacked together using 512 as d_model and 4 Encoder layers in each block.</p>
<p>My Questions, Interactions and Response sequences were all padded on the left by a unique vector made using historical stats from the user. This ensured that all 3 sequences were aligned on the sequence number</p>
<ol>
<li><p>Questions Only Block (Q-Block):<br>
This one was just self attention over the questions.</p></li>
<li><p>Interactions Only Block (I-Block):<br>
This one was just self attention over the interactions.</p></li>
<li><p>Questions, Responses, Interaction Block (QRI Block):<br>
I used output from Q-Block as query, I-Block as keys and Responses/(QRI Block) as values. The residual connection after attention module was made using value vector (instead of the default query vector). Self Attention Mask was used in first 3 layers and for the last layer I used a custom mask that only attended to prior interactions and didn't have the residual connection.</p></li>
</ol>
<p>Concatenated the output of Q-Block and QRI-Block, feed it into 3 Linear layers.</p>
<p><strong>Data Usage</strong><br>
I split the longer sequences into windows of 256 length and 128 overlap. (0-256, 128-384, 256-512).<br>
I saved them in tf.records and during inference I took a uniform random 128 window from each 256 length sequences. This ensures any event would be placed at 0-127 location in my training sequence with equal probability (excluding the last 128 events in the last window of a user). For users having smaller sequence length I padded with 0 to make the length equal 128 and used those 128 each time.</p>
<p><strong>Compute Resources</strong><br>
I initially only used Kaggle GPUs to train the model. Started setting up a TPU pipeline in the last 3 weeks of the competition and in the end was able to use the TPUs as well. I think I've exhausted my complete TPU quota for last 3 weeks.</p>
<p>I'd like to thank <a href="https://www.kaggle.com/yihdarshieh" target="_blank">@yihdarshieh</a> (<strong>TPU Guru</strong>) for this <a href="https://www.kaggle.com/yihdarshieh/tpu-track-knowledge-states-of-1m-students" target="_blank">TPU notebook</a>. It helped a lot in setting up the TPU pipeline.</p>
<p><strong>Sequential Encodings</strong><br>
I used 2 encodings that captured the sequence of events. I subtracted the first value in each sequence from the rest to ensure that all encodings started with a 0.</p>
<p><strong>Temporal Encoding</strong><br>
For this I converted the timestamp into minutes and used the same scheme as that of positional encoding with power (60 x 24 x 365 = 1 year). I believe this enabled the model to know how far apart in time were 2 questions/interactions. I believe this to be a better implementation of the lag time variable used in SAINT+ as it captures the difference in time between all of the events simultaneously rather than just between 2 adjacent events.</p>
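<p>A sketch of that temporal encoding: standard sinusoidal encoding applied to the timestamp converted to minutes, with the base period set to one year (60 x 24 x 365 minutes), and the first timestamp of the sequence subtracted so every sequence starts at zero.</p>
<pre><code>import numpy as np

def temporal_encoding(timestamps_ms, d_model, max_period=60 * 24 * 365):
    minutes = np.asarray(timestamps_ms, dtype=np.float64) / 1000.0 / 60.0
    minutes = minutes - minutes[0]                     # each sequence starts at 0
    i = np.arange(d_model // 2)
    inv_freq = 1.0 / (max_period ** (2 * i / d_model))
    angles = minutes[:, None] * inv_freq[None, :]      # (seq_len, d_model // 2)
    enc = np.zeros((len(minutes), d_model))
    enc[:, 0::2] = np.sin(angles)
    enc[:, 1::2] = np.cos(angles)
    return enc
</code></pre>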
<p><strong>Positional Encoding</strong><br>
For this I used the task_container_id as the position and a power of 10,000.</p>
<p><strong>Proxy for Knowledge</strong><br>
Instead of just using 0/1 from responses I added an new feature which is a heuristic for knowledge in case the response was incorrect. If a user selects option 2 when 1 is the correct one. I would calculate (total number of times 2 was selected for that questions)/(total number of times the question has been answered incorrectly). This ratio was maintained and updated during inference as well. I checked the correlation of mean of proxy_knowledge for incorrect answers and overall user accuracy, it was around 0.4.</p>
<p><strong>Question difficulty</strong><br>
Simple feature that is calculated as (total number of times the question has been answered correctly)/(total number of times the question has been answered). This ratio was maintained and updated during inference as well.</p>
<p><strong>Custom Masks</strong><br>
I used self attention mask that didn't attend on events of the same bundle.<br>
For the last layer of QRI block I removed the mask entries along the diagonal to ensure it only attends to prior values.</p>
<p><strong>Starting Vector</strong><br>
For the starting vector I used counts of the time each tag was seen/answered correctly in questions and lectures. Just used a dense layer to encode this into the first vector of the sequence. </p>
<p><strong>Other Details</strong></p>
<ul>
<li>Used Noam LR provided on the Transformers page on TF documentation. </li>
<li>Batch size 1024 (Trained on TPUs - 1 Epoch took 4-6 mins)</li>
<li>For Validation I just separated around 3.4% of users initially and used their sequences.</li>
<li>Model converged around 20-30 Epochs. (4 Hours training time at max)</li>
<li>Sequence length of 128 was used.</li>
</ul>
<p><strong>Question Embeddings</strong><br>
I added up embeddings for content id, part id, (a weighted average of) tags ids and type_of (from lectures), then concatenated it with both Sequential embeddings and then into a dense layer with d_model dimensions.</p>
<p><strong>Response Embeddings</strong><br>
I concatenated answered correctly, proxy knowledge, question difficulty, time elapsed and question had explanation and feed into a dense layer with d_model dimensions.</p>
<p><strong>Interaction Embeddings</strong><br>
I just took the first d_model//2 units from both Questions/Response embeddings and concatenated them for this.</p>
<p>It made more intuitive sense to only use encoders for this since we are only enhancing the vector space of each sequence (questions/interactions/responses). Plus, simply put, my last encoder block is saying "having this question (query), find similarity with all the relevant past interactions (keys) and give a weighted average of the previous responses (values)". </p>
<p>The SAINT architecture didn't make intuitive sense to me at all. </p> |
Riiid Answer Correctness Prediction | 24th Place Solution saintpp + lgb private0.810 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you <a href="https://www.kaggle.com/tangshuyun" target="_blank">@tangshuyun</a>, sorry for the late reply.</p>
<p>Firstly, thanks to Kaggle for holding such a good competition with almost no shake-up. Congrats to the top teams, and many thanks to my teammates <a href="https://www.kaggle.com/jaideepvalani" target="_blank">@jaideepvalani</a> <a href="https://www.kaggle.com/ekffar" target="_blank">@ekffar</a>. Our team finished with a public score of 0.808, private 0.810, and a final rank of 24. We missed gold, but that's OK; now I want to share our method.</p>
<p>Because of jet lag, I have not read the discussions in detail; I just saw that my teammate <a href="https://www.kaggle.com/jaideepvalani" target="_blank">@jaideepvalani</a> has already shared our <a href="https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/209625" target="_blank">solution</a>. As a complement rather than a repeat, I will expand on the details based on <a href="https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/209625" target="_blank">jaideepvalani's solution</a>.</p>
<h1><strong>Architecture</strong></h1>
<pre><code>SaintPlusPlus, lgb
</code></pre>
<h1>Embeddings</h1>
<h2>Exercise:</h2>
<pre><code>Question_id
Part
Community id (tried, but no boosting, maybe need tuning)
Task container id (tried, but no boosting, maybe need tuning)
</code></pre>
<h2>Interaction:</h2>
<pre><code>lag time (same container id, keep same with first one) (discrete embedding)
prior question elapsed time(continuous embedding)
prior had explain (discrete embedding)
Prior lecture(discrete embedding)
</code></pre>
<h2>Response:</h2>
<pre><code>Prior question correctness(discrete embedding)
</code></pre>
<h1>Data sample optimization</h1>
<p>Data sampling is important in our experiment<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F347724%2F8884cd3f5fca04b3232adb038b131f0f%2Friiid.png?generation=1610088827710995&alt=media" alt=""></p>
<h1>optimization</h1>
<h2>SAINTPP</h2>
<p>1 Skip connections: we added skip connections inside the stacked encoder layers and inside the stacked decoder layers; the original implementation I used from GitHub did not have them (a sketch follows below).<br>
2 Added dropout for the encoder, decoder and FFN.<br>
3 Tried a TCN-style causal convolution after the embeddings; it gave no boost, but now I think it should be tuned and would work.<br>
4 A large sequence length for SAINTPP gave our team a boost; I think a too-short sequence may make the transformer-like model get stuck in a poor local minimum.</p>
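<p>A minimal sketch of what the inner skip connections in point 1 could look like (illustrative only; the GitHub SAINT++ implementation we modified is not reproduced here):</p>
<pre><code>import torch.nn as nn

class EncoderStackWithSkips(nn.Module):
    """Stack of transformer encoder layers with an extra residual connection
    around each layer, on top of the residuals already inside each layer."""
    def __init__(self, d_model=224, nhead=8, num_layers=2, dropout=0.1):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.TransformerEncoderLayer(d_model, nhead, dropout=dropout)
            for _ in range(num_layers)
        ])

    def forward(self, x, mask=None):
        for layer in self.layers:
            x = x + layer(x, src_mask=mask)  # inner skip connection between layers
        return x
</code></pre>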
<h2>LGB</h2>
<p>updating</p>
<h1>Conclusion</h1>
<p>Our best SAINTPP (seq len 320, emb dim 224, 2 x encoder + 2 x decoder) got CV 0.8016 / LB 0.806, and LGB got close to 0.796. The final submission is a merge of LGB (LB 0.796) + SAINTPP cv1 (CV 0.8016) + SAINTPP cv3 (CV 0.7992) = public LB 0.808 and private 0.810.<br>
Things we should have done:<br>
1 Continue optimizing our net with causal convolutions (transformer-like attention is global, so some local correlations may be ignored) and GRU, but we ran out of time.<br>
2 Keep trying bigger nets.</p>
<p>Thanks again to my teammates; they gave me a lot of help and I learned a lot from them. Going for gold in the next competition, keep moving.</p>
Riiid Answer Correctness Prediction | 28th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks for the question! <br>
The cost of creating the graph is small, but calculating node metrics (networkX implementation, single thread CPU) and node embeddings (gensim-based implementation, multi-threads CPU) took several dozen hours (I don't remember exactly) in the implementation I used. I didn't care so much about the cost as it was only needed for the training phase.</p>
<p>I want to thank the hosts for organizing a meaningful competition that reflected the critical constraints of real-world difficulties (time series API, limited RAM, and inference time). Also, I thank my teammate NARI, who worked hard with me until the end of the competition.</p>
<h1>Overview</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1360119%2F5cfa0a830e6e45584a9088a794720ef7%2FOverview.jpg?generation=1610195752423547&alt=media" alt=""></p>
<p>Our final solution is an ensemble of Catboost and Transformer. Although our Transformer could not reach a CV as high as the top teams' in time, it contributed a lot to boosting the stacking model score. Our solution's validation strategy and feature engineering pipeline were heavily influenced by tito's notebooks (<a href="https://www.kaggle.com/its7171/cv-strategy" target="_blank">validation strategy</a>, <a href="https://www.kaggle.com/its7171/lgbm-with-loop-feature-engineering" target="_blank">feature engineering</a>). We definitely would not have reached this score if they had not been shared early in the competition.</p>
<h1>Features</h1>
<p>We extracted 160 features for training Catboost. </p>
<p>The main features are as follows. The detailed contribution to the score of each feature has not been confirmed, but the feature importance by CatBoost is available <a href="https://github.com/haradai1262/kaggle_riiid-test-answer-prediction/blob/main/notebook/check_feature_importance.ipynb" target="_blank">here</a></p>
<ul>
<li><p>Content-related features</p>
<ul>
<li><p>Aggregation features</p>
<ul>
<li>mean, std for answered_correctly</li>
<li>mean for elapsed time</li>
<li>We also used aggregation in records only for the second and subsequent answers to the same question by the user.</li></ul></li>
<li><p>Word2vec features<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1360119%2Fa381717acbf8460946ef6455aef948c4%2Fword2vec%20figure.jpg?generation=1610196796564277&alt=media" alt=""></p>
<ul>
<li>We extracted Word2vec features by treating each user's answer history as a sentence, with each question_id and its associated part, tag_id, and lecture_id as words (a minimal gensim sketch is given after this feature list).</li>
<li>We also extracted variants in which users' answer histories restricted to only correct answers, or only incorrect answers, were treated as separate sentences.</li>
<li>Similarities and clusterings based on the Word2vec features were also used for other feature extraction.</li></ul></li>
<li><p>Graph features<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1360119%2Fb4fd67c9bad9534451cf405bc27d04ef%2FGraph%20figure.jpg?generation=1610196761780811&alt=media" alt=""></p>
<ul>
<li>Based on the users' answer histories, we created a directed graph, whose nodes are question_ids and whose edge weights are defined by the number of user transitions, and used the following node features as question features.<ul>
<li>node metrics (eigenvector_centrality, betweenness_centrality, trophic_levels <a href="https://networkx.org/documentation/stable/reference/algorithms/centrality.html" target="_blank">NetworkX document</a>)</li>
<li>node embedding (DeepWalk <a href="http://www.perozzi.net/publications/14_kdd_deepwalk.pdf" target="_blank">paper</a>, struc2vec <a href="https://arxiv.org/pdf/1704.03165.pdf" target="_blank">paper</a>, <a href="https://github.com/shenweichen/GraphEmbedding" target="_blank">implementation</a>)</li>
<li>SVD for adjacency matrix</li></ul></li></ul></li></ul></li>
<li><p>User history features</p>
<ul>
<li>Answer count, Correct answer rate<ul>
<li>We use the user's correct answer rate and the correct answer rate in the last N times.</li>
<li>We also used the correct answer rate in the first ten times, one day and one week.</li></ul></li>
<li>Time-related<ul>
<li>difftime (timestamp - previous_timestamp) worked especially well</li>
<li>We used some features related to difftime, including difftime with the most recent 5 step timestamp and their statistics.</li></ul></li>
<li>Question-related<ul>
<li>Whether the user had answered the target question in the past, and the number of times the user had answered it, worked well.</li>
<li>We also use correct answer rate for questions in the same cluster (clustered by k-means using similarities based on Word2vec features described above, with the number of clusters set to 100)</li>
<li>We also used some features related to the similarity between the target question and the recent questions or these parts (the similarities were based on Word2vec features described above).</li></ul></li>
<li>Part-related<ul>
<li>The user's correct answer rate in the part of the question being answered worked well.</li>
<li>We also used the count and correct answer rate of each part.</li></ul></li>
<li>Tag-related<ul>
<li>The correct answer rate for each tag of the user is kept. The statistics (max, min, mean) of the user's correct answer rate for each tag in the question being answered are used.</li></ul></li>
<li>Lecture-related<ul>
<li>The number of answers from the user's most recent lecture worked best for lecture-related features.</li></ul></li></ul></li>
</ul>
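<p>As a rough illustration of the Word2vec features above (using gensim; the toy DataFrame and all parameters below are placeholders, not our exact settings):</p>
<pre><code>import pandas as pd
from gensim.models import Word2Vec

# toy stand-in for the real train table (user_id, content_id, timestamp)
train = pd.DataFrame({
    "user_id":    [1, 1, 1, 2, 2, 2],
    "content_id": [10, 11, 12, 10, 12, 13],
    "timestamp":  [0, 100, 200, 0, 50, 90],
})

# one "sentence" per user: the chronological sequence of content ids they interacted with
sentences = (
    train.sort_values("timestamp")
         .groupby("user_id")["content_id"]
         .apply(lambda s: [str(c) for c in s])
         .tolist()
)

w2v = Word2Vec(sentences, vector_size=8, window=5, min_count=1, sg=1, workers=1)

# per-question embedding vectors to merge back onto the question features
q_vectors = {int(q): w2v.wv[q] for q in w2v.wv.index_to_key}
</code></pre>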
<h1>Model</h1>
<h3>Catboost</h3>
<ul>
<li>Data split: tito CV</li>
<li>Input<ul>
<li>160 features</li></ul></li>
<li>CV: 0.803~0.804, PublicLB: 0.801~0.802</li>
</ul>
<h3>Transformer (SAINT-like model)</h3>
<ul>
<li>Data split: tito CV</li>
<li>Input (sequence length 120)<ul>
<li>Encoder<ul>
<li>question_id, part, tag, difftime</li></ul></li>
<li>Decoder<ul>
<li>answered_correctly, elasped_time</li></ul></li></ul></li>
<li>CV: 0.797~0.798</li>
</ul>
<h1>Stacking</h1>
<ul>
<li>Model: Catboost</li>
<li>Data split: k-fold CV (k=9) of 2.5M records</li>
<li>Input<ul>
<li>Predictions of Catboost (6 models, random seeds) and Transformer (2 models, a slight variation of hyperparameters and structure)</li>
<li>Top 75 higher importance features based on feature importance of CatBoost</li></ul></li>
<li>3 Seed average</li>
<li>CV: 0.810, PublicLB: 0.807, PrivateLB: 0.809</li>
</ul>
<hr>
<p>Thank you for your attention! Our code is available in</p>
<ul>
<li><a href="https://github.com/haradai1262/kaggle_riiid-test-answer-prediction" target="_blank">https://github.com/haradai1262/kaggle_riiid-test-answer-prediction</a></li>
<li><a href="https://www.kaggle.com/haradataman/riiid-28th-solution-inference-only" target="_blank">Inference (kaggle notebook)</a></li>
</ul> |
Riiid Answer Correctness Prediction | 2nd Place Solution (LSTM-Encoded SAKT-like TransformerEncoder) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi Nimra, </p>
<ol>
<li>Please just run my notebook. I'm sorry I didn't provide the training code. </li>
<li>Yes, please check <a href="https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/218148" target="_blank">https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/218148</a></li>
<li>I remember ii is the actual submission, which is the ensemble of 6 models. iii is the submission with single model.</li>
</ol>
<p>1) I uploaded <a href="https://www.kaggle.com/mamasinkgs/submission-with-7000-limit?scriptVersionId=45368594" target="_blank">https://www.kaggle.com/mamasinkgs/submission-with-7000-limit?scriptVersionId=45368594</a>, so please check it. I tried binary search by changing the limit.<br>
2) I thought models will learn some useless patterns. But, judging from others' solutions, precise masking is not necessarily needed. I think it depends on architectures, though.</p>
<p>Thank you! I used one-hot encoding (like a multilabel problem) for tags and concatenated them with other features, then applied an MLP.</p>
<p>Thank you, I uploaded it now!</p>
<p>Yes, we can control <code>tgt_mask</code> in the decoder, but if we use the same decoder input as the SAINT+ paper (shown in this image), I think the problem cannot be fixed just by controlling <code>tgt_mask</code>, because the <strong>query</strong> of the decoder produces leakage (i.e. uses the information of samples in the same task_container_id). In my understanding, the data leakage caused by the <strong>query</strong> is unavoidable by controlling masking, while the data leakage caused by the <strong>key/value</strong> is avoidable by masking.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F762938%2Fb29c0c24c02cb5b71707c2d378bc5a13%2Finput_of_saint.jpg?generation=1610226842294243&alt=media" alt=""></p>
<p><a href="https://www.kaggle.com/bacicnikola" target="_blank">@bacicnikola</a> <br>
When using encoder-only model, there is no problem. However, when using decoder of SAINT+ model, the sequence is <strong>shifted</strong> like the image above. So, by simply using tgt_mask you showed (actually the mask you showed is inherently same as the mask I showed in my solution), the problem cannot be solved.</p>
<pre><code>To do so in training, I feel it becomes more difficult. Because we are not just dealing the last bundle in the truncated sequence, we extract exact one question from each budle in that sequence. Maybe not so difficult, but I kind feel we also loss information that given by bundle.
</code></pre>
<p>Now I understand what you mean. it seems the problem cannot be solved by simply using fancy indexing. Hmm…</p>
<p>Yes, it can be called "inconsistency", as the problem doesn't happen, at least in the inference phase. </p>
<p><a href="https://www.kaggle.com/bacicnikola" target="_blank">@bacicnikola</a><br>
For example, suppose task_container_id of a specific user is [0, 0, 0, 1, 1, 2, 2, 3, 3, 4, 5, 8, 7, 7, 6]. Then, <br>
position [0, 1, 2] cannot use any information.<br>
position [3, 4] can use the information of [0, 1, 2].<br>
position [5, 6] can use the information of [0, 1, 2, 3, 4].<br>
position [7, 8] can use the information of [0, 1, 2, 3, 4, 5, 6].</p>
<p>However, when simply using the input of the SAINT paper, position 1 can use the information (e.g. correctness) of position 0 and position 2 can use the information of position [0, 1]. I think this is problematic as the model uses the information that should not be utilized, during training.</p>
<p>I use V100 * 4 (VRAM 64GB) and training takes about 30h. I think I spent more than 500 hours for this competition. I did nothing other than this competition since 10/16.</p>
<p>Yes of course the leakage doesn't happen during inference, but I think it's problematic that leakage happens in training phase, because the model can learn meaningless patterns during training.</p>
<p><code>I replace the values by the value obtained at the first question in the bundle under prediction.</code><br>
Hmm, I didn't notice this idea but this looks great. Why don't you try this idea in the training phase? I think it is possible by fancy indexing. </p>
<ul>
<li><p>What is your cross-validation and model evaluating strategy?<br>
My CV strategy is completely different from tito's. First, I probed the number of new users in the test set and found there are about 7000 new users. As we know there are 2.5M rows in the test set and we know the average history length of all users, we can estimate the number of rows of new users and the number of rows of existing users. After the calculation, I found <br>
<code># rows for user split (i.e. new users) : # rows for timeseries split (i.e. existing users) = 2 : 1</code>. <br>
So, for validation, I decided to use 1M rows for the timeseries split and 2M rows for the user split. <br>
When making the validation set of the timeseries split, I was very careful that the inter-container leakage problem could not happen. I mean, my training dataset and validation dataset never share the same task_container_id for a specific user.</p></li>
<li><p>Did your task_container-aware masking improved CV score (and how much)?<br>
I don't know how much it improves CV/LB score, because I was really careful about inter-container leakage problem from the beginning and have never tried task_container-unaware masking and indexing (I hate leakage). So, there is no evidence that shows task_container-aware masking is important. That's just my guess from LB score.</p></li>
</ul>
<p>Thanks! I'm not familiar with tensorflow, but I'll try it in future competitions.</p>
<p>I mean, in my opinion, the input of the decoder of SAINT+ architecture cannot be implemented without any information loss or any data leakage, due to task_container_id. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F762938%2F8467453b525282d6754cbac4f2c9473e%2Finput_of_saint.jpg?generation=1610223284475779&alt=media" alt=""></p>
<p>Thank you all teams who competed with me, all the people who participated in this competition, and the organizer that hosts such a great competition with the well-designed API!<br>
Congrats <a href="https://www.kaggle.com/keetar" target="_blank">@keetar</a>, who defeats me and becomes the winner in this competition.</p>
<p>I'm happy because it's my first time I get solo prize!</p>
<p>It's my 4th kaggle competition and it was fun to compete with my past teammates ( <a href="https://www.kaggle.com/nyanpn" target="_blank">@nyanpn</a>, <a href="https://www.kaggle.com/pocketsuteado" target="_blank">@pocketsuteado</a>) and people who I competed with past competitions (e.g. <a href="https://www.kaggle.com/aerdem4" target="_blank">@aerdem4</a>, <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a>). <br>
Here, I will explain the summary of my model and my features. </p>
<p>I uploaded 2 kaggle notebooks for the explanation. As the ensemble is not so important in my solution, I will only explain my single model. </p>
<ol>
<li><a href="https://www.kaggle.com/mamasinkgs/public-private-2nd-place-solution" target="_blank">6 similar models weighted average</a> : 0.817 public/0.818 private</li>
<li><a href="https://www.kaggle.com/mamasinkgs/public-private-2nd-place-solution-single-fold" target="_blank">single model</a> : 0.814 public/0.816 private</li>
</ol>
<h1>Models</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F762938%2F39ec58b701e6d37701cc794763d3a473%2F2nd_place.png?generation=1610213712807228&alt=media" alt=""></p>
<h2>Overview</h2>
<p>My model is similar to SAKT, with sequence length 400, dimension 512, and 4 attention heads. <br>
I don't use lecture information in the input of the transformer model, which I guess is why I lost in this competition. For the query and key/value of the SAKT-like model, I used LSTM-encoded features, whose input is as follows.</p>
<ul>
<li><p><strong>"Query" features</strong><br>
content_id<br>
part<br>
tags<br>
normalized timedelta<br>
normalized log timestamp <br>
correct answer<br>
task_container_id delta<br>
content_type_id delta<br>
normalized absolute position </p></li>
<li><p><strong>"Memory" features</strong><br>
explanation<br>
correctness<br>
normalized elapsed time<br>
user_answer</p></li>
</ul>
<h2>Detailed Explanation of Training/Inference Process</h2>
<p>I used a very precise indexing/masking technique to avoid data leakage in the training process, which I guess is partly why I reached 2nd place. OK, suppose a very simple model with sequence length = 5, and that the task_container_id history of a specific user (without lectures) is like this.<br>
<code>[0, 0, 0, 1, 1, 2, 2, 3, 3, 4, 5, 8, 7, 7, 6]</code><br>
As there are 15 interactions for this user, I made 15/5 = 3 training samples and loss masks.</p>
<p>(1) input task_container_id: <code>[pad, pad, pad, pad, pad, 0, 0, 0, 1, 1]</code> <br>
(1) loss_mask: <code>[False, False, False, False, False, True, True, True, True, True]</code><br>
(2) input task_container_id: <code>[0, 0, 0, 1, 1, 2, 2, 3, 3, 4]</code> <br>
(2) loss_mask: <code>[False, False, False, False, False, True, True, True, True, True]</code><br>
(3) input task_container_id: <code>[2, 2, 3, 3, 4, 5, 8, 7, 7, 6]</code> <br>
(3) loss_mask: <code>[False, False, False, False, False, True, True, True, True, True]</code></p>
<p>In this competition, the handling of the task_container_id is very important, as <strong>it is not allowed to use "memory" features of the same task container id</strong> to avoid leakage.<br>
So, after applying LSTM to the features, such fancy indexing is required to avoid the leakage for 3 training samples, where -1 means this position can't attend any position.</p>
<p>(1) indices: <code>[-1, -1, -1, -1, -1, 4, 4, 4, 7, 7]</code><br>
(2) indices: <code>[-1, -1, -1, 2, 2, 4, 4, 6, 6, 8]</code><br>
(3) indices: <code>[-1, -1, 1, 1, 3, 4, 5, 6, 6, 8]</code></p>
<p>To get these indices very fast in the training/inference phase, I wrote the following Cython (#1) function (vectorized implementation):</p>
<pre><code>%%cython
import numpy as np
cimport numpy as np
cpdef np.ndarray[int] cget_memory_indices(np.ndarray task):
cdef Py_ssize_t n = task.shape[1]
cdef np.ndarray[int, ndim = 2] res = np.zeros_like(task, dtype = np.int32)
cdef np.ndarray[int] tmp_counter = np.full(task.shape[0], -1, dtype = np.int32)
cdef np.ndarray[int] u_counter = np.full(task.shape[0], task.shape[1] - 1, dtype = np.int32)
for i in range(n):
res[:, i] = u_counter
tmp_counter += 1
if i != n - 1:
mask = (task[:, i] != task[:, i + 1])
u_counter[mask] = tmp_counter[mask]
return res
</code></pre>
<p>After applying such fancy indexing to the output of the LSTM features, I concatenated them with the "Query" features and applied an MLP; this gives the query for the SAKT model. Then, I obtained the key/value for the SAKT model by applying an MLP to the query concatenated with the "Memory" features. </p>
<p>To train the SAKT-like model, precise memory masking is also required to avoid leakage. I used the <a href="https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html" target="_blank">3D attention mask</a> of torch.nn.MultiheadAttention. It should be noted that <a href="https://pytorch.org/docs/stable/generated/torch.repeat_interleave.html" target="_blank">torch.repeat_interleave</a> must be leveraged to make the 3D mask of shape (batchsize * nhead, sequence length, sequence length). <br>
The memory masks for these 3 samples are as follows. Here, 1 means True and 0 means False. Please remember the <a href="https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html" target="_blank">documentation</a> says </p>
<pre><code>attn_mask ensure that position i is allowed to attend the unmasked positions. If a BoolTensor is provided, positions with True is not allowed to attend while False values will be unchanged.
</code></pre>
<p>(1) memory_mask:</p>
<pre><code>array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
[0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
[1, 1, 1, 0, 0, 0, 0, 0, 1, 1],
[1, 1, 1, 0, 0, 0, 0, 0, 1, 1]])
</code></pre>
<p>(2) memory_mask:</p>
<pre><code>array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
[0, 0, 0, 1, 1, 1, 1, 1, 1, 1],
[0, 0, 0, 1, 1, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
[1, 1, 0, 0, 0, 0, 0, 1, 1, 1],
[1, 1, 0, 0, 0, 0, 0, 1, 1, 1],
[1, 1, 1, 1, 0, 0, 0, 0, 0, 1]])
</code></pre>
<p>(3) memory_mask:</p>
<pre><code>array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 1, 1],
[0, 0, 1, 1, 1, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 1, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
[1, 0, 0, 0, 0, 0, 1, 1, 1, 1],
[1, 1, 0, 0, 0, 0, 0, 1, 1, 1],
[1, 1, 0, 0, 0, 0, 0, 1, 1, 1],
[1, 1, 1, 1, 0, 0, 0, 0, 0, 1]])
</code></pre>
<p>To make this memory mask very fast in the training phase, I wrote the following Cython (#2) function (not a vectorized implementation):</p>
<pre><code>%%cython
import numpy as np
cimport numpy as np
cpdef np.ndarray[int] cget_memory_mask(np.ndarray task, int n_length):
cdef Py_ssize_t n = task.shape[0]
cdef np.ndarray[int, ndim = 2] res = np.full((n, n), 1, dtype = np.int32)
cdef int tmp_counter = 0
cdef int u_counter = 0
for i in range(n):
tmp_counter += 1
if i == n - 1 or task[i] != task[i + 1]:
res[i - tmp_counter + 1 : i + 1, :u_counter] = 0
if u_counter == 0:
res[i - tmp_counter + 1 : i + 1, n - 1] = 0
if u_counter > n_length:
res[i - tmp_counter + 1: i + 1, :(u_counter - n_length)] = 1
u_counter += tmp_counter
tmp_counter = 0
return res
</code></pre>
<p>In the inference phase, memory mask is not required. For more details, please look at my <a href="https://www.kaggle.com/mamasinkgs/public-private-2nd-place-solution-single-fold" target="_blank">single model notebook</a>.</p>
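<p>For clarity, here is a tiny shape-only sketch of passing per-sample boolean masks to torch.nn.MultiheadAttention as a 3D mask via torch.repeat_interleave (this is not the competition code; the random mask is just a stand-in):</p>
<pre><code>import torch
import torch.nn as nn

batch, nhead, seq_len, d_model = 2, 4, 10, 64
mha = nn.MultiheadAttention(d_model, nhead, batch_first=True)
q = k = v = torch.randn(batch, seq_len, d_model)

# one (seq_len, seq_len) bool mask per sample; True = not allowed to attend
per_sample_mask = torch.randint(0, 2, (batch, seq_len, seq_len)).bool()
per_sample_mask[..., 0] = False   # keep at least one attendable position per row

# MultiheadAttention expects shape (batch * nhead, seq_len, seq_len) for a 3D mask
attn_mask = torch.repeat_interleave(per_sample_mask, nhead, dim=0)

out, _ = mha(q, k, v, attn_mask=attn_mask)
</code></pre>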
<h1>My Features</h1>
<p>Transformer is great, but it suffers from the problem that the sequence length cannot be infinite. I mean, it cannot consider the information of very old samples. To tackle this problem and to leverage the lecture information, I made simple features, concatenated them with the output features of the SAKT-like model, and applied an MLP and a sigmoid.<br>
I uploaded the names of the 90 features as attachments. For the implementation of these features, I didn't use any pd.merge or df.join; most of the implementation is done using numpy. To make user-content features, I used scipy.sparse.lil_matrix to save memory. I think my features are not as good as other competitors' (about 0.795 when using GBDT), but they still improved the score by 0.001 ~ 0.002.<br>
I made some tricky features (e.g. obtained by SVD), but they did not improve the score of the NN model (they improved the GBDT model, though), probably because the information in the NN features already includes that of such tricky features.</p>
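<p>Schematically, the combination of hand-crafted features with the transformer output looks like this (a sketch assuming a 90-dimensional feature vector per position; not my actual model code):</p>
<pre><code>import torch
import torch.nn as nn

class HeadWithHandcraftedFeatures(nn.Module):
    """Concatenate per-position hand-crafted features with the transformer output,
    then apply an MLP and a sigmoid to predict correctness."""
    def __init__(self, d_model=512, n_feats=90, d_hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(d_model + n_feats, d_hidden),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(d_hidden, 1),
        )

    def forward(self, transformer_out, feats):
        x = torch.cat([transformer_out, feats], dim=-1)
        return torch.sigmoid(self.mlp(x)).squeeze(-1)
</code></pre>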
<h1>CV Strategy</h1>
<p>My CV strategy is completely different from tito( <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a> )'s. First, I probed the number of new users in the test set and found there are about 7000 new users. As we know there are 2.5M rows in the test set and we know the average history length of all users, we can estimate the number of rows of new users and the number of rows of existing users. After the calculation, I found</p>
<pre><code>the number of rows of new users (i.e. user split): the number of rows of existing users (i.e. timeseries split) = 2 : 1.
</code></pre>
<p>So, for validation, I decided to use 1M rows for the timeseries split and 2M rows for the user split.<br>
When making the validation set of the timeseries split, I was very careful that leakage could not happen. I mean, my training dataset and validation dataset never share the same task_container_id for a given user.</p>
<h1>What worked</h1>
<ol>
<li>Increasing the sequence length from 100 to 400 worked.</li>
<li>Using a normalized timedelta is better than the digitized timedelta (mentioned in the SAINT+ paper).</li>
<li>Concatenating embeddings is better than adding embeddings, as mentioned in <a href="https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/201798" target="_blank">https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/201798</a>. </li>
<li>dropout = 0.2 is very important.</li>
<li>StepLR with Adam is good. I trained my model for about 35 epoch with lr = 2e-3, then trained it for 1 epoch with lr = 2e-4.</li>
</ol>
<h1>What didn't work</h1>
<ol>
<li>Random masking of sequences didn't improve the score.</li>
<li>bundle_id and normalized task_container_id are not needed as inputs to the LSTM.</li>
<li>As I didn't make diverse models, weighted averaging is enough and blending using GBDT didn't work well.</li>
<li>Full data training (without any validation data) didn't improve the score.</li>
<li>In my implementation, SAINT didn't work well. In my opinion, it's logically very hard to implement SAINT without data leakage in training due to task_container_id.</li>
</ol>
<h1>Comments</h1>
<p>I was very sad to see Kaggle's private score bug problem. Of course this competition is very great, but I think it could have been one of the most successful competitions on Kaggle without this problem.<br>
Anyway, I really enjoyed my 4th kaggle competition. I'll continue kaggle and will surely become the winner in the next competition!</p>
<h1>Thanks all, see you again!</h1>
<p>P.S. The word <code>memory_mask</code> may be confusing because it is different from pytorch <a href="https://pytorch.org/docs/stable/generated/torch.nn.Transformer.html#torch.nn.Transformer" target="_blank">Transformer</a>'s <code>memory_mask</code>.<br>
The word <code>memory_mask</code> is more like <code>mask</code> in pytorch's <a href="https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoder.html" target="_blank">TransformerEncoder</a>.<br>
The reason my terminology is confusing is that I used an encoder-decoder model at first, then gave it up and started using an encoder-only model, but I didn't change the function names. :(</p>
<p>I'm a student 😄</p>
<p>Nice question! In the beginning of the competition, I used LightGBM as usual and got to 0.792 (1st place) soon. Then, I started using Transformer by reading this <a href="http://jalammar.github.io/illustrated-transformer/" target="_blank">blog</a>, soon I got to 0.801 using Transformer and 0.803 by ensembling GBDT and Transformer. I remember I wrote this comment then. (so, The fact is I was using both GBDT and Transformer). After that, my Transformer improved a lot and I decided to throw away GBDT and use my hand-crafted features for the additional input of Transformer. "Is Attention All You Need?" means I was wondering whether Transformer is better than GBDT when I wrote this comment.</p>
<p>I simply used powerpoint.</p> |
Riiid Answer Correctness Prediction | 37th place solution (Transformer part) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank the organizer and Kaggle for hosting the competition. In addition, thanks to <a href="https://www.kaggle.com/vk00st" target="_blank">@vk00st</a> for inviting us to merge as a team, and to my other teammates <a href="https://www.kaggle.com/unkoff" target="_blank">@unkoff</a> <a href="https://www.kaggle.com/arvissu" target="_blank">@arvissu</a> <a href="https://www.kaggle.com/syxuming" target="_blank">@syxuming</a>; we had great teamwork.</p>
<p>I also want to thank <a href="https://www.kaggle.com/manikanthr5" target="_blank">@manikanthr5</a> <a href="https://www.kaggle.com/wangsg" target="_blank">@wangsg</a> <a href="https://www.kaggle.com/leadbest" target="_blank">@leadbest</a> for the great SAKT starter notebooks. I couldn't have implemented the Transformer model by myself without these.</p>
<p>I summarize the details of our SAINT+-like model as follows.</p>
<h2>Input features</h2>
<ol>
<li>content_id</li>
<li>response (answered_correctly)</li>
<li>part</li>
<li>prior_question_elapsed_time</li>
<li>prior_question_had_explanation</li>
<li>lag_time1 - convert lag time to seconds; if lag_time1 >= 300 then 300.</li>
<li>lag_time2 - convert lag time to minutes; if lag_time2 >= 1440 then 1440 (one day).</li>
<li>lag_time3 - convert lag time to days; if lag_time3 >= 365 then 365 (one year).</li>
</ol>
<p>Splitting lag time into different time granularities boosted the score by around 0.003.</p>
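<p>A small pandas-style sketch of the three lag-time features (assuming <code>timestamp</code> is in milliseconds, as in the competition data; <code>df</code> is a stand-in for the train table):</p>
<pre><code>import numpy as np
import pandas as pd

def lag_time_features(df: pd.DataFrame) -> pd.DataFrame:
    lag_ms = df.groupby("user_id")["timestamp"].diff().fillna(0)
    df["lag_time1"] = np.clip(lag_ms // 1000, 0, 300)                   # seconds, capped at 300
    df["lag_time2"] = np.clip(lag_ms // (1000 * 60), 0, 1440)           # minutes, capped at one day
    df["lag_time3"] = np.clip(lag_ms // (1000 * 60 * 60 * 24), 0, 365)  # days, capped at one year
    return df
</code></pre>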
<h2>Distinguish zero padding</h2>
<ul>
<li>In order to distinguish zero padding from a zero value of <strong>content_id</strong>, I add one to <strong>content_id</strong>, <strong>prior_question_had_explanation</strong> and <strong>response</strong>. This helps the model distinguish zero padding from real feature values. It boosted the score by around 0.003.</li>
</ul>
<pre><code>q_ = q_+1
pri_exp_ = pri_exp_+1
res_ = qa_+1
</code></pre>
<h3>Transformer</h3>
<h5>Encoder Input</h5>
<ul>
<li>question embedding</li>
<li>part embedding</li>
<li>position embedding</li>
<li>prior question had explanation embedding</li>
</ul>
<h5>Decoder Input</h5>
<ul>
<li>position embedding</li>
<li>reponse embedding</li>
<li>prior elapsed time embedding</li>
<li>lag_time1 categorical embedding</li>
<li>lag_time2 categorical embedding</li>
<li>lag_time3 categorical embedding</li>
<li>Note that I tried both categorical and continuous embeddings for prior elapsed time and lag time. Categorical embeddings performed better than continuous embeddings.</li>
</ul>
<h5>Parameter of Transformer</h5>
<ul>
<li>max sequence: 100</li>
<li>d model: 256 </li>
<li>number of layer of encoder: 2</li>
<li>number of layer of decoder: 2</li>
<li>batch size: 256</li>
<li>dropout: 0.1</li>
<li>learning rate: 5e-4 with AdamW</li>
</ul>
<h3>Blending</h3>
<ul>
<li>LGBM + Catboost + three Transformer</li>
<li>LGBM + two Transformer</li>
</ul>
<h3>Result</h3>
<ul>
<li>Finally, our team got LGBM public LB 0.794 and Transformer public LB 0.799.</li>
<li>We got public LB 0.804 with blending.</li>
</ul>
<h3>Sample code</h3>
<ul>
<li><a href="https://www.kaggle.com/m10515009/saint-is-all-you-need-training-private-0-801" target="_blank">Training</a></li>
<li><a href="https://www.kaggle.com/m10515009/saint-is-all-you-need-inference-private-0-801" target="_blank">Inference</a></li>
</ul> |
Riiid Answer Correctness Prediction | 38th place solution: Single SAINT+ | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Notebook <a href="https://www.kaggle.com/fomdata/38th-solution-saint-model" target="_blank">link</a> added</p>
<p>First of all, thank you to the host and the Kaggle team for the competition. I think it is a very useful challenge for those who want to try transformer models outside of NLP. I would also like to thank everyone who shared their ideas for this competition. I can say that it contributed to my learning journey in many ways.</p>
<p>Thank you to Tito for his great <a href="https://www.kaggle.com/its7171/lgbm-with-loop-feature-engineering" target="_blank">notebook</a> about loop feature engineering.<br>
Thank you to Mark for his great <a href="https://www.kaggle.com/markwijkhuizen/riiid-training-and-prediction-using-a-state" target="_blank">notebook</a> for the harmonic mean idea. </p>
<h2>Results</h2>
<p>My local train/val AUC: <strong>0.804</strong> / <strong>0.806</strong><br>
I already knew the difference between my public and private lb score, because of the private leak.<br>
As a CV strategy, I used one similar to, but not the same as, tito's.</p>
<h2>Model details</h2>
<p>The model structure is very much the same as <a href="https://arxiv.org/abs/2010.12042" target="_blank">SAINT+</a> paper, except below parameters.</p>
<ul>
<li>2 encoder - 2 decoder layers</li>
<li>256 embed dim</li>
<li>0.1 dropout for attention layers, 0.2 dropout for fully connected heads at the end of the encoder-decoder layers</li>
<li>100 seq length</li>
</ul>
<p><strong>Encoder embeddings (Known features)</strong></p>
<ul>
<li>Pos embedding (Categorical)</li>
<li>Question id embedding (Categorical)</li>
<li>Bundle id embedding (Categorical)</li>
<li>Part embedding (Categorical)</li>
<li>Question encoded tag embedding (New number for each unique value) (Categorical)</li>
<li>Question attempt embedding (Categorical)</li>
<li>Lag time embedding (current task timestamp - prev task timestamp) (Categorical)</li>
<li>Harmonic mean embedding (Continuous)</li>
</ul>
<p><strong>Decoder embeddings (Future features)</strong><br>
<em>All features shifted/adjusted</em></p>
<ul>
<li>Pos embedding - same embedding layer for encoder (Categorical)</li>
<li>Answer embedding (Categorical)</li>
<li>Elapsed time (Categorical)</li>
<li>Answer bundle id - same embedding layer for bundle id (Categorical)</li>
</ul>
<h2>Training:</h2>
<ul>
<li>25 epochs; training took almost 7-8 hours on GPU, inference 3-4 hours</li>
<li>Adam with 1e-3 lr for first 20 epoch, then 1e-4 last 5 epoch</li>
<li>128 batch size</li>
</ul>
<p>Notebook <a href="https://www.kaggle.com/fomdata/38th-solution-saint-model" target="_blank">link</a></p> |
Riiid Answer Correctness Prediction | 39th Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>It took me a while to post my solution since I was cleaning up all the messy code I had. My solution was a SAINT-like model, although a bit simpler in architecture. I was surprised how stripping the model of features that intuitively make sense can improve the score.</p>
<p>It was d_model: 128, att_heads: 2, ff_model: 512, dropout: 0.1, 4 layers, trained on the whole data for 100+ epochs. It breaks the 0.800 ROC AUC bar around the 60th epoch.</p>
<p>Since many people failed to successfully implement the SAINT model, I created a notebook with a detailed explanation and clear code that takes the raw data, creates a dataloader, trains the SAINT model, then performs inference using Tito's iterator. The model in the notebook has d_model 64 and scores 0.789 in validation; scaling it up would take it to 0.800+.</p>
<p>Notebook: <a href="https://www.kaggle.com/abdessalemboukil/saint-training-inference-guide-39th-solution" target="_blank">https://www.kaggle.com/abdessalemboukil/saint-training-inference-guide-39th-solution</a> </p>
<p>Thanks for the awesome competition!</p> |
Riiid Answer Correctness Prediction | 3rd place solution (0.818 LB): The Transformer | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Let's say the encoder input is: $$QA_1, QA_2, QA_3, …$$ and the decoder input is: $$Q_1, Q_2, Q_3, …$$ where \(QA_x\) is question+answer \(x\)'s data and \(Q_x\) is question \(x\)'s data.</p>
<p>Now suppose we don't shift the encoder input: for question \(Q_x\) we MUST mask \(QA_x\) and everything after it, otherwise question \(x\) would be able to see its own answer and cheat. This means that for \(Q_1\) all QAs should be masked, but we can't do that because in the attention mechanism, masking means we set the masked values to -inf after calculating \(QK^T\). In the case of \(Q_1\), the first row would be <strong>entirely</strong> set to -inf, and that's a problem because the softmax of all -inf is nan:</p>
<pre><code>a = torch.triu(torch.full((5, 5), -float('inf')))
torch.nn.functional.softmax(a, 1)
tensor([[ nan, nan, nan, nan, nan],
[1.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.5000, 0.5000, 0.0000, 0.0000, 0.0000],
[0.3333, 0.3333, 0.3333, 0.0000, 0.0000],
[0.2500, 0.2500, 0.2500, 0.2500, 0.0000]])
</code></pre>
<p>And that's why we always need to shift the encoder to the right in causal applications: there always has to be at least one token to pay attention to so that softmax can be calculated.</p>
<p>The <a href="https://github.com/jamarju/riiid-acp-pub" target="_blank">source code</a> is up. <a href="https://www.kaggle.com/imeintanis" target="_blank">@imeintanis</a> </p>
<p>LOL I've never had anyone say anything this beautiful about something I've written, thanks 😄</p>
<p>Thank you! We tried a variety of encoder+decoder layouts (0+4, 4+0, 2+4, 4+2, etc.) and those two (3+3, 4+4) turned out the best. I believe we are actually doing sequence tagging; I just didn't know it had such a name.</p>
<p>You are right, I fixed the link. Thanks for noticing!</p>
<p>Yes, we will. Allow me a couple of days to recover mentally and clean up.</p>
<ul>
<li><strong>2021-01-16 edit 2</strong>: added missing linear layers after categorical embeddings + tag features to diagram</li>
<li><strong>2021-01-16 edit 1</strong>: the <a href="https://github.com/jamarju/riiid-acp-pub" target="_blank">source code</a> is up.</li>
</ul>
<p>Wow, what a ride this has been. First off, thanks <a href="https://www.kaggle.com/sohier" target="_blank">@sohier</a> <a href="https://www.kaggle.com/hoonpyotimjeon" target="_blank">@hoonpyotimjeon</a> Kaggle, Riiid and everyone involved in setting up this challenging competition.</p>
<p>Congratulations to #1 and #2 <a href="https://www.kaggle.com/keetar" target="_blank">@keetar</a> and <a href="https://www.kaggle.com/mamasinkgs" target="_blank">@mamasinkgs</a>!!! We truly look forward to reading about your solutions!</p>
<p>Also huge thanks to my teammate <a href="https://www.kaggle.com/antorsae" target="_blank">@antorsae</a> who joined me in the last stretch and without whose ideas, intuition and hardware I wouldn’t have gotten this far.</p>
<p>I was attracted to this competition by the relatively small dataset footprint compared to my other two previous competitions (deep fakes and RSNA pulmonary embolisms) but this ended up being much more resource intensive than I anticipated.</p>
<p>Our solution is a mixture of two Transformer models with carefully crafted attention, engineered features and a time-aware adaptive ensembling mechanism we nick-named “The Blindfolded Gunslinger”.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3076212%2F59b490f2f38b6d8b2412e5fd83585d2c%2Fblindfolded.jpg?generation=1610064421822590&alt=media" alt=""></p>
<p>(“Blindfolded gunslinger” hand-drawn by <a href="https://www.kaggle.com/antorsae" target="_blank">@antorsae</a> inspired by Red Dead Redemption 2)</p>
<h1>The Transformer</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3076212%2F2bba4e4717ba0cc0f7e6af00dfe4a709%2FEsquemaRiid%202.jpg?generation=1610837878777203&alt=media" alt=""></p>
<p>We use two transformers trained separately, with 2.5% of the users held out for validation and sequences of 500 interactions. At train time we simply split user histories into non-overlapping chunks of 500 interactions and sample the chunks randomly.</p>
<ul>
<li>Transformer 1: 3+3 layers (encoder+decoder), no LayerNorms, <a href="http://www.cs.toronto.edu/~mvolkovs/ICML2020_tfixup.pdf" target="_blank">T-Fixup init</a> see paper for reasons why we used it, ReLU activations, d_model=512</li>
<li>Transformer 2: 4+4 layers, no LayerNorms, T-Fixup init, GELU activations, d_model=512</li>
</ul>
<p>We feed both the encoder and the decoder ALL the features. We use learned features for continuous variables (simply projecting them to d_model) and for categorical variables we first map it to embeddings with low dimensionality and then project it also to d_model=512 (to avoid potential overfitting). We use an embedding bag for question tags.</p>
<p>To prevent the transformer from looking into the future we shift the encoder input including both questions + answers to the right, hide all the answer-specific features from the decoder (user_answer, answered_correctly, qhe, qet), ie. those that are not immediately available upon inferring the interaction and use the appropriate attention masks in all 3 attentions.</p>
<p>To our surprise the 3+3 model outperformed its bigger 4+4 brother even if we tried to finetune the latter at the final hours of the competition.</p>
<h1>Engineered features</h1>
<p>This was a very rich dataset but we found the following derived features helped the transformer converge faster and reach a higher AUROC score. A lot of them have been discussed in the forum:</p>
<ul>
<li><code>qet</code>, <code>qhe</code>: these are the <code>prior_qet</code> and <code>prior_qhe</code> counterparts shifted upwards one container. This was probably discussed first by <a href="https://www.kaggle.com/doctorkael" target="_blank">@doctorkael</a> here: <a href="https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/194184" target="_blank">https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/194184</a></li>
<li><code>tsli</code>: time since list interaction, AKA timestamp delta. Discussed in many threads.</li>
<li><code>clipped_tsli</code>: <code>tsli</code> clipped to 20 minutes. <a href="https://www.kaggle.com/claverru" target="_blank">@claverru</a> hinted at this in the Saint benchmark mega-thread: <a href="https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/195632" target="_blank">https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/195632</a></li>
<li><code>ts_mod_1day</code>: <code>timestamp</code> modulus 1 day. This may reveal daily patterns such as the user being more / less attentive / tired in the mornings / after work, etc.</li>
<li><code>ts_mod_1week</code>: timestamp modulus 1 week. We similarly hope this will reveal weekly patterns (are “mondays” a bad day? etc.). A short pandas sketch of these time-derived features is given after this list.</li>
<li><code>attempt_num</code>, <code>attempts_correct</code>, <code>attempts_correct_avg</code>: about 11% of the questions were <strong>repeated</strong> questions, so it made a lot of sense to keep a record of which question had been answered by whom and how many times it was answered correctly. This was revealed by <a href="https://www.kaggle.com/aravindpadman" target="_blank">@aravindpadman</a> in his great thread: <a href="https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/194266" target="_blank">https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/194266</a> and it was an extremely demanding feature to code since it takes a total of 11.2 GB of space at inference all by itself.</li>
</ul>
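<p>A hedged pandas sketch of a few of the time-derived features above (constants in milliseconds, matching the competition's timestamp unit; this is not the exact training pipeline):</p>
<pre><code>import pandas as pd

MS_PER_MIN = 60_000
MS_PER_DAY = 24 * 60 * MS_PER_MIN
MS_PER_WEEK = 7 * MS_PER_DAY

def add_time_features(df: pd.DataFrame) -> pd.DataFrame:
    # tsli: time since the user's previous interaction
    df["tsli"] = df.groupby("user_id")["timestamp"].diff().fillna(0)
    # clipped_tsli: the same quantity, capped at 20 minutes
    df["clipped_tsli"] = df["tsli"].clip(upper=20 * MS_PER_MIN)
    # daily / weekly phase of the interaction
    df["ts_mod_1day"] = df["timestamp"] % MS_PER_DAY
    df["ts_mod_1week"] = df["timestamp"] % MS_PER_WEEK
    return df
</code></pre>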
<h1>Attention</h1>
<p>Our architecture follows the auto-regressive application of sequence to sequence transformers, so a causal attention mask is needed to make sure a given interaction cannot attend to interactions in the future; however there is an exception to this which we believe is critical: interaction grouped by the same task_container_id.</p>
<p>We compute separate attention masks for the encoder and decoder: we prevent the encoder self-attention from attending to past interactions if they belong to the same task_container_id, and conversely modify the decoder self-attention to allow attending to all interactions belonging to the same task_container_id. We further restrict the encoder output exposed to the decoder with the encoder attention mask, to prevent leakage of information through the residual connections in the encoder.</p>
<pre><code>causal_mask = ~torch.tril(torch.ones(1,sl, sl,dtype=torch.bool,device=x_cat.device)).expand(b,-1,-1)
x_tci = x_cat[...,Cats.task_container_id]
x_tci_s = torch.zeros_like(x_tci)
x_tci_s[...,1:] = x_tci[...,:-1]
enc_container_aware_mask = (x_tci.unsqueeze(-1) == x_tci_s.unsqueeze(-1).permute(0,2,1)) | causal_mask
dec_container_aware_mask = ~(x_tci.unsqueeze(-1) == x_tci.unsqueeze(-1).permute(0,2,1)) & causal_mask
</code></pre>
<h1>The Blindfolded Gunslinger</h1>
<p>We made a joke in the <a href="https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/208356#1137339" target="_blank">meme thread</a> about us wanting to ensemble multiple models, but the competition having only 9 hours to run full inference…</p>
<p>It was already reported that the public test set was sitting on <a href="https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/203383#1115680" target="_blank">the first 20% of the test set</a> so it is possible to maximize the allotted time for the private test set by skipping model inference in the first 20% and predicting only the last 80%. </p>
<p>We implemented dynamic ensembling that attempts to perform as much ensembling as possible in the time allotted for the last 80%.</p>
<p>We dubbed this idea “The Blindfolded Gunslinger” because it fires two guns (models) as much as it can (after a while it will only fire one) but it is blindfolded in the sense that the public LB will be ~0.5 so we cannot be sure if it worked or not until now…</p>
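<p>The idea can be sketched as a simple time-budget check inside the prediction loop (purely illustrative; the model objects and numbers below are placeholders, not our submission code):</p>
<pre><code>import time

class StubModel:                               # placeholder for a trained transformer
    def predict(self, batch):
        return [0.5] * len(batch)

model_a, model_b = StubModel(), StubModel()
DEADLINE = time.monotonic() + 8.5 * 3600       # safety margin inside the 9h limit
PUBLIC_ROWS = 2_500_000 // 5                   # public LB is roughly the first 20%

def predict_batch(batch, rows_seen):
    if rows_seen < PUBLIC_ROWS:                # skip real inference on the public part
        return [0.5] * len(batch)
    preds = model_a.predict(batch)
    if time.monotonic() < DEADLINE:            # still time left: fire the second gun
        preds_b = model_b.predict(batch)
        preds = [(a + b) / 2 for a, b in zip(preds, preds_b)]
    return preds
</code></pre>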
<h1>Hardware</h1>
<ul>
<li>1 computer with Ryzen 3950x (16c32t) + 64 Gb RAM + 1x3090</li>
<li>1 computer with Threadripper 1950x (16c32t) + 256 Gb RAM + 6x3090</li>
</ul>
<p>We set up the big computer during the competition which was a project on its own:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3076212%2Ff1186277b6c2045ba9c00053192d6be2%2Fpasted%20image%200.png?generation=1610065136682440&alt=media" alt=""></p>
<p>Also in the last 8 hours of the competition we rented a 190 Gb RAM + 6x3090 but it did not help us much.</p>
<h1>Software</h1>
<p>We used pytorch 1.7.1, fastai and we trained using distributed training and mixed precision (both as implemented in fastai). </p>
<p>For inference we included the last 500 interactions and summaries in both pickle files and memory-mapped numpy matrices.</p>
<h1>Source code</h1>
<p>Available at: <a href="https://github.com/jamarju/riiid-acp-pub" target="_blank">https://github.com/jamarju/riiid-acp-pub</a></p>
<p>Hello, <a href="https://www.kaggle.com/julia5" target="_blank">@julia5</a> and thanks for the kind words.</p>
<p>You can't run 05 off-line because 80% of the test data is hidden.</p>
<p>Try to convert the 05.ipynb nb to .py and submit that instead, it should reduce RAM footprint and that's actually how we did it. The first cell in 05 explains how to do such conversion with the <code>ipynb-py-convert</code> tool.</p> |
Riiid Answer Correctness Prediction | 403 place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for your comment!</p>
<p>I tried this idea after seeing <a href="https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/190270" target="_blank">tkm2261's discussion thread</a>.</p>
<p>The merits of BigQuery ML are:</p>
<ul>
<li>You can treat very large data easily.</li>
<li>You can manage feature values with BigQuery</li>
</ul>
<p>The demerits of BigQuery ML are:</p>
<ul>
<li>If you make a sloppy query, you will have to spend a lot of money.</li>
<li>For GBDT algorithms, it can only use XGBoost (looking at some solutions, LGBM appears to be better).</li>
</ul>
<p>I'm satisfied that I learned many things.<br>
But it’s still regrettable and frustrating because I lost the medal in the shakedown.<br>
I'm studying the top solutions now for the next competition.</p>
<h1>403 place solution</h1>
<p>My main strategy is training my model by using BigQuery ML because I wanted to learn it.<br>
I think it was a good way for me. And one other good thing about BigQuery ML is I can use all data to train.</p>
<h2>Model</h2>
<h3>XGBoost</h3>
<p>It looks like most people used LGBM, but I used XGBoost.<br>
I'm not familiar with GBDT algorithms; I hear that LGBM is better than XGBoost.<br>
Unfortunately, BigQuery ML only has XGBoost, so I used XGBoost.</p>
<p>BigQuery ML has other ML models. For example, Logistic Regression, ARIMA, and Neural Network.<br>
I tried them but XGBoost was best.</p>
<h3>Parameter</h3>
<p>This is my BigQuery code to train my XGB model.<br>
The parameter can be seen here.</p>
<pre><code>CREATE MODEL `<My model PATH>`
OPTIONS(MODEL_TYPE='BOOSTED_TREE_CLASSIFIER',
BOOSTER_TYPE = 'GBTREE',
NUM_PARALLEL_TREE = 1,
MAX_ITERATIONS = 300,
TREE_METHOD = 'HIST',
EARLY_STOP = False,
MIN_REL_PROGRESS=0.0001,
LEARN_RATE =0.3,
MAX_TREE_DEPTH=11,
COLSAMPLE_BYTREE=1.0,
COLSAMPLE_BYLEVEL=0.4,
SUBSAMPLE = 0.9,
MIN_TREE_CHILD_WEIGHT=2,
L1_REG=0,
L2_REG=1.0,
INPUT_LABEL_COLS = ['answered_correctly'],
DATA_SPLIT_METHOD='CUSTOM',
DATA_SPLIT_COL='is_test')
AS
SELECT `<My Feature Tables>`
</code></pre>
<h2>CV strategy</h2>
<h3>leave-one-out</h3>
<p>I used <a href="https://www.kaggle.com/its7171/cv-strategy" target="_blank">this strategy</a> with leave-one-out. It looked good to me.</p>
<h2>Ensemble</h2>
<p>I used the best XGBoost model and two SAKT models.<br>
The SAKT models were made by forking <a href="https://www.kaggle.com/tarique7/v4-fork-of-riiid-sakt-model-full" target="_blank">this notebook</a>.<br>
I made three small updates to this notebook:</p>
<ol>
<li>Randomly cut train sequences to a length of 200 (the original code had used the last 200 interactions).</li>
<li>Save the model from the best-scoring epoch.</li>
<li>Parameter tuning.</li>
</ol>
<p>Each score is here.</p>
<table>
<thead>
<tr>
<th>model</th>
<th>CV</th>
<th>LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>XGB</td>
<td>0.7784</td>
<td>0.781</td>
</tr>
<tr>
<td>SAKT1</td>
<td>0.772</td>
<td>0.774</td>
</tr>
<tr>
<td>SAKT2</td>
<td>0.772</td>
<td>0.773</td>
</tr>
</tbody>
</table>
<p>I took a 3:1:1 weighted average of these models and got 0.787 on the public leaderboard.</p>
<p>I tried an iteration ensemble and a CV-fold ensemble in XGBoost; however, both got the same score as my single model.</p>
<p>Final my submission notebook is <a href="https://www.kaggle.com/takamichitoda/riiid-infer-v8-xgb-sakt?scriptVersionId=51291197" target="_blank">here</a>.</p>
<h2>Feature</h2>
<p>My features are:</p>
<ul>
<li>aggregation by contents<ul>
<li>answered_correctly: count/sum/avg/std</li>
<li>timestamp: max/min/avg/std</li>
<li>user_id: count/unique_count</li>
<li>task_container_id: max/min/avg/std</li>
<li>user_answer_0~3: sum/avg/std</li>
<li>question_elapsed_time: max/min/avg/std</li>
<li>question_had_explanation: sum/avg/std</li></ul></li>
<li>each question tag's cumulative sums (a small loop-style sketch is given after this list)<ul>
<li>number of try</li>
<li>number of correct answer</li>
<li>correct answer rate</li></ul></li>
<li>each lecture tag cumulative sum<ul>
<li>number of try</li></ul></li>
<li>time-window features with a window size of 200 <ul>
<li>answer correctry: sum/avg/std</li>
<li>question_elapsed_time: avg/max</li>
<li>question_had_explanation: sum/avg/std</li>
<li>timestamp lag: avg/max</li>
<li>number of content_id=0</li>
<li>number of content_id=1</li></ul></li>
<li>timestamp lag</li>
</ul>
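<p>As an illustration of how the cumulative per-tag features above can be maintained in a loop during inference (variable names are hypothetical; this is a sketch, not the BigQuery SQL I actually used):</p>
<pre><code>from collections import defaultdict

tag_try = defaultdict(int)      # (user_id, tag) -> number of attempts
tag_correct = defaultdict(int)  # (user_id, tag) -> number of correct answers

def tag_features(user_id, tags):
    """Current per-tag stats for this interaction: tries, correct count, correct rate."""
    feats = []
    for tag in tags:
        tries = tag_try[(user_id, tag)]
        correct = tag_correct[(user_id, tag)]
        feats.append((tries, correct, correct / tries if tries else 0.0))
    return feats

def update_tag_state(user_id, tags, answered_correctly):
    # update only after the features are computed, so the current answer does not leak
    for tag in tags:
        tag_try[(user_id, tag)] += 1
        tag_correct[(user_id, tag)] += int(answered_correctly)
</code></pre>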
<h2>Reference</h2>
<h3>Notebook</h3>
<ul>
<li>Create SQL: <a href="https://www.kaggle.com/takamichitoda/riiid-create-sql" target="_blank">https://www.kaggle.com/takamichitoda/riiid-create-sql</a></li>
<li>Create Tags Onehot Encode Table: <a href="https://www.kaggle.com/takamichitoda/riiid-create-feature-table-csv" target="_blank">https://www.kaggle.com/takamichitoda/riiid-create-feature-table-csv</a></li>
<li>Create CV index: <a href="https://www.kaggle.com/takamichitoda/riiid-make-cv-index" target="_blank">https://www.kaggle.com/takamichitoda/riiid-make-cv-index</a></li>
</ul>
<p>EDIT:</p>
<ul>
<li><a href="https://www.ai-shift.co.jp/techblog/1505" target="_blank">my blog</a>(Japanese)</li>
<li><a href="https://github.com/trtd56/Riiid/tree/master/work/sql" target="_blank">SQL to generate Window Features.</a></li>
</ul> |
Riiid Answer Correctness Prediction | 4th place solution : Single Transformer Model | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for your kind words.<br>
Unfortunately I don't have a quantitative analysis of feature importance for my model. All I can say is that when I tried to remove some features from the model, it performed worse or showed no significant change on the validation set. In the early stage of the competition, adding <em>time lag</em> gave an important boost; <em>answered correctly</em> and <em>user answer</em> are also important.</p>
<p>Thanks.<br>
Here's the deal with <em>first_token_embedding</em>: because of the causal mask on the encoder - decoder attention (a decoder position only attends to encoder positions strictly before it, excluding positions with the same timestamp), the first position of the decoder has no encoder position it is allowed to attend to. In the multi-head attention layer, masking is implemented by subtracting 1e9 from the attention logits; this breaks down when a query position has every key masked (all of its attention logits are reduced by 1e9, the subtracted terms cancel out in the softmax, and the mask has no effect). To avoid this situation the encoder output is pre-padded with a <em>first_token_embedding</em> that every decoder position, including the first one, is allowed to attend to.</p>
<p>2048 is not the number of steps for a full traversal of the train dataset. Really sorry the naming in my code is a little misleading. Go through the whole dataset needs around 5000-6000 steps so the model was trained for around 6-7 epochs in the good sense of this word.</p>
<p>Hi everyone,<br>
It has been a real pleasure competing in this wonderful challenge. Thank you Kaggle and the Competition Host for making it possible.</p>
<p>I'm happy to share here my solution which got me to the 4th place. It is a single transformer model inspired from previous works (like <em>SAINT, SAKT</em>) very much discussed in this competition. </p>
<h1>The model</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F37949%2F2ec8f0682b82bb47954f2816f221093b%2Friiid-model%20(1).png?generation=1610229212586102&alt=media" alt="Riiid 4th solution model architecture"><br>
I hope my figure is straightforward. Below are key features of the model I'd like to explain more:</p>
<h3>Input sequences</h3>
<p>I tried to include all data available from the train table and metadata tables. I also add <em>time lag</em>, which is the delta time from the previous interaction (questions in the same container have the same timestamp so they share the same <em>time lag</em>). <br>
Also, on the question table, I added 2 features : difficulty level (<em>correct response rate</em> of each questions), popularity (<em>number of appearances</em>), which are computed from the whole training data.</p>
<h3>Input embeddings</h3>
<p>Same size of embeddings for all inputs, the embeddings are then concatenated and go through linear transform to feed to the first encoder and decoder of the transformer.</p>
<p>Embeddings of continuous features (<em>time lag</em>, <em>question elapsed time</em>, <em>question difficulty</em>, <em>question popularity</em>) are computed using a <em>ContinuousEmbedding</em> layer. The idea of <em>ContinuousEmbedding</em> is to sum up (a weighted sum of) a number of consecutive embedding vectors (from the embedding <em>weight matrix</em>). <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F37949%2F13b93a94023bb1ee627b220204f34272%2FContinuousEmbeddings.png?generation=1610279445750633&alt=media" alt=""><br>
This way we have a "smooth" version for the embeddings of the continuous variable: 2 values very close together should have similar embeddings. </p>
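<p>To make the idea concrete, here is a minimal PyTorch sketch of such a layer. This is my guess at the mechanism described above (the window size, weighting scheme and normalization are assumptions), not the author's code:</p>
<pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class ContinuousEmbedding(nn.Module):
    """Smooth embedding of a scalar in [0, 1]: a weighted sum of a few
    consecutive rows of an embedding weight matrix."""
    def __init__(self, n_bins: int, dim: int, span: int = 5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_bins, dim) * 0.02)
        self.n_bins, self.span = n_bins, span

    def forward(self, x):                                  # x: (...,), values in [0, 1]
        pos = x * (self.n_bins - 1)                        # fractional bin index
        base = pos.floor().long()
        offsets = torch.arange(-(self.span // 2), self.span // 2 + 1, device=x.device)
        idx = (base.unsqueeze(-1) + offsets).clamp(0, self.n_bins - 1)   # (..., span)
        dist = (idx.float() - pos.unsqueeze(-1)).abs()     # closer bins get larger weights
        w = F.softmax(-dist, dim=-1)
        return (self.weight[idx] * w.unsqueeze(-1)).sum(dim=-2)          # (..., dim)
</code></pre>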
<h3>The transformer</h3>
<p>The input of the encoder is the embeddings of all input elements. The input of the decoder does not contain user-answer-related elements.<br>
Encoder and decoder layers are almost the same as in the original paper (<a href="https://arxiv.org/abs/1706.03762" target="_blank">Attention is all you need</a>).<br>
One key difference is of course the causal masks to prevent the current position from seeing the future. The other is a feature that I add to improve performance and convergence speed: a kind of time-aware weighted attention. The idea is to decay the attention coefficient by a factor of \(dt^{-w}\), where \(dt\) is the difference in timestamp between a position and the position it attends to, and \(w\) is a trainable parameter constrained to be non-negative (one parameter per attention head). This is pretty easy to implement: compute the timestamp difference matrix in log scale, multiply it with the parameters \(w\) and subtract it from the attention logits (the <em>scaled dot product</em> output of the attention layer).</p>
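<p>A small sketch of that decay term, assuming attention logits of shape (batch, heads, query, key) and one timestamp per position; the softplus constraint and the variable names are my assumptions:</p>
<pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class TimeDecay(nn.Module):
    def __init__(self, n_heads: int):
        super().__init__()
        self.raw_w = nn.Parameter(torch.zeros(n_heads))    # one decay rate per head

    def forward(self, logits, timestamps):
        # logits: (B, H, T, T) scaled dot-product scores; timestamps: (B, T)
        dt = (timestamps.unsqueeze(-1) - timestamps.unsqueeze(-2)).abs().clamp(min=1).float()
        w = F.softplus(self.raw_w)                         # keep decay rates non-negative
        # subtracting w * log(dt) from the logits multiplies the attention weight by dt^(-w)
        return logits - w.view(1, -1, 1, 1) * dt.log().unsqueeze(1)
</code></pre>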
<h1>Training</h1>
<p>I use the cv method <a href="https://www.kaggle.com/its7171/cv-strategy" target="_blank">https://www.kaggle.com/its7171/cv-strategy</a> (thanks <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a>). The model was implemented in Tensorflow and trained on TPU with Colab Pro.<br>
Sequences are randomly cut and padded to have the same length and all parts are kept for training.<br>
The final version of my model has embeddings size of 128, model size of 512, 4 encoder layers and 4 decoder layers. <br>
It was trained with the sequence length of 1024 for about 36000 steps (<em>warmup</em> 4000 steps then cosine decay) and with batch size 64. Training took about 4-5 hours.<br>
On the submission kernel I had to reduce sequence length to 512 due to resource limit.</p>
<h1>Some observations</h1>
<ul>
<li>Input embeddings: concatenation is better than sum</li>
<li>Longer sequence (for both training and inference) improves the performance</li>
<li>Model size also matters: bigger model size generally improves the performance but going beyond size of 512 and 4 layers does not improve much. </li>
</ul>
<h1>Why not ensemble</h1>
<p>I didn't have much time toward the end of the competition. When I still made improvement on my single model I made the choice of staying on that rather than spending time on making ensemble of smaller models. I'm not sure it was a good choice, but it got me this far so I'm still happy. </p>
<h1>Code</h1>
<p><a href="https://www.kaggle.com/letranduckinh/riiid-model-submission-4th-place-public-version" target="_blank">Here</a> is the submission kernel that I made public. You should find all my code source training log in the kernel. <br>
As you can see it scores 0.8180 on valid set, 0.815 on public LB and 0.817 on private LB.<br>
My top scored submission has the same model and training configuration but was trained on the whole training set, which did not improve much.</p>
<p><a href="https://github.com/dkletran/riiid-challenge-4th-place" target="_blank">Here</a> is the source code on github including all steps to reproduce the solution.</p>
<h3>Best regards to all</h3>
<p>Here is the intuition behind my choice:</p>
<ul>
<li>The encoder input value at a position represents the knowledge of the student up to that position. We need all information (content, time lag, his answer). </li>
<li>The decoder input (query) at a position represents the exercise (question) at that position. We only have information about the content itself and time lag from the previous exercises. </li>
<li>To predict the user answer at a position we need to know his "knowledge" (the encoder output) and the current question (the decoder query). Self attention and attention helps to look back the history in the past.</li>
</ul>
<p>Like <a href="https://www.kaggle.com/bacterio" target="_blank">@bacterio</a> said, it is an intuitive choice. We cannot guarantee it is the best choice. I did try some other ways (like putting the user answer in the decoder query, as in SAINT+) but they did not work out well, so I stayed with this. </p>
<p>I updated my post with a schema to explain this.</p>
<p>I added a schema to explain this, hope it helps.</p>
<p>Just added a new figure to explain the embeddings of continuous variables.</p>
<p>Thanks.<br>
I didn't have much experience with TPU, only started using Colab TPU recently for Kaggle competitions because it is cheaper than cloud GPU. </p>
<p>Basically one sample per user except for users having more than <em>sequence_length</em> (1024) interactions. Training the model on each point in the training data would need a lot more time because in this case the gradient is propagated from only one position of a long sequence. I did try partial label masking as a kind of regularization (compute loss function only for some final positions of the sequence) but the model converges slowly without any improvement on performance.</p>
<p>Here is the github link of my solution: <a href="https://github.com/dkletran/riiid-challenge-4th-place" target="_blank">https://github.com/dkletran/riiid-challenge-4th-place</a>. You should find all scripts and instruction to regenerate all files you need to train & submit the model. Sorry for the late response.</p> |
Riiid Answer Correctness Prediction | 52nd place brief write up [priv 0.802 / GBDTs] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank Riiid Labs, partners and Kaggle for hosting this interesting competition. And congratulates to the all participants and especially the winners!</p>
<p>This was so interesting and well designed competition and I enjoyed for three months. I share a summary of my solution, hoping that this will be of some help to other participants.</p>
<h2>Final submission summary</h2>
<ul>
<li>Weighted averaging of <code>LightGBMx2 + CatBoostx1</code><ul>
<li>public: 51st (0.801)</li>
<li>private: 52nd (0.802)</li></ul></li>
<li>We use about 130 features</li>
<li>We were trying to make a model using NN(transfomer) in last few weeks, but we couldn't complete it.</li>
</ul>
<h2>Key Ideas</h2>
<h3>Features of the badness of a user_answer</h3>
<ul>
<li>+0.006, by far the brightest of the features we created😇.</li>
<li>The aim of creating the feature<ul>
<li>A person who chooses a bad answer that most people don't choose is probably bad</li>
<li>People who can do well will not choose bad choices even if they are wrong</li>
<li>This idea is probably correct, and the std aggregation below was very effective.</li></ul></li>
<li>How to make it<ul>
<li>Use all of train to calculate how many of each answer are selected for each question</li>
<li>e.g. for content_id=XXXX<ul>
<li>Answer 1: 9% of users choice</li>
<li>Answer 2: 5% of users choice</li>
<li>Answer 3: 1% of users choice</li>
<li>Answer 4: 85% of users choice</li></ul></li>
<li>Calculate the percentile for each choice by accumulating the percentage of choices for each question</li>
<li>Example: In the example above<ul>
<li>Answer 1: 15% (=1+5+9)</li>
<li>Answer 2: 6% (=1+5)</li>
<li>Answer 3: 1% (=1)</li>
<li>Answer 4: 100% (=1+5+9+85)</li></ul></li>
<li>Aggregation (std, avg, min, etc.) of the percentiles of the last N answers for each user.</li></ul></li>
</ul>
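<p>A rough pandas sketch of the percentile computation described above (column names follow train.csv, <code>train</code> is assumed to be the loaded train dataframe, and the last-N aggregation details are assumptions):</p>
<pre><code>import pandas as pd

q = train[train.content_type_id == 0]

# share of users picking each choice of each question
cnt = q.groupby(['content_id', 'user_answer']).size().rename('n').reset_index()
cnt['share'] = cnt['n'] / cnt.groupby('content_id')['n'].transform('sum')

# percentile of a choice = cumulative share of choices picked at most as often
cnt = cnt.sort_values(['content_id', 'share'])
cnt['answer_percentile'] = cnt.groupby('content_id')['share'].cumsum()

# per user: aggregate the percentiles of the last N answers
q = q.merge(cnt[['content_id', 'user_answer', 'answer_percentile']],
            on=['content_id', 'user_answer'], how='left')
badness_std = q.groupby('user_id')['answer_percentile'].apply(lambda s: s.tail(10).std())
</code></pre>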
<h3>A little devised Word2Vec (by teammate)</h3>
<ul>
<li>This looks a bit leaky, but it worked about +0.004, so it was very effective too.</li>
<li>The aim of creating the feature<ul>
<li>Vectorizing users based on their past history of correct and incorrect answers, and calculate their approximation to the next question's correct or incorrect answer.</li></ul></li>
<li>How to make it<ul>
<li>For each user, line up <code>Question_(correct answer|incorrect answer)</code> (last N questions)</li>
<li>Vectorize <code>Question_(correct answer|incorrect answer)</code> by word2vec</li>
<li>Vectorize user by averaging the vector of <code>Question_(correct answer|incorrect answer)</code> of user's last N questions</li>
<li>Calculate the cosine similarity between the user's vector and the next question's <code>Question_(correct answer)</code> and <code>Question_(incorrect answer)</code> vectors.</li></ul></li>
</ul>
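<p>A compact sketch of this idea (the token scheme, window, vector size and helper names are my assumptions, not the team's code):</p>
<pre><code>import numpy as np
from gensim.models import Word2Vec

q = train[train.content_type_id == 0]
# one token per interaction, e.g. "123_o" if answered correctly, "123_x" if not
tokens = q['content_id'].astype(str) + np.where(q['answered_correctly'] == 1, '_o', '_x')
sentences = tokens.groupby(q['user_id']).apply(list).tolist()
model = Word2Vec(sentences, vector_size=32, window=50, min_count=1, seed=0)  # gensim 4.x API

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def similarity_features(last_n_tokens, next_content_id):
    # user vector = mean of the vectors of the last N tokens
    u = np.mean([model.wv[t] for t in last_n_tokens], axis=0)
    return (cos(u, model.wv[f'{next_content_id}_o']),
            cos(u, model.wv[f'{next_content_id}_x']))
</code></pre>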
<h3>Trueskill (by teammate)</h3>
<ul>
<li>its importance was always at the top, but it only added +0.001<ul>
<li>it may have been conflicting with other features because we created this feature at a late stage</li></ul></li>
<li>How to make it<ul>
<li>Using <a href="https://trueskill.org/" target="_blank">trueskill</a> to score the strength of each question and each user.</li>
<li>From there, calculate the probability that the user will win (answer correctly) the question.</li></ul></li>
</ul>
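<p>A sketch of how this can be done with the <a href="https://trueskill.org/" target="_blank">trueskill</a> package (the online-update loop and hyperparameters are my assumptions):</p>
<pre><code>import math
import trueskill

env = trueskill.TrueSkill(draw_probability=0.0)
users, items = {}, {}   # rating per user_id / per content_id

def win_probability(a, b):
    # standard 1-vs-1 TrueSkill win probability
    denom = math.sqrt(2 * env.beta ** 2 + a.sigma ** 2 + b.sigma ** 2)
    return env.cdf((a.mu - b.mu) / denom)

def feature_and_update(user_id, content_id, answered_correctly):
    u = users.setdefault(user_id, env.create_rating())
    q = items.setdefault(content_id, env.create_rating())
    p = win_probability(u, q)            # feature: chance the user "beats" the question
    if answered_correctly:
        u, q = trueskill.rate_1vs1(u, q, env=env)
    else:
        q, u = trueskill.rate_1vs1(q, u, env=env)
    users[user_id], items[content_id] = u, q
    return p
</code></pre>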
<h3>Weighted counting of correct answers (by teammate)</h3>
<ul>
<li>Each question is weighted by the reciprocal of its correct-answer rate, and correct answers are counted with these weights.</li>
</ul>
<h2>Other Features</h2>
<p>The following is an excerpt from what worked for us.</p>
<ul>
<li>TargetEncodings<ul>
<li>percentage of questions answered correctly</li>
<li>percentage of users who answered the question correctly</li>
<li>and many more.</li></ul></li>
<li>User logs<ul>
<li>percentage of correct answers in the last 400 questions</li>
<li>percentage of correct answers in the same part of the last 400 questions</li>
<li>and many more.</li></ul></li>
<li>Lag features of timestamp<ul>
<li>The time elapsed since the last time user solved the same question.</li>
<li>Time elapsed since the previous question.</li></ul></li>
<li>Features processed from timestamp<ul>
<li>Things that use elapsed time</li>
<li>elapsed time since the previous question / (all users') average time taken for that task_container</li>
<li>elapsed time since the previous question / (all users') average time for that task_container (only correct answers are counted)</li>
<li>I thought the timestamp was the <code>timestamp of answering the question</code> since these were working.</li>
<li>the lag time simlar to SAINT's paper.</li>
<li>time elapsed since previous question - time spent on previous question</li></ul></li>
<li>Simple Word2Vec<ul>
<li>Line up questions for each user -> vectorize questions by word2vec</li>
<li>We've made a lot of these, by part, only correct questions, only wrong questions, various windows, etc.</li>
<li>The more patterns we added, the higher our score became.</li></ul></li>
</ul>
<h2>What did not work</h2>
<p>There are a lot, but I'm going to write down a few that I wondered why they don't work.</p>
<ul>
<li>Tags<ul>
<li>I was able to categorize almost all tags from the distribution of the parts where they appear.</li>
<li>I found the grammatical tags, intonation tags, etc.</li>
<li>I tried various ways to put it into the model, but it didn't work at all.</li>
<li>I tried target encoding and counting the number of correct answers of the user's past tags.</li></ul></li>
<li>Lecture<ul>
<li>I tried simple counting, but it did not work at all.</li>
<li>I gave up on using it as soon as I saw that no one seemed to be making use of it in discussions.</li></ul></li>
</ul> |
Riiid Answer Correctness Prediction | 58th solution: SAINT+ based predicts user_answer and then answered_correctly | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yeah nice! I tried that too and indeed it helped to reduce overfitting. Hope to see you around next comp, mate!</p>
<p>Hi folks!</p>
<p>My solution has something different from what I've seen until now. I have two outputs in a serial fashion:</p>
<ul>
<li>First the user answer prediction (A, B, C, D).</li>
<li>Then, inside the NN graph I use the correct answer id to gather the correctness probability.</li>
</ul>
<p>This way I have two different backwards signals that helped me to grow a little bit in LB.</p>
<p>Other things to notice:</p>
<ul>
<li>Log-normalization applied to time features (they have a pareto-like distribution). At the end of the competition I realized I didn't <em>need</em> to clip these features with such normalization.</li>
<li>All tags usage.</li>
<li>Continuous task_container_id (adds user expertise information).</li>
<li>I add a mask to make sure sequence elements in the same container don't attend to each other (see the sketch after this list).</li>
<li>Pad tokens and lectures are masked in the loss functions.</li>
</ul>
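<p>A sketch of that container mask, using PyTorch's bool-mask convention (True = position may not be attended to); this is my reconstruction, not the original code:</p>
<pre><code>import torch

def build_attention_mask(task_container_ids):        # (T,) tensor for one sequence
    T = task_container_ids.size(0)
    dev = task_container_ids.device
    future = torch.triu(torch.ones(T, T, dtype=torch.bool, device=dev), diagonal=1)   # block future positions
    same_container = task_container_ids.unsqueeze(0) == task_container_ids.unsqueeze(1)
    same_container &= ~torch.eye(T, dtype=torch.bool, device=dev)                      # still allow self-attention
    return future | same_container
</code></pre>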
<p>The rule I used to choose whether a feature should go on encoder or decoder is: static information on encoder, variable information on decoder.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1820636%2F71579dff863e98b9cdec6a2d0dfd48ee%2FUntitled%20Diagram%20(1).png?generation=1610226098358123&alt=media" alt="My solution"></p>
<p>I didn't think about it, I have everything in my local now and it would be somehow a lot of work. Maybe will do it within the next weeks. Thank you mate, I apreciate your appreciation :D</p> |
Riiid Answer Correctness Prediction | 6th Place Solution: Very Custom GRU | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I hope my inference kernel can be enough for that, therefore I made it public now: <a href="https://www.kaggle.com/aerdem4/riiid-starter" target="_blank">https://www.kaggle.com/aerdem4/riiid-starter</a></p>
<p>Thank you. Normally I share my code but this time it is in notebooks rather than well-organized scripts. I don't know if it will be readable.</p>
<p>I concatenate them.</p>
<p>GRU hidden dimension was 128. Larger GRUs were overfitting. Model size is around 13 MB, which is mostly content embedding weights.</p>
<p>Thanks <a href="https://www.kaggle.com/antorsae" target="_blank">@antorsae</a> Congrats to you and your teammate with the 3rd place!</p>
<p>First of all congrats to <a href="https://www.kaggle.com/keetar" target="_blank">@keetar</a> and <a href="https://www.kaggle.com/mamasinkgs" target="_blank">@mamasinkgs</a> and other top teams. I will shortly explain my solution. It is late here, so I may be missing some points.</p>
<p><strong>Some General Details</strong></p>
<ul>
<li>Used questions only. Lectures improve validation score but increase train-val gap and don’t improve LB.</li>
<li>Didn't use dataframes. Used numpy arrays partitioned by user ids.</li>
<li>Sequence length 256</li>
<li>15 epochs. 1/3 of the data each time with reducing LR.</li>
<li>8192 batch size</li>
<li>Ensemble of the same model with 7 different seeds trained on whole data</li>
<li>0.8136 single model validation score, 0.813 LB. Ensemble: 0.815.</li>
<li>8 hours training on 4-GPU machine</li>
<li>Used Github and committed any improvement with a message like: Add one more GRU layer (Val: 0.8136, LB: 0.813)</li>
</ul>
<p><strong>Inputs</strong><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F471945%2F4e161bd44dd25c8329c5ced0fa6c9ab3%2Fimg1.png?generation=1610064408784528&alt=media" alt=""></p>
<p><strong>Engineered Features</strong></p>
<p>Assume current question’s correct answer is X. Logarithm of:</p>
<ul>
<li>Number of questions since last X.</li>
<li>Length of current X streak. (can be zero)</li>
<li>Length of current streak on any non-X answer. (can be zero)</li>
</ul>
<p>This helps with users who always pick A as answer etc.</p>
<p><strong>Embeddings</strong><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F471945%2F8256c5fe356ce1bedeedaf9bc91e9884%2Fimg2.png?generation=1610064559098164&alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F471945%2F73d8b182375f185231d35f8b141b8cbf%2Fimg3.png?generation=1610064576810743&alt=media" alt=""></p>
<p>Content Cosine Similarity</p>
<ul>
<li>A bit similar to attention with 16 heads</li>
<li>Linear transformation and l2 norm applied on content vectors</li>
<li>For 16 different transformations, the cosine similarity between the current content and history contents is calculated.</li>
<li>Transformation is symmetric for content and history contents.</li>
</ul>
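<p>A rough sketch of this block (the projection layout and shapes are my assumptions; only the 16-head cosine-similarity idea comes from the description above):</p>
<pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class ContentCosineSim(nn.Module):
    def __init__(self, dim: int, n_heads: int = 16):
        super().__init__()
        self.proj = nn.Linear(dim, dim * n_heads)    # one linear transform per head
        self.n_heads, self.dim = n_heads, dim

    def forward(self, content):                      # content: (B, T, dim)
        B, T, _ = content.shape
        z = self.proj(content).view(B, T, self.n_heads, self.dim)
        z = F.normalize(z, dim=-1)                   # l2 norm, so dot product = cosine similarity
        # the same transform is applied to the current content and to the history,
        # so the similarity is symmetric per head
        return torch.einsum('bthd,bshd->bhts', z, z)  # (B, H, T, T)
</code></pre>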
<p>U-GRU:</p>
<ul>
<li>GRU with 2 directions but not BiGRU</li>
<li>First does reverse pass, concatenates the output and then does forward pass</li>
</ul>
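<p>A minimal sketch of the U-GRU described above (my interpretation: the reverse-pass output is concatenated with the input before the forward pass):</p>
<pre><code>import torch
import torch.nn as nn

class UGRU(nn.Module):
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.backward_gru = nn.GRU(dim, hidden, batch_first=True)
        self.forward_gru = nn.GRU(dim + hidden, hidden, batch_first=True)

    def forward(self, x):                                       # x: (B, T, dim)
        rev, _ = self.backward_gru(torch.flip(x, dims=[1]))     # reverse pass
        rev = torch.flip(rev, dims=[1])
        out, _ = self.forward_gru(torch.cat([x, rev], dim=-1))  # forward pass on the concatenation
        return out
</code></pre>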
<p>MLP:</p>
<ul>
<li>2 layers of [Linear, BatchNorm, Relu]</li>
</ul>
<p>Edit: Part embedding is trainable. There is actually a sigmoid x tanh layer before the GRUs.</p>
Riiid Answer Correctness Prediction | 76th Place Solution | Minimal SAINT+ Model! | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to the organizers of this very interesting and difficult competition. Congratulations to all the winners. I have learned a lot in this competition. I am not a novice myself, but not an expert either. This competition had some of the best discussion threads I could ask for. Two days back I wasn't sure if my team was even gonna get a medal, let alone finish in the top 100. I should say it is all thanks to the simple SAINT+ model we built and the efforts of my teammates <a href="https://www.kaggle.com/vishwajeet993511" target="_blank">@vishwajeet993511</a> and <a href="https://www.kaggle.com/padma3" target="_blank">@padma3</a>. We are calling our model minimal because this is the first version we could come up with. We didn't get to try any hyperparameter tuning due to the constraints of time and computation power.</p>
<h2>SAINT+</h2>
<h3>Encoder</h3>
<p>We used the following features in the encoder model:</p>
<ul>
<li><code>question_id</code>: Embedding: 0 for padding</li>
<li><code>part_id</code>: Embedding: 0 for padding</li>
<li><code>correct_answer</code>: Embedding: 0 for padding. We thought that by giving both the questions' actual answers and the user's previous answers, the model should be able to come up with a pattern and improve the prediction accuracy.</li>
<li><code>lsi_id</code>: Embedding: 0 for padding. I took lsi tags from <a href="https://www.kaggle.com/gaozhanfire/riiid-lgbm-val-0-788-offline-and-lsimodel-updated" target="_blank">this popular notebook</a>. </li>
<li><code>question_accuracy</code>: Linear: This and the below features are adding value in LightGBM. Giving the transformer model this info might have also helped. This is the mean value of accuracy for the question_id.</li>
<li><code>question_elapsed</code>: Linear: Quantile Transformed (Normal, 100 quantiles; see the sketch after this list). Large inputs make it hard for neural networks to converge; I understood this much from my previous competition (MoA), so I used it here. I can confirm that the convergence was faster. Could have tried bucketing as specified in the paper but didn't get time. This is the mean value of elapsed time for the question_id.</li>
<li><code>question_lagtime</code>: Linear. Quantile Transformed (Normal, 100 quantiles). I have discussed this <a href="https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/209303" target="_blank">in this thread</a>. This is the mean value of lagtime for the question_id.</li>
<li><code>part_accuracy</code>: Linear</li>
<li><code>lsi_accuracy</code>: Linear</li>
</ul>
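<p>The quantile transform mentioned above can be done with scikit-learn; a minimal sketch (train_df / test_df are placeholder names; fit on training data only and reuse the fitted transformer at inference):</p>
<pre><code>from sklearn.preprocessing import QuantileTransformer

qt = QuantileTransformer(n_quantiles=100, output_distribution='normal', random_state=0)
train_df['elapsed_time_q'] = qt.fit_transform(train_df[['elapsed_time']])[:, 0]
test_df['elapsed_time_q'] = qt.transform(test_df[['elapsed_time']])[:, 0]
</code></pre>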
<p>We sum the embeddings above and send the result to the <a href="https://pytorch.org/docs/stable/generated/torch.nn.Transformer.html" target="_blank">transformer model</a> as the src input.</p>
<h3>Decoder</h3>
<p>We used the following features in the decoder model:</p>
<ul>
<li><code>responses</code>: Embedding: 0/1 feature. Shifted back by 1 interaction. I had also tried the way SAKT encodes responses but it wasn't optimal for me.</li>
<li><code>user_answer</code>: Embedding. Shifted back by 1 interaction.</li>
<li><code>prior_question_had_explanation</code>: Embedding. </li>
<li><code>elapsed_time</code>: Linear: Quantile Transformed (Normal, 100 quantiles).</li>
<li><code>lagtime</code>: Linear: Quantile Transformed (Normal, 100 quantiles).</li>
</ul>
<p>We sum the embeddings above and send the result to the transformer model as the tgt input.</p>
<h3>Masking</h3>
<ul>
<li>Masking is very important for transformer models. A lot of people have posted an AUC of 1.0. This is due to bad masking. I have discussed masking <a href="https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/209303" target="_blank">in this thread</a>.</li>
<li>You should use future masking for the encoder and task_container_id-based masking for the decoder.</li>
<li>One thing I would like to highlight: watch the dtype of the mask. Naively casting a bool mask to float effectively inverts its meaning (illustrated right after this list). This created a lot of pain for me. </li>
</ul>
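<p>To illustrate the dtype gotcha: with PyTorch's attention-mask conventions, a bool mask marks blocked positions with True, while a float mask is added to the attention logits, so blocked positions must be -inf. A naive cast therefore loses the masking:</p>
<pre><code>import torch

T = 5
bool_mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)   # True = do NOT attend (future)

wrong = bool_mask.float()                                                # blocked -> 1.0, barely masks anything
right = torch.zeros(T, T).masked_fill(bool_mask, float('-inf'))          # blocked -> -inf, allowed -> 0.0
</code></pre>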
<h3>Parameters</h3>
<p>MAX_SEQ = 180<br>
NUM_ENCODER_LAYERS = 2<br>
NUM_DECODER_LAYERS = 2<br>
EMBED_SIZE = D_MODEL = 512<br>
BATCH_SIZE = 128<br>
DROPOUT = 0.1<br>
TEST_SIZE = 0.1<br>
LEARNING_RATE = 2e-4<br>
EPOCHS = 20<br>
EARLY_STOPPING_ROUNDS = 3</p>
<p>We tried out embedding sizes of 128 and 256. They were good but increasing the size helped us. Initially, we were using a higher learning rate, but the results were not optimal.</p>
<h3>Training</h3>
<ul>
<li>We understood that training a transformer model is not easy. The learning rate, optimizer, and scheduler are very important. Try to use a small learning rate with Adam optimizer or a variant of it. Initially we used <a href="https://pytorch.org/docs/stable/optim.html#torch.optim.lr_scheduler.OneCycleLR" target="_blank">OneCycleLR</a> but we shifted to <a href="https://pytorch.org/docs/stable/optim.html#torch.optim.lr_scheduler.CosineAnnealingWarmRestarts" target="_blank">CosineAnnealingWarmRestarts</a></li>
<li>We were constrained by the Kaggle GPUs. Just training the model 4 times was enough to complete a week of our GPU quota.</li>
<li>Training each model took around 20-30 minutes per epoch. 5 to 6 hours in total.</li>
<li>We weren't able to reach early stopping in our best model. </li>
<li>I discussed debugging my model <a href="https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/209303" target="_blank">in this thread</a>.</li>
<li>The single model gives CV 0.792, Public 0.793 and <strong>Private 0.795</strong></li>
<li>If we had time, we wanted to do <a href="https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/209633" target="_blank">pretraining of SAINT+</a> with the next question prediction task. </li>
</ul>
<h2>Ensemble</h2>
<ul>
<li>We had our LightGBM pipeline ready but weren't able to spend time training LightGBM models. Our features are very basic. In the end, I used the popular public LightGBM model. One change we made is that instead of using the first x million rows as input, we used the last 20-30 million rows. Since the test data immediately starts from the end of the train data, it makes sense to use the last part instead of the first. </li>
<li>It gave us a boost of 0.004 to the SAINT+ model we had.</li>
<li>We also tried to use ELO and FTRL models but they were getting 0 weights in our bagging experiments we did on the last day.</li>
<li>One thing we missed is that we should have used my SAKT model in the ensemble. But we weren't sure if it could have added value as it is very similar to SAINT.</li>
</ul>
<h2>Final Words</h2>
<ul>
<li>Thanks to all the people who were sharing good notebooks and sharing their ideas in the discussion forum. Even if I hadn't got a medal, I would have been happy with the things I learned in this competition. </li>
<li>For the next tabular competition, I would like to team up with other competitors who are good at building Boosting Models. I am pretty bad at feature engineering for LightGBM kind of models. I want to learn it. </li>
<li>I will be uploading my code to GitHub. Once I have done this, I will add the link.</li>
<li>I will be becoming a <strong>Kaggle Expert</strong>. Would like to try for a GOLD medal in the next one 😆. Kaggle Master, here I come.</li>
<li>Thanks a lot to everyone who participated. This is a good experience. </li>
</ul>
<p>(edited)</p> |
Riiid Answer Correctness Prediction | 9th place solution : 6 Transformers + 2 LightGBMs | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>item2vec is a simple idea to run word2vec with a sequence of question IDs as sentences, and turn latent expressions into features.<br>
Please note that it cannot be run in kaggle notebook due to lack of RAM.</p>
<pre><code>import pandas as pd
import numpy as np
from gensim.models import Word2Vec

EMBEDDING_DIM = 20

train = pd.read_csv('train.csv')
questions = pd.read_csv('questions.csv')

# keep only questions (content_type_id == 0) that were answered incorrectly
train = train[(train.content_type_id == 0) & (train.answered_correctly == 0)]
sentences = train.groupby('user_id')['content_id'].apply(lambda x: [str(t) for t in x]).values
model = Word2Vec(sentences, size=EMBEDDING_DIM, window=100, seed=0, workers=16)  # gensim 3.x API

# row i = latent vector of content_id i (zeros if it never appears in the filtered data)
result_vector = np.zeros((len(questions), EMBEDDING_DIM))
for i in range(len(questions)):
    try:
        result_vector[i, :] = model.wv[str(i)]
    except KeyError:
        pass
</code></pre>
<p>Thanks! I also agree that <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a> 's validation is awesome.</p>
<p>Thanks <a href="https://www.kaggle.com/mamasinkgs" target="_blank">@mamasinkgs</a> and congrats too! I personally don't think this issue is a "low signal matter" at all. I hope it will be dealt with appropriately.</p>
<blockquote>
<p>I guess there is no hole in Jane, because it has Forecasting Timeline.</p>
</blockquote>
<p>Yes. <strong>the problem occurs when the Time-Series API is combined with the Synchronous Code Competition format</strong>; a competition where recalculation is performed for the Private Leaderboard would not have the problem.</p>
<p>Exactly.<br>
I kept the raw historical data for each user in a numpy array, and <code>prior_user_answer</code> and <code>prior_answer_correctness</code> were also merged into the historical data once and then used for features in the same fashion as other standard information.</p>
<p>First of all, thanks to the hosts for a great competition! This was one of the toughest competitions I have ever entered, but well worth the effort.</p>
<p>Prediction of our team (tito <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a> and nyanp <a href="https://www.kaggle.com/nyanpn" target="_blank">@nyanpn</a>)consists of following models.</p>
<ul>
<li>tito's transformer: LB 0.813</li>
<li>nyanp's SAINT+ transformer: LB 0.808</li>
<li>nyanp's LightGBM: LB 0.806</li>
</ul>
<p>Simple blending of these models scored LB 0.814 / private 0.816.</p>
<h2>Pipeline</h2>
<p>We converted the entire train.csv data into hdf5, and loaded only the user_ids that appeared during inference into an np array (97% RAM savings compared to holding the entire training data in RAM). We estimate that the overhead due to I/O in hdf is ~45 minutes. Accepting this overhead allowed us to combine tito's large transformer with nyanp's feature engineering pipeline.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1532198%2F21ed1bbf93a37f6219fb7dc369761f0e%2Friiid%20pipeline.png?generation=1610287506256559&alt=media" alt=""></p>
<h2>Transformer (tito, LB 0.814)</h2>
<p>This is transformer model with encoder only, based on <a href="https://www.kaggle.com/claverru" target="_blank">@claverru</a>'s <a href="https://www.kaggle.com/claverru/demystifying-transformers-let-s-make-it-public" target="_blank">nice kernel</a></p>
<h3>summary</h3>
<ul>
<li>only trained and predicted for answered_correctly of last question of the sequence.</li>
<li>all features are concatenated (only position encoding is added).</li>
<li>used lectures, in timestamp order as it is</li>
<li>Window size 300-600</li>
<li>batch size 1000</li>
<li>drop_out 0</li>
<li>n_encoder_layers 3-5</li>
<li>augmentation to replace content_ids with dummy ids at a certain rate</li>
<li>kept only one question in the last task to avoid leaks</li>
</ul>
<h3>features</h3>
<p>embedded or dense was decided by CV.</p>
<ul>
<li>(embedded) content id</li>
<li>(embedded) part id</li>
<li>(embedded) same task question size</li>
<li>(dense) answered_correctly</li>
<li>(dense) had_explanation</li>
<li>(dense) elapsed time</li>
<li>(dense) lag time</li>
<li>(dense) diff of timestamp from the last question</li>
</ul>
<h3>combining models</h3>
<p>To avoid the overhead of calling model.predict() multiple times for ensemble, I made a combined model that links four models.</p>
<pre><code>inputs = tf.keras.Input(shape=(input_shape, n_features))
out1 = model1(inputs[:,-window_size1:,:])
out2 = model2(inputs[:,-window_size2:,:])
out3 = model3(inputs[:,-window_size3:,:])
out4 = model4(inputs[:,-window_size4:,:])
combo_model = tf.keras.Model(inputs, [out1,out2,out3,out4])
</code></pre>
<h2>SAINT+ (nyanp, LB 0.808)</h2>
<ul>
<li>d_model = 256</li>
<li>window_size = 200</li>
<li>n_layers = 3</li>
<li>attention dropout = 0.03</li>
<li>question, part, lag are embed to encoder</li>
<li>response, elapsed time, has_explanation are embed to decoder</li>
</ul>
<p>To prevent leakage, in addition to upper triangular attention mask, I masked the loss in questions other than the beginning of each task_container_id. Questions with the same task_container_id were shuffled in each batch during training, and the loss weights were adjusted to reduce the effect of masks. This mask improved LB by 0.0003.</p>
<p>(Note: I believe that indirect leaks still exist, but I've spent 80% of my time on the LightGBM implementation and data pipeline, so I couldn't improve it any further)</p>
<p>Other than that, there is nothing special about this NN. It scored .806 in single, .808 by averaging 2 models.</p>
<h2>LightGBM (nyanp, LB 0.806)</h2>
<p>LightGBM models are trained on 264 features. To speed up inference, I fixed the number of trees to 3000 and ensembled two models with different seeds (This is better than single large LGBM in terms of both speed and accuracy). By compiling this model with <a href="https://github.com/dmlc/treelite" target="_blank">treelite</a>, inference time became 3x faster (~10ms/batch, ~10min in total).</p>
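<p>For reference, compiling a LightGBM model with treelite looks roughly like this. This is a sketch assuming the treelite 1.x API; the exact calls differ between treelite versions:</p>
<pre><code>import treelite
import treelite_runtime

model = treelite.Model.load('lgbm.txt', model_format='lightgbm')   # LightGBM text dump
model.export_lib(toolchain='gcc', libpath='./lgbm.so', params={'parallel_comp': 8})

predictor = treelite_runtime.Predictor('./lgbm.so')
preds = predictor.predict(treelite_runtime.DMatrix(X_batch))
</code></pre>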
<h3>features</h3>
<p>I mapped prior_question_* rows with their respective rows by following code and utilized them in some features.</p>
<pre><code># shift the prior_question_* info back to the rows it actually describes
df['elapsed_time'] = df.groupby('user_id')['prior_question_elapsed_time'].shift(-1)
# questions in the same bundle (same user_id and timestamp) share the reported value
df['elapsed_time'] = df.groupby(['user_id', 'timestamp'])['elapsed_time'].transform('last')
df['has_explanation'] = df.groupby('user_id')['prior_question_had_explanation'].shift(-1)
df['has_explanation'] = df.groupby(['user_id', 'timestamp'])['has_explanation'].transform('last')
</code></pre>
<p>Here is the list of my features. There is no magic here; no single feature boost CV more than 0.0002. I repeated feature engineering based on well-known techniques and a little bit of domain knowledge.</p>
<h4>question features</h4>
<ul>
<li>count encoding</li>
<li>target encoding</li>
<li>number of tags</li>
<li>one-hot encoding of tag (top-10 frequent tags)</li>
<li>SVD, LDA, item2vec using user_id x content_id matrix</li>
<li>LDA, item2vec using user_id x content_id matrix (filtered by answered_correctly == 0)<ul>
<li>A typical word2vec model is trained on a next-word prediction task. By constructing a word2vec model over incorrectly answered questions, <br>
the latent vectors extracted from the model capture which incorrectly answered questions are likely to co-occur with each other.</li></ul></li>
<li>10%, 20%, 50%, 80% elapsed time of all users response, correct response, wrong response</li>
<li>SAINT embedding vector + PCA</li>
</ul>
<h4>user features</h4>
<ul>
<li>avg/median/max/std elapsed_time</li>
<li>avg/median/max/std elapsed_time by part</li>
<li>avg has_explanation flag</li>
<li>avg has_explanation flag by part</li>
<li>nunique of question, part, lecture</li>
<li>cumcount / timestamp</li>
<li>avg answered_correctly with recent 10/30/100/300 questions, recent 10min/7 days</li>
<li>avg answered_correctly by part, question, bundle, order of response, question difficulty, question difficulty x part<ul>
<li>question difficulty: discretize the avg answered_correctly of all users for each question into 10 levels</li></ul></li>
<li>cumcount by part, question, bundle, question difficulty, question difficulty x part</li>
<li>lag from 1/2/3/4 step before</li>
<li>correctness, lag, elapsed_time, has_explanation in the same question last time</li>
<li>tag-level aggregation features<ul>
<li>calculate tag-level feature for each user x tag, then aggregate them by min/avg/max</li>
<li>cumcount of wrong answer, cumcount of correct answer, avg target, lag</li></ul></li>
<li>(estimated elapsed time) - (avg/median/min/max elapsed time within same part)<ul>
<li>estimated elapsed time = (timestamp - prev timestamp) / (# of questions within the bundle)</li></ul></li>
<li>(estimated elapsed time) - (10%, 20%, 50%, 80% elapsed time of all users correct/wrong response)<ul>
<li>Because TOEIC part1-4 questions are usually answered after listening to the conversation, the correct answers tend to be concentrated immediately after the conversation ends</li></ul></li>
<li>task_container_id - previous task_container_id<ul>
<li>I'm not sure why this worked. There might be a difference in the correct rate if people answered from different devices than usual (multi-user?).</li></ul></li>
<li>part of last lecture</li>
<li>lag from last lecture</li>
<li>whether the question contains the same tag as the last lecture</li>
<li>whether the part is the same as the previous question</li>
<li>median lag - median elapsed_time</li>
<li>part of the first problem the user solved</li>
<li>lag - median lag</li>
<li>inner-product of user-{correct|incorrect}-question-vector and question-vector<ul>
<li>user-correct-question-vector: average of LDA vectors for each question that the user answered correctly.</li></ul></li>
<li>rank of lag compared to the same user's past lag (filtered by answered_correctly == 0, 1 respectively)</li>
</ul>
<h3>feature calculation</h3>
<p>Instead of updating the user dictionary, I calculate user features from scratch for each batch.</p>
<p>The numpy array of historical data was loaded from the hdf storage and then split into questions and lectures, which were then wrapped in a pandas-like API and passed to their respective feature functions.</p>
<pre><code>@feature('lag1.user_lecture')
def lag1_user_lecture(df: RiiidData, pool: DataPool) -> np.ndarray:
    """
    time elapsed from last lectures for each user
    :param df: data in current batch
    :param pool: cached data storage
    """
    lag1 = {}
    for u, t in set(zip(df.questions['user_id'], df.questions['timestamp'])):
        past_lect = pool.users[u].lectures  # history of lecture
        if len(past_lect) > 0:
            lag1[u] = t - past_lect['timestamp'][-1]
    return np.array(list(map(lambda x: lag1.get(x, np.nan), df.questions['user_id'])), dtype=np.float32)
</code></pre>
<p>For training, I use the same functions as inference time. This way, there were almost no restrictions on feature creation, and I did not have to worry about bugs of train-test difference.</p>
<p>The feature functions were frequently benchmarked by a dedicated benchmark script, and functions with high overhead were optimized by various ways (numba, bottleneck and various algorithm improvement) .</p>
<p>The question features were precomputed and made into a global numpy array of shape (13523, *) and merged into the feature data using fancy index.</p>
<pre><code>@feature(tuple(f'question_item2vec_{i}' for i in range(20)))
def question_item2vec(df: RiiidData, _: DataPool):
    """
    question embedding vector using item2vec
    """
    qid = df.questions['content_id']
    ret = QUESTION_ITEM2VEC[qid, :]  # mere fancy index, faster than pd.merge
    return ret
</code></pre>
<h2>Other ideas</h2>
<ul>
<li>TTA on SAINT+ by np.roll (it did improve SAINT+ by 0.001+, but we couldn't include it because of submission timeout)</li>
<li>linear blending based on public LB labels (timeout, too)</li>
</ul>
<h2>Feedback on Time-Series API Competition</h2>
<p>Although the competition was well-designed, our team still found that the time-series API allowed us to obtain private score information with 1-bit probing even after the known vulnerability was fixed.</p>
<pre><code>import riiideducation
from sklearn.metrics import roc_auc_score

env = riiideducation.make_env()
iter_test = env.iter_test()
ground_truth = []
prediction = []
threshold = 0.810

for idx, (test_df, sample_prediction_df) in enumerate(iter_test):
    ground_truth.extend(list(test_df['answered_correctly'].values))
    predicted = model.predict(...)
    prediction.extend(list(predicted))
    if len(prediction) >= 2500000:
        private_auc = roc_auc_score(ground_truth[500000:], prediction[500000:])
        if private_auc < threshold:
            raise RuntimeError()
        # private auc of this submission is guaranteed to exceed .810 if the submission succeeds
</code></pre>
<p>By using this probing, we can select the final submission with the highest private score, or determine the best<br>
ensemble weight for private score by using the hill-climbing method.</p>
<p>We contacted the Kaggle Team and asked them if this is legal, and they said it was a "low signal matter". We still think this is a gray-area and decided not to use this probing.</p>
<p>We think it wasn't critical in this competition, but if there is a future competition with the same format but without AUC metrics, the hack with the "magic coefficient" could improve the ranking significantly. If a competition with the same format is held on Kaggle in the future, we suggest the Kaggle team to fix this problem (e.g. put a dummy value in the private label during the competition).</p>
<blockquote>
<p>How did you choose a well-fitted n_components ?</p>
</blockquote>
<p>The <code>n_component</code> for LDA is set to 10, but I don't know if this is the best because I didn't have time to explore other values. Since the number of questions is only ~15,000, setting the n_component larger than 30 will probably not have much effect.</p>
<p>Thanks, I've made my FE code public. Hope this helps ;)<br>
<a href="https://www.kaggle.com/nyanpn/lda-feature-for-riiid-part-of-9th-place-lightgbm" target="_blank">https://www.kaggle.com/nyanpn/lda-feature-for-riiid-part-of-9th-place-lightgbm</a></p> |
Riiid Answer Correctness Prediction | My personnal journey through the RIID competition - Silver LGB single model | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, and as many others, I had a lot of pleasure participating to that competition. The challenge was fun and there were many challenges that I was not used to face. </p>
<p>I joined the competition a bit late (1 month before the end) so I really decided to focus on the feature engineering part. I had the aim of reaching the silver zone, and that's what I did, so very happy about it !</p>
<p>Overall I learned a lot of tricks to handle RAM and improve CPU computational time:</p>
<ul>
<li>Making a single loop to make the whole feature engineering rather than using several pandas.apply()</li>
<li>Using a csv iterator rather than iterating through a fully loaded numpy array (or worse, a dataframe!)</li>
<li>Smart use of sets to calculate presence or not of elements in a list</li>
<li>The potential of dictionaries that I clearly underestimated until now !</li>
<li>Some RAM tricks such than converting the training array column by column to float32 before feeding it to the LGB model and avoid RAM explosion</li>
<li>The potential of SHAP for feature selection</li>
<li>The possibility to use conditional features rather than doing One hot Encoding</li>
<li>…</li>
</ul>
<p>For me, one of the key challenges was to not get lost in the amount of code I was producing, and it happened more than once that I decided to start again from a blank page to have a clear view of what I was actually doing! If there is one thing I will try to handle better next time, it is global code organization. I think I lost a lot of time (too much) on it.</p>
<p>Separating the code in 3 parts worked very well for me, with one part handling creation of user history, one part updating it, and one part creating the features based on the user history.</p>
<p>Apart from all the classical features that have already been discussed, one of my model's key differentiators was the Bayesian probability index that I built and discussed <a href="https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/207148" target="_blank">there</a>. Following <a href="https://www.kaggle.com/authman" target="_blank">@authman</a>'s comment, I created an efficient cluster-based feature, illustrated below:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1161354%2F9a1323fb5376e39469c104cae83ac379%2Fclusterriid.png?generation=1610124766536795&alt=media" alt=""></p>
<p>For those interested, I created a detailed kernel with all my inference pipeline and feature engineering. You can check it out here; there are probably one or two things worth picking up!<br>
<a href="https://www.kaggle.com/bowaka/riid-lgb-single-model-0-793-full-summary" target="_blank">Notebook here</a></p>
<p>I will personally continuing to work on the datasets and take the opportunity to improve my knowledge and expertise about transformers (that is currently close to 0 !)</p>
<p>Yes I saw you in the leaderboard, I'm sorry you missed the silver from so few… Next will be the good one! </p> |
Riiid Answer Correctness Prediction | Only LGBMs. Private 71st LB 0.800 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I defined it as</p>
<p>'previous level0 lgbm prediction error' = 'answered_correctly' - 'lgbm0_prediction'.</p>
<p>I also tried its absolute value version but that performed worse.</p>
<p>For each user and for each selected feature, I simply keep last 10 records as a list. I didn't use pandas groupby at inference. </p>
<h1>Good competition!</h1>
<p>First of all, I'd like to thank the organizers and Kaggle for offering this wonderful competition. The amount of data and the inference API made this competition more challenging, interesting, and practical.</p>
<h1>LGBM only solution</h1>
<p>I tried several models including LightGBM, CatBoost, and SAKT. In December, I decided to use only LGBM models mainly because of my limited GPU resource. (I bet some teams used only Kaggle GPUs and made better models.)</p>
<h1>Stack</h1>
<p>My final model is a stack of LGBM models.</p>
<h2>Level 0</h2>
<p>The first level of the stack is an LGBM model with typical features such as timestamp lag, rolling mean accuracy, and whether a user saw the same quiz before. I also created a custom quiz ranking.</p>
<p>The level 0 model itself trained with half of the data scored the public LB 0.791.</p>
<h2>Level 1</h2>
<p>In addition to the same features as level 0, I added</p>
<ul>
<li>level 0 lgbm prediction</li>
<li>previous level 0 lgbm prediction error</li>
<li>previous 10 records of these features + some features</li>
</ul>
<p>The level 1 model trained with the other half of the data scored the public LB 0.798. I created two level 1 models changing the number of leaves 256 and 500. Averaging them gave me LB 0.799.</p>
<p>The private score was 0.8000.</p>
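<p>A rough sketch of this two-level stack on a split of the data (the variable names are mine, not the author's):</p>
<pre><code>import lightgbm as lgb

lgbm0 = lgb.train(params, lgb.Dataset(X_a, y_a))             # level 0 trained on the first half

pred0 = lgbm0.predict(X_b)                                   # level 0 predictions on the other half
X_b = X_b.copy()
X_b['lgbm0_prediction'] = pred0
err = y_b - pred0                                            # 'answered_correctly' - 'lgbm0_prediction'
X_b['prev_lgbm0_error'] = err.groupby(X_b['user_id']).shift(1)   # only past errors are visible

lgbm1 = lgb.train(params, lgb.Dataset(X_b, y_b))             # level 1 trained on the second half
</code></pre>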
<h2>Wish I could do</h2>
<p>As I couldn't optimize the memory usage, there are several things I couldn't include in the solution.</p>
<p>For example, the quiz history for each user, such as has_see_this_quiz_before, has_answered_this_quiz_before, how_long_has_past_since_last_saw_this_quiz. To save memory, I was using 'content_id' mod p, where I took p=409. Of course, this introduces some noise. There were discussions about using Sqlite3, but I didn't give it a try. In my final model, I didn't use mod p; instead, I only kept track of the saw_this_quiz_before and how_long_has_past_since_last_saw_this_quiz features. My notebook used almost all 16GB of memory. If I added one more model for an ensemble, it gave me a memory error.</p>
<h1>Finally</h1>
<p>I like this challenging competition. It not only requires data science skills but also engineering skills to make a model work at inference. I look forward to learning new things from your solutions! If you have questions or comments, please leave a comment below!</p>
<p><strong>Thank you for reading!</strong></p>
<p><a href="https://www.kaggle.com/sishihara" target="_blank">@sishihara</a> Thank you for your comment. Let me explain what I mean by custom quiz ranking. By a quiz, I meant a question.</p>
<p>For each user, the first few quizzes are for assessment. Thus I thought that how a user answers the first 20 quizzes might classify students' levels. (Here 20 is just my arbitrary choice.) So students were classified into 2**20 classes. Then each class was given a score based on its binary representation (0 for incorrect, 1 for correct, so each class is represented by a sequence of 0s and 1s).</p>
<p>Now, I take the average of students' rank defined above for each quiz. This is the custom quiz ranking.</p> |
Riiid Answer Correctness Prediction | Public/Private 13th solution (team takoi + kurupical) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi <a href="https://www.kaggle.com/adityaecdrid" target="_blank">@adityaecdrid</a> !</p>
<blockquote>
<p>What's same_question_index and target?</p>
</blockquote>
<p>-> same as takoi's distance between the same content_id</p>
<blockquote>
<p>LIT structure in EncoderLayer. What's LIT?</p>
</blockquote>
<p>-> Sorry, I forget to explain detail. <a href="https://arxiv.org/pdf/2012.14164.pdf" target="_blank">https://arxiv.org/pdf/2012.14164.pdf</a>. My implements is below. TransformerEncoder is copy from torch.nn.TransformerEncoder, I just replaced the linear part of src2 with a LITLayer.</p>
<pre><code>from typing import Optional

import torch
import torch.nn as nn
# _get_activation_fn is torch's internal helper in torch.nn.modules.transformer (torch 1.x)
from torch.nn.modules.transformer import _get_activation_fn


class LITLayer(nn.Module):
    """
    https://arxiv.org/pdf/2012.14164.pdf
    """
    def __init__(self, input_dim, embed_dim, dropout, activation):
        super(LITLayer, self).__init__()
        self.input_dim = input_dim
        self.embed_dim = embed_dim
        self.activation = activation
        self.linear1 = nn.Linear(input_dim, embed_dim)
        self.lstm = nn.LSTM(embed_dim, embed_dim)
        self.linear2 = nn.Linear(embed_dim, input_dim)
        self.norm_lstm = nn.LayerNorm(embed_dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x):
        x = self.linear1(x)
        x = self.activation(x)
        x = self.dropout(x)
        x, _ = self.lstm(x)
        x = self.norm_lstm(x)
        x = self.dropout(x)
        x = self.linear2(x)
        return x


class TransformerEncoderLayer(nn.Module):
    r"""TransformerEncoderLayer is made up of self-attn and feedforward network.
    This standard encoder layer is based on the paper "Attention Is All You Need".
    Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez,
    Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in
    Neural Information Processing Systems, pages 6000-6010. Users may modify or implement
    in a different way during application.
    Args:
        d_model: the number of expected features in the input (required).
        nhead: the number of heads in the multiheadattention models (required).
        dim_feedforward: the dimension of the feedforward network model (default=2048).
        dropout: the dropout value (default=0.1).
        activation: the activation function of intermediate layer, relu or gelu (default=relu).
    Examples::
        >>> encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)
        >>> src = torch.rand(10, 32, 512)
        >>> out = encoder_layer(src)
    """
    def __init__(self, d_model, nhead, dim_feedforward=256, dropout=0.1, activation="relu"):
        super(TransformerEncoderLayer, self).__init__()
        self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout)
        # Implementation of Feedforward model
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.dropout1 = nn.Dropout(dropout)
        self.dropout2 = nn.Dropout(dropout)
        self.activation = _get_activation_fn(activation)
        self.lit_layer = LITLayer(input_dim=d_model, embed_dim=dim_feedforward, dropout=dropout, activation=self.activation)

    def __setstate__(self, state):
        if 'activation' not in state:
            state['activation'] = nn.ReLU
        super(TransformerEncoderLayer, self).__setstate__(state)

    def forward(self, src: torch.Tensor, src_mask: Optional[torch.Tensor] = None, src_key_padding_mask: Optional[torch.Tensor] = None) -> torch.Tensor:
        r"""Pass the input through the encoder layer.
        Args:
            src: the sequence to the encoder layer (required).
            src_mask: the mask for the src sequence (optional).
            src_key_padding_mask: the mask for the src keys per batch (optional).
        Shape:
            see the docs in Transformer class.
        """
        src2 = self.self_attn(src, src, src, attn_mask=src_mask,
                              key_padding_mask=src_key_padding_mask)[0]
        src = src + self.dropout1(src2)
        src = self.norm1(src)
        # src2 = self.linear2(self.dropout(self.activation(self.linear1(src))))  # original feedforward, replaced below
        src2 = self.lit_layer(src)
        src = src + self.dropout2(src2)
        src = self.norm2(src)
        return src
</code></pre>
<p>First of all, thanks to Hoon Pyo (Tim) Jeon and Kaggle team for such an interesting competition.<br>
And congratulates to all the winning teams!</p>
<p>The following is the team takoi + kurupical solution.</p>
<p><br><br>
<br></p>
<h1>Team takoi + kurupical Overview</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1146523%2F3f2943b27feb7d884472c201cd032d5b%2F1.jpg?generation=1610075778291231&alt=media" alt=""></p>
<h1>validation</h1>
<p><a href="https://www.kaggle.com/tito" target="_blank">@tito</a>'s validation strategy. <a href="https://www.kaggle.com/its7171/cv-strategy" target="_blank">https://www.kaggle.com/its7171/cv-strategy</a></p>
<p><br><br>
<br></p>
<h1><strong>kurupical side</strong></h1>
<p>I would like to thank three kagglers.<br>
<a href="https://www.kaggle.com/takoi" target="_blank">@takoi</a> for inviting me to form a team. If it weren't for you, I couldn't reach this rank!<br>
<a href="https://www.kaggle.com/limerobot" target="_blank">@limerobot</a> for sharing DSB 3rd solution. I'm beginner in transformer for time-series data, so I learned a lot from your solution!<br>
<a href="https://www.kaggle.com/wangsg" target="_blank">@wangsg</a> for sharing notebook <a href="https://www.kaggle.com/wangsg/a-self-attentive-model-for-knowledge-tracing.!" target="_blank">https://www.kaggle.com/wangsg/a-self-attentive-model-for-knowledge-tracing.!</a><br>
I used this notebook as a baseline and finally get 0.809 CV for single transformer.</p>
<h2>model</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1146523%2F411c8ee779411bb443284b572980ef21%2F2.jpg?generation=1610075796425059&alt=media" alt=""></p>
<h2>hyper parameters</h2>
<ul>
<li>20epochs</li>
<li>AdamW(lr=1e-3, weight_decay=0.1)</li>
<li>linear_with_warmup(lr=1e-3, warmup_epoch=2)</li>
</ul>
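<p>A minimal sketch of that optimizer/scheduler setup, assuming "linear_with_warmup" means a linear ramp over the first 2 epochs followed by a linear decay (the model and steps-per-epoch below are placeholders):</p>
<pre><code>import torch

model = torch.nn.Linear(10, 1)          # stand-in for the transformer
steps_per_epoch = 1000                  # assumption
total_steps = 20 * steps_per_epoch
warmup_steps = 2 * steps_per_epoch

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.1)

def linear_with_warmup(step):
    # linear ramp during warmup, then linear decay to zero
    ramp = min(1.0, step / max(1, warmup_steps))
    decay = max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
    return min(ramp, decay)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=linear_with_warmup)
# call scheduler.step() once per optimizer step during training
</code></pre>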
<h2>worked for me</h2>
<ul>
<li>baseline (SAKT, <a href="https://www.kaggle.com/wangsg/a-self-attentive-model-for-knowledge-tracing" target="_blank">https://www.kaggle.com/wangsg/a-self-attentive-model-for-knowledge-tracing</a>)</li>
<li>use all data (this notebook use only last 100 history per user)</li>
<li>embedding concat (not add) and Linear layer after the concatenated embeddings (@limerobot DSB2019 3rd solution) (+0.03) </li>
<li>Add min(timestamp_delta//1000, 300) (+0.02) (see the sketch after this list)</li>
<li>Add "index that user answered same content_id at last" (+0.005)</li>
<li>Transformer Encoder n_layers 2 -> 4 (+0.002)</li>
<li>weight_decay 0.01 -> 0.1 (+0.002)</li>
<li>LIT structure in EncoderLayer (+0.002)</li>
</ul>
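<p>A toy pandas sketch of the two sequence features mentioned in the list above (the capped timestamp delta and the index of the user's previous attempt on the same content); the column names and values are assumptions, not the author's exact code:</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({
    "user_id":    [1, 1, 1, 2, 2],
    "content_id": [10, 11, 10, 10, 12],
    "timestamp":  [0, 30_000, 90_000, 0, 45_000],
})

# min(timestamp_delta//1000, 300): capped gap (in seconds) since the user's previous interaction
delta = df.groupby("user_id")["timestamp"].diff().fillna(0)
df["ts_delta_capped"] = np.minimum(delta // 1000, 300).astype(int)

# index of the user's previous attempt on the same content_id (-1 if never seen before)
df["row_in_user"] = df.groupby("user_id").cumcount()
df["prev_idx_same_content"] = (
    df.groupby(["user_id", "content_id"])["row_in_user"].shift().fillna(-1).astype(int)
)
print(df)
</code></pre>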
<h2>not worked for me</h2>
<p>I did over 300 experiments, and only about 20 of them were successful.</p>
<ul>
<li>SAINT structure (Transformer Encoder/Decoder)</li>
<li>Positional Encoding</li>
<li>Consider timeseries<ul>
<li>timedelta.cumsum() / timedelta.sum()</li>
<li>np.log10(timedelta.cumsum()).astype(int) as category feature and embedding<br>
etc…</li></ul></li>
<li>optimizer AdaBelief, LookAhead(Adam), RAdam</li>
<li>more n_layers (4 => 6), larger embedding_dimension (256 => 512)</li>
<li>output only the end of the sequence</li>
<li>large binning for elapsed_time/timedelta (500, 1000, etc…)</li>
<li>treat elapsed_time and timedelta as continuous </li>
</ul>
<p><br><br>
<br></p>
<h1><strong>takoi side</strong></h1>
<p>I made 1 LightGBM and 8 NN models. The model that combined Transformer and LSTM had the best CV. Here is the architecture and a brief description.<br>
</p>
<h2>Transformer + LSTM</h2>
<p><br>
<br><br>
<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1146523%2Fcab7462101930cf01ccbdd867e3b8a68%2F2%20(1).jpeg?generation=1610097396888437&alt=media" alt=""></p>
<h3>features</h3>
<p>I used 17 features. 15 features were computed per user_id. 2 features were computed per content_id.</p>
<h4>main features</h4>
<ul>
<li>sum of answered correctly</li>
<li>average of answered correctly</li>
<li>sum of answered correctly for tag_user_id</li>
<li>average of answered correctly for tag_user_id</li>
<li>lag time</li>
<li>lag time of same content_id</li>
<li>previous answered correctly for the same content_id</li>
<li>distance between the same content_id</li>
<li>average of answered correctly for each content_id</li>
<li>average of lag time for each content_id<br>
<br>
<br><br>
</li>
</ul>
<h2>LightGBM</h2>
<p>I used 97 features. The following are the main features.</p>
<ul>
<li>sum of answered correctly</li>
<li>average of answered correctly</li>
<li>sum of answered correctly for tag_user_id</li>
<li>average of answered correctly for tag_user_id</li>
<li>lag time</li>
<li>lag time of same part</li>
<li>lag time of same content_id</li>
<li>previous answered correctly for the same content_id</li>
<li>distance between the same content_id</li>
<li>Word2Vec features of content_id</li>
<li>decayed features (average of answered correctly)</li>
<li>number of consecutive times with the same user answer</li>
<li>flag if the user answer being answered in succession matches the correct answer</li>
<li>average of answered correctly for each content_id</li>
<li>average of lag time for each content_id</li>
</ul> |
Riiid Answer Correctness Prediction | SAINT+ Model with code [Private 0.800 / 68th place] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for your response.</p>
<p>I tried <code>lag_time</code> in the decoder, but the validation score was lower than with it in the encoder. Since <code>lag_time</code> is information that is known before answering the question, I think it is better to feed it in from the encoder side.</p>
<p>You are welcome.</p>
<p><code>user_count</code> is categorized as follows.</p>
<p>1 - 3 (part 1) : 1<br>
4 (part 2) : 2<br>
5 - 7 (part 3) : 3<br>
8 - 10 (part 4) : 4<br>
11 - 13 (part 4) : 5<br>
14 - 16 (part 4) : 6<br>
17 - 22 (part 5) : 7<br>
23 - 26 (part 6) : 8<br>
27 - 30 (part 7) : 9<br>
30 - : 10</p>
<p>We found that our model did not score very well for the first 30 questions. Therefore, we decided to add <code>user_count</code>. This increased the Public score by 0.03.</p>
<p>This is one of important contributions of my partner, <a href="https://www.kaggle.com/shuntarotanaka" target="_blank">@shuntarotanaka</a> . Please see <a href="https://www.kaggle.com/shuntarotanaka/riiid-the-first-30-questions-are-proficiency-test" target="_blank">this notebook</a> for details.</p>
<p><code>prior_question_elapsed_time</code> and <code>prior_question_had_explanation</code> are information about "prior" question. In other words, that information is additional information about the user's response. For example, even if the same correct answer is given, the meaning of the correct answer differs depending on whether the correct answer is taken over a long period of time or in a short time. Therefore, enter it in the decoder with <code>answered_correctly</code>. Please see <a href="https://www.kaggle.com/marisakamozz/prior-question-is-info-about-prior-question" target="_blank">this notebook</a> for details.</p>
<p>Thank you for your response.</p>
<p>I omitted the explanation, the input of the decoder is shifted by +1 in SAINT+ to avoid target leak.</p>
<p>prior_question_elapsed_time and prior_question_had_explanation are shifted by -1, so the net shift for them becomes 0, because prior_question_elapsed_time and prior_question_had_explanation are information about the user's response to the "prior" question.</p>
<p>Thank you, <a href="https://www.kaggle.com/doctorkael" target="_blank">@doctorkael</a>!</p>
<p>I'd like to share our solution. All codes are available on <a href="https://github.com/marisakamozz/riiid" target="_blank">GitHub</a>. You can reproduce our submission with <a href="https://www.kaggle.com/marisakamozz/riiid-saint-solution" target="_blank">this notebook</a>.</p>
<p>I hope it will be helpful to you. Thank you.</p>
<h1>Model Summary</h1>
<ul>
<li><a href="https://arxiv.org/abs/2010.12042" target="_blank">SAINT+</a> based model</li>
<li>used train.csv only.</li>
<li>used questions only.</li>
</ul>
<p>We didn't use lectures, because lectures didn't improve validation score.</p>
<h2>Input</h2>
<p>Encoder input:</p>
<ul>
<li>positional embedding</li>
<li>question_id embedding</li>
<li>lag embedding (= timestamp - last timestamp)</li>
<li>user_count: number of questions the user has solved</li>
</ul>
<p>lag feature is categorized by 100 quantiles.<br>
user_count is categorized by <a href="https://www.kaggle.com/shuntarotanaka/riiid-the-first-30-questions-are-proficiency-test" target="_blank">proficiency test</a>.</p>
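<p>A small sketch of how "categorized by 100 quantiles" could be implemented (bin edges fitted on training data and reused at inference; the toy lag values below are an assumption):</p>
<pre><code>import numpy as np
import pandas as pd

lag_ms = pd.Series(np.random.default_rng(0).exponential(scale=60_000, size=10_000))

edges = np.unique(np.quantile(lag_ms, np.linspace(0, 1, 101)))          # 100 bins
lag_bin = np.clip(np.searchsorted(edges, lag_ms, side="right") - 1, 0, len(edges) - 2)
print(lag_bin.min(), lag_bin.max())   # integer category ids fed to an embedding layer
</code></pre>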
<p>Decoder input:</p>
<ul>
<li>positional embedding</li>
<li>answered_correctly</li>
<li>prior_question_elapsed_time</li>
<li>prior_question_had_explanation</li>
</ul>
<p>prior_question_elapsed_time and prior_question_had_explanation are shifted -1.<br>
prior_question_elapsed_time is categorized by 100 quantiles.</p>
<h2>Model Architecture</h2>
<p>Embedding Layer + Tranformer + Feed Forward Layer (with residual connection)</p>
<p>Weights of embedding layers are sampled from a normal distribution with a standard deviation of 0.01.</p>
<p>Transformer architecture:</p>
<ul>
<li>sequence length: 150</li>
<li>number of dimensions: 128</li>
<li>number of encoder layers: 3</li>
<li>number of decoder layers: 3 (self-attention) + 3 (encoder-attention)</li>
<li>number of heads: 8</li>
</ul>
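<p>A rough PyTorch sketch of those sizes. The "3 (self-attention) + 3 (encoder-attention)" decoder is read here as 3 standard decoder layers, each containing a self-attention and an encoder-attention block; <code>dim_feedforward</code>, <code>dropout</code> and the output head are assumptions, since they are not fully specified above:</p>
<pre><code>import torch.nn as nn

question_emb = nn.Embedding(13523, 128)
nn.init.normal_(question_emb.weight, mean=0.0, std=0.01)   # std 0.01, as described above

transformer = nn.Transformer(
    d_model=128, nhead=8,
    num_encoder_layers=3, num_decoder_layers=3,
    dim_feedforward=512, dropout=0.1,
)
head = nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 1))  # simplified output head
</code></pre>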
<h2>Training</h2>
<ul>
<li>batch size: 256</li>
<li>optimizer: Adam</li>
<li>training time is almost 20 hours on 1 K80 GPU machine.</li>
<li>early stopping based on the AUC score of 1% held-out unseen users (but it didn't work well)</li>
</ul> |
Riiid Answer Correctness Prediction | [0.801 Private, 60th place] Single LGB using 1% of data | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I might share the code after I get a chance to clean it up. In past experiments I got ~0.006-0.01 boost going from 1m to 10m rows and others have reported a similar boost going from 10m rows to the full data. </p>
<p>Thanks to the organizers of this very interesting competition and congrats to the winners. My LB score (0.800 public / 0.801 private) was achieved using a single LGB model trained on 1m rows. It wasn't my intention to only use 1m rows, but I had some bugs that I fixed too late to retrain on more data in time. Nevertheless, I'm sharing my solution in case it ends up being useful to anyone.</p>
<h1>Features</h1>
<p>The final model used 220 features that fit into the following categories, in rough descending order of importance:<br>
note: AC = answered_correctly</p>
<ul>
<li>Question-Question matrix (QQM) features: out of all users who answered A on question X, how many of them get question Y right? I built a 13523 x 13523 x 4 matrix of all the probabilities, then calculated user stats on their last 3, 10, 30, 50, 100, 200 questions. I also created a separate set of features weighting by Mahalanobis distance between the past and current question embeddings (see section on PEBG embeddings). </li>
<li>question's overall accuracy/difficulty across all users, number of responses</li>
<li>user's accuracy, number of responses across their entire history & current session (where a new session is created if the user is idle for more than 5 min)</li>
<li>question metadata, i.e. tags, part, index in bundle</li>
<li>elapsed time and lag time - based features. Ratios were good here, e.g. ratio of prior elapsed time to lag time, user's average elapsed time to average lag time. I also made a simple model to predict elapsed time of the current question</li>
<li>PEBG embedding-based features based on this paper: <a href="https://arxiv.org/pdf/2012.05031.pdf" target="_blank">https://arxiv.org/pdf/2012.05031.pdf</a>. I took the current question embedding, the user's average embedding overall, and the average embedding over the user's correct and incorrect responses. Mahalanobis distance between the current question's embedding to the user's correct and incorrect embeddings and the ratio between correct and incorrect distance were all important. </li>
<li>features from Ednet dataset (see below)</li>
<li>stats on user's past attempts on the same question, e.g. most recent AC, average AC, number of past attempts, time since last attempt, whether user saw explanation on last attempt</li>
<li>stats on user's past attempts on questions with the same tags</li>
<li>user's accuracy normalized by question difficulty across different windows, e.g. average of user's residuals for their last 3, 10, 30, 50, 100, 200 questions; residual = AC - question accuracy</li>
<li>user's accuracy trend over their last 4, 12, 30, 50, 100, 200 questions</li>
<li>user's performance on diagnostic questions: the 30 question sequence starting with 7900, 7876, 175, … is usually asked to each user at the beginning. I made 30 features for the user's AC on each question</li>
<li>lecture features: time since user watched a lecture with the same tag, time spent watching lecture compared to lecture duration (from Ednet data)</li>
</ul>
<p>I applied additive smoothing to target encoding features to guard against small samples (this noticeably helped the QQM features). </p>
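<p>A minimal sketch of additive smoothing as used for the target-encoded features (the prior strength of 30 is an assumption):</p>
<pre><code>def smoothed_accuracy(correct_sum, count, global_mean, strength=30):
    # shrink small-sample rates toward the global mean so rare keys aren't over-trusted
    return (correct_sum + strength * global_mean) / (count + strength)

# a question seen 3 times, all correct, with a global accuracy of 0.65
print(smoothed_accuracy(3, 3, 0.65))   # ~0.68 instead of 1.0
</code></pre>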
<h1>Ednet dataset</h1>
<p>As you probably know, the competition data was taken from Riiid's Ednet database, from which a large public dataset has been open sourced at <a href="https://github.com/riiid/ednet" target="_blank">https://github.com/riiid/ednet</a>. At first glance, the overlap between the competition data and the open source data is not obvious - for example, the open source data has 784,309 users whereas the competition data only has 393,656 and the contents don't look the same either. But the Ednet data comes with question and lecture metadata files, and by comparing them to the metadata files we have in the competition, it's quite easy to map the questions, tags, and users between the datasets. The competition data seems to be a bit more recent, and has mostly filtered out users who only answered a few questions, but 350k+ out of ~393k users appear in both datasets. That's important because most of the private test data is from users appearing in the training set, so you can use the Ednet data to calculate some features on those users that you wouldn't otherwise have. Examples are:</p>
<ul>
<li>Ednet data has real timestamps, not shifted to start from zero. I calculated the hour, day, and week in KST</li>
<li>how often the user switches answers</li>
<li>Ground truth video lengths for lectures, which can be used in lecture features</li>
<li>whether user is primarily on mobile or web</li>
</ul>
<p>Also, I extracted ~6m rows from the Ednet data that was filtered out of the competition data. These were mostly users with few interactions so I didn't add them to the training set, but I did use them to pre-populate the QQM which helped a bit. </p>
<h1>Pipeline & Remarks</h1>
<p>I used a loop-based accumulation framework which updated user states every iteration. To strike a good balance between speed and memory usage, I used dictionaries to look up users and numpy arrays to store the accumulated features. This worked well in general and I could use the same code for inference, which took ~3 hours in the end with 220 features and used about 9GB RAM.</p>
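<p>A minimal sketch of that state store (a dictionary for user lookup, numpy arrays for the accumulated counters); the names and sizes are assumptions:</p>
<pre><code>import numpy as np

MAX_USERS = 500_000
user_row = {}                                    # user_id -> row in the state arrays
answer_count = np.zeros(MAX_USERS, dtype=np.int32)
correct_count = np.zeros(MAX_USERS, dtype=np.int32)

def update_user(user_id, answered_correctly):
    row = user_row.setdefault(user_id, len(user_row))
    answer_count[row] += 1
    correct_count[row] += answered_correctly

def user_accuracy(user_id):
    row = user_row.get(user_id)
    if row is None or answer_count[row] == 0:
        return np.nan
    return correct_count[row] / answer_count[row]

update_user(115, 1); update_user(115, 0)
print(user_accuracy(115))                        # 0.5
</code></pre>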
<p>In the end, I wasn't able to train on the entire dataset, or even 10m rows due to bugs that weren't fixed until the morning of the deadline. One particularly frustrating one was from the very first piece of code I ran in the competition: the code to read in the train.csv file. I had copied a line to specify the data types from a public notebook assuming it would be the same one from the Competition Starter, but the notebook creator had changed prior_question_elapsed_time to float16 instead of float32. This resulted in all values in the column >=65536 being converted to np.inf. Lesson learned: never blindly copy from a public notebook! </p>
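<p>The dtype bug is easy to reproduce: float16 cannot represent values above 65504, so larger elapsed times silently overflow.</p>
<pre><code>import numpy as np

print(np.float16(70000.0))   # inf  (float16 tops out at 65504)
print(np.float32(70000.0))   # 70000.0
</code></pre>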
<p>Despite this, I really enjoyed the competition as by taking the feature engineering approach, it felt like I was uncovering new things about the data all the time. </p> |
Riiid Answer Correctness Prediction | [0.805 Private, 42nd place] LGB + 5 RAINT+ Ensemble | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Split the validation set into train/test sets, and use hyperparameter tuning to get the weights. I also tried adding another logistic regression layer that takes the predictions of the 6 models as inputs, but the performance was not good.</p>
<p>Using GPU, the bottleneck is not the calculation on GPU, but the feature engineering part to generate the inputs of SAINT+ models. This part should be fully optimized.</p>
<p>Thank you :)</p>
<p>This is the 2nd competition I have attended and the 1st competition I got top 50, share the final Model I used. Please free feel to comment.<br>
<strong>Ensemble</strong><br>
<strong>AUC</strong> Private: 0.805, Public: 0.803</p>
<pre><code>saint_models = [saint1, saint2, saint3, saint4, saint5]
saint_weights = [0.09, 0.091, 0.116, 0.168, 0.368]
lgb_w = 0.167
</code></pre>
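<p>A sketch of how such a weighted blend could be computed at inference time (the prediction arrays below are placeholders); note that the weights above sum to 1.0:</p>
<pre><code>import numpy as np

saint_weights = [0.09, 0.091, 0.116, 0.168, 0.368]
lgb_w = 0.167

preds_saint = [np.random.rand(5) for _ in range(5)]   # placeholders for the 5 SAINT+ outputs
pred_lgb = np.random.rand(5)                          # placeholder for the LightGBM output

blend = lgb_w * pred_lgb
for w, p in zip(saint_weights, preds_saint):
    blend = blend + w * p
print(blend)
</code></pre>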
<p>With <strong>fully optimized implementation</strong>, the online inference running time with these <strong>6</strong> models is about <strong>3</strong> hours(<strong>GPU</strong> on)</p>
<h2>LGBM</h2>
<p><strong>73 features</strong> AUC 0.787 in both public and private</p>
<pre><code> 0:{'name': 'prior_question_elapsed_time'}
1:{'name': 'prior_question_had_explanation', 'dim_type': 'categorical'}
2:{'name': 'ques_part', 'dim_type': 'categorical'}
3:{'name': 'user_ans_acc', 'depends': ['user_ans_corr_cnt', 'user_ans_cnt'], 'cal_func': 'div', 'desc': 'user questions answering accuracy'}
4:{'name': 'user_ans_cnt', 'desc': 'number of questions the user answered'}
5:{'name': 'user_ans_corr_cnt', 'desc': 'number of correct answers the user gave'}
6:{'name': 'user_ques_part_1_cnt', 'desc': 'number of answers the user answered that are part 1'}
7:{'name': 'user_ques_part_1_corr_cnt', 'desc': 'number of correct answers the user gave to the questions that are part 1'}
8:{'name': 'user_ques_part_1_acc', 'depends': ['user_ques_part_1_corr_cnt', 'user_ques_part_1_cnt'], 'cal_func': 'div'}
9:{'name': 'user_ques_part_2_cnt'}
10:{'name': 'user_ques_part_2_corr_cnt'}
11:{'name': 'user_ques_part_2_acc', 'depends': ['user_ques_part_2_corr_cnt', 'user_ques_part_2_cnt'], 'cal_func': 'div'}
12:{'name': 'user_ques_part_3_cnt'}
13:{'name': 'user_ques_part_3_corr_cnt'}
14:{'name': 'user_ques_part_3_acc', 'depends': ['user_ques_part_3_corr_cnt', 'user_ques_part_3_cnt'], 'cal_func': 'div'}
15:{'name': 'user_ques_part_4_cnt'}
16:{'name': 'user_ques_part_4_corr_cnt'}
17:{'name': 'user_ques_part_4_acc', 'depends': ['user_ques_part_4_corr_cnt', 'user_ques_part_4_cnt'], 'cal_func': 'div'}
18:{'name': 'user_ques_part_5_cnt'}
19:{'name': 'user_ques_part_5_corr_cnt'}
20:{'name': 'user_ques_part_5_acc', 'depends': ['user_ques_part_5_corr_cnt', 'user_ques_part_5_cnt'], 'cal_func': 'div'}
21:{'name': 'user_ques_part_6_cnt'}
22:{'name': 'user_ques_part_6_corr_cnt'}
23:{'name': 'user_ques_part_6_acc', 'depends': ['user_ques_part_6_corr_cnt', 'user_ques_part_6_cnt'], 'cal_func': 'div'}
24:{'name': 'user_ques_part_7_cnt'}
25:{'name': 'user_ques_part_7_corr_cnt'}
26:{'name': 'user_ques_part_7_acc', 'depends': ['user_ques_part_7_corr_cnt', 'user_ques_part_7_cnt'], 'cal_func': 'div'}
27:{'name': 'ques_acc'}
28:{'name': 'ques_cnt'}
29:{'name': 'ques_corr_cnt'}
30:{'name': 'ques_tags_len', 'dim_type': 'categorical'}
31:{'name': 'user_acc_ques_acc_hmean', 'depends': ['user_ans_acc', 'ques_acc'], 'cal_func': 'harmonic_mean'}
32:{'name': 'ques_part_acc'}
33:{'name': 'user_ques_part_1_acc_ques_part_acc_hmean', 'depends': ['user_ques_part_1_acc', 'ques_part_acc'], 'cal_func': 'harmonic_mean'}
34:{'name': 'user_ques_part_2_acc_ques_part_acc_hmean', 'depends': ['user_ques_part_2_acc', 'ques_part_acc'], 'cal_func': 'harmonic_mean'}
35:{'name': 'user_ques_part_3_acc_ques_part_acc_hmean', 'depends': ['user_ques_part_3_acc', 'ques_part_acc'], 'cal_func': 'harmonic_mean'}
36:{'name': 'user_ques_part_4_acc_ques_part_acc_hmean', 'depends': ['user_ques_part_4_acc', 'ques_part_acc'], 'cal_func': 'harmonic_mean'}
37:{'name': 'user_ques_part_5_acc_ques_part_acc_hmean', 'depends': ['user_ques_part_5_acc', 'ques_part_acc'], 'cal_func': 'harmonic_mean'}
38:{'name': 'user_ques_part_6_acc_ques_part_acc_hmean', 'depends': ['user_ques_part_6_acc', 'ques_part_acc'], 'cal_func': 'harmonic_mean'}
39:{'name': 'user_ques_part_7_acc_ques_part_acc_hmean', 'depends': ['user_ques_part_7_acc', 'ques_part_acc'], 'cal_func': 'harmonic_mean'}
40:{'name': 'user_last_incorr_till_now'}
41:{'name': 'user_last_corr_till_now'}
42:{'name': 'user_ts_diff_lag_1'}
43:{'name': 'user_ts_diff_lag_2'}
44:{'name': 'user_ts_diff_lag_3'}
45:{'name': 'user_last_10_corr_cnt'}
46:{'name': 'user_acc_mov_avg'}
47:{'name': 'user_elapsed_time_sum'}
48:{'name': 'user_viewed_explanations_sum'}
49:{'name': 'user_elapsed_time_avg', 'depends': ['user_elapsed_time_sum', 'user_ans_cnt'], 'cal_func': 'div'}
50:{'name': 'user_viewed_explanations_avg', 'depends': ['user_viewed_explanations_sum', 'user_ans_cnt'], 'cal_func': 'div'}
51:{'name': 'ques_tags_id', 'dim_type': 'categorical'}
52:{'name': 'ques_tags_acc'}
53:{'name': 'user_ques_tags_cnt'}
54:{'name': 'user_ques_tags_corr_cnt'}
55:{'name': 'user_ques_tags_acc', 'depends': ['user_ques_tags_corr_cnt', 'user_ques_tags_cnt'], 'cal_func': 'div'}
56:{'name': 'user_ques_tags_max_cnt'}
57:{'name': 'user_ques_tags_max_corr_cnt'}
58:{'name': 'user_ques_tags_max_acc', 'depends': ['user_ques_tags_max_corr_cnt', 'user_ques_tags_max_cnt'], 'cal_func': 'div'}
59:{'name': 'user_ques_tags_min_cnt'}
60:{'name': 'user_ques_tags_min_corr_cnt'}
61:{'name': 'user_ques_tags_min_acc', 'depends': ['user_ques_tags_min_corr_cnt', 'user_ques_tags_min_cnt'], 'cal_func': 'div'}
62:{'name': 'ques_prior_elapsed_time_avg'}
63:{'name': 'ques_prior_had_explan_avg'}
64:{'name': 'ques_prior_had_explan_acc'}
65:{'name': 'ques_prior_hadnot_explan_acc'}
66:{'name': 'ques_elapsed_time_avg'}
67:{'name': 'ques_elapsed_time_std'}
68:{'name': 'ques_explan_avg'}
69:{'name': 'ques_u_ans_std'}
70:{'name': 'ques_n_answers'}
71:{'name': 'user_lect_viewed'}
72:{'name': 'user_lect_viewed_over_ques_answered', 'depends': ['user_lect_viewed', 'user_ans_cnt'], 'cal_func': 'div'}
</code></pre>
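<p>Several of the derived features above use <code>cal_func: 'harmonic_mean'</code> on two accuracies; a tiny sketch of that combination (the epsilon is an assumption to avoid division by zero):</p>
<pre><code>def harmonic_mean(a, b, eps=1e-8):
    # dominated by the smaller of the two accuracies
    return 2 * a * b / (a + b + eps)

print(harmonic_mean(0.9, 0.5))   # ~0.643
</code></pre>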
<h2>RAINT+</h2>
<p><strong>1</strong> RAINT+ with 4 encoders/decoders<br>
encoder side inputs: questions, question parts, timestamp lags<br>
decoder side inputs: response, timestamp lags<br>
Private: 0.799, Public: 0.797</p>
<pre><code>saint1_cfg = SaintModelConfig(
n_enc=4,
n_dec=4,
enc_heads=8,
dec_heads=8,
seq_len=100,
n_dims=128,
n_ques=13523,
n_parts=7,
n_ets=301,
n_lts=154,
n_resp=2,
n_explains=2,
emb_add_fn="add",
dropout=0.2,
)
</code></pre>
<p><strong>2</strong> RAINT+ with 6 encoders/decoders<br>
encoder/decoder inputs are the same as the 1st one<br>
Concatenate the embeddings of the inputs and multiply by W (shape: N x 128)<br>
Private: 0.798, Public: 0.796</p>
<pre><code>saint2_cfg = SaintModelConfig(
n_enc=6,
n_dec=6,
enc_heads=8,
dec_heads=8,
seq_len=100,
n_dims=128,
n_ques=13523,
n_parts=7,
n_ets=301,
n_lts=154,
n_resp=2,
n_explains=2,
n_freqs=111,
n_corr_freqs=94,
emb_add_fn="cat",
dropout=0.2,
enc_extra_inputs=tuple(),
emb_with_bias=True
)
</code></pre>
<p><strong>3</strong> RAINT+ with 4 encoders/decoders<br>
Adding question frequency/correct frequency to encoder side, other parts the same as the 1st one</p>
<pre><code>saint3_cfg = SaintModelConfig(
n_enc=4,
n_dec=4,
enc_heads=8,
dec_heads=8,
seq_len=100,
n_dims=128,
n_ques=13523,
n_parts=7,
n_ets=301,
n_lts=154,
n_resp=2,
n_explains=2,
n_freqs=111,
n_corr_freqs=94,
emb_add_fn="add",
dropout=0.2,
enc_extra_inputs=("freqs", "corr_freqs")
)
</code></pre>
<p><strong>4</strong> RAINT+ with 4 encoders/decoders<br>
Adding tags_id to the encoder side, other parts the same as the 3rd one<br>
Private: 0.802, Public: 0.800</p>
<pre><code>saint4_cfg = SaintModelConfig(
n_enc=4,
n_dec=4,
enc_heads=8,
dec_heads=8,
seq_len=100,
n_dims=128,
n_ques=13523,
n_parts=7,
n_ets=301,
n_lts=154,
n_resp=2,
n_explains=2,
n_freqs=111,
n_corr_freqs=94,
emb_add_fn="add",
dropout=0.2,
enc_extra_inputs=("freqs", "corr_freqs", "tag_ids")
)
</code></pre>
<p><strong>5</strong> RAINT+ with 4 encoders/decoders<br>
Concatenate the embedding outputs and multiply by W (shape: N x 128, bias=False); other parts are the same as the 4th one.<br>
Didn't submit alone.</p>
<pre><code>saint5_cfg = SaintModelConfig(
n_enc=4,
n_dec=4,
enc_heads=8,
dec_heads=8,
seq_len=100,
n_dims=128,
n_ques=13523,
n_parts=7,
n_ets=301,
n_lts=154,
n_resp=2,
n_explains=2,
n_freqs=111,
n_corr_freqs=94,
n_tag_ids=1520,
emb_add_fn="cat",
dropout=0.2,
enc_extra_inputs=("freqs", "corr_freqs", "tag_ids"),
emb_with_bias=False
)
</code></pre> |
Riiid Answer Correctness Prediction | [1st place solution] Last Query Transformer RNN | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi Kagglers, here is the paper link.<br>
Paper Link : <a href="https://arxiv.org/abs/2102.05038" target="_blank">https://arxiv.org/abs/2102.05038</a><br>
Please refer to the paper for details.</p>
<h1>Summary</h1>
<p>I wanted to use Transformer but I could not input long history to the model because QK matrix multiplication in Transformer has O(L^2) time complexity when input length is L. </p>
<h1> </h1>
<p>My approach is to use only the last input as the Query, because I only predict the last question's answer correctness per history input. It means I only compare the last question (query) with the other questions (keys), and not the other questions with each other. It makes the QK matrix multiplication in the Transformer have O(L) time complexity (because len(Q)=1, len(K)=L), which allows me to input a much longer history. </p>
<h1> </h1>
<p>In the final submission, I ensemble 5 models with history inputs of length 1728.<br>
I didn't do much feature engineering; since I can use an extremely long history, I wanted the model to learn it by itself. The 5 input features I used are question id, question part, answer correctness, current question elapsed time, and timestamp difference.</p>
<h1>Acknowledgement</h1>
<p>Thank you <a href="https://www.kaggle.com/limerobot" target="_blank">@limerobot</a>, to share amazing transformer model to approach table data problem in the 3rd place solution of 2019 Data Science Bowl. It was a good motivation to work on transformer encoder. <a href="https://www.kaggle.com/c/data-science-bowl-2019/discussion/127891" target="_blank">https://www.kaggle.com/c/data-science-bowl-2019/discussion/127891</a> </p>
<h1> </h1>
<p>Thanks to competition sponsers and organizers for hosting this fun competition.</p>
<h1> </h1>
<p>Thank you for reading.</p>
<p>I think they're totally different.<br>
Normally, even if you train and predict for only the last input, you don't change the query vector in the transformer to have length 1. In the usual case, the query, key and value are all set to have length L (when the input length is L), because that's how the transformer works. I believe the cases you mentioned have the same configuration. <br>
Changing the query vector in the transformer to have length 1 changes the inner workings of the transformer significantly. When the multi-head attention output is added to the inputs, the last query's attention output is added to every input position like a constant, instead of each position's own attention output. I found that it does not hurt the performance much, while making the QK matrix multiplication in the transformer O(L) instead of O(L^2).<br>
To answer why my model is faster than others who also used the last query for training, it's simply because my QK matrix multiplication is O(L) and theirs is O(L^2).</p>
<p>Hi <a href="https://www.kaggle.com/mamasinkgs" target="_blank">@mamasinkgs</a>, thank you.<br>
I'm surprised to hear many people tried this, as I saw none. Could you provide some links?</p>
<p>Thank you :)</p>
<p>Hi <a href="https://www.kaggle.com/darraghdog" target="_blank">@darraghdog</a>, thank you. <br>
Actually, it's really simple to implement the model.<br>
I only changed the one line from normal transformer encoder.<br>
In encoder layer,<br>
I changed<br>
<code>attn_output, _ = self.mha(x, x, x[:,:,:], mask) #mha : multi head attention</code><br>
to<br>
<code>attn_output, _ = self.mha(x, x, x[:,-1:,:], mask)</code><br>
Hope it helps!</p>
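<p>A self-contained PyTorch sketch of the same idea (the author's code uses an MHA implementation with (v, k, q, mask) argument order; here the standard <code>nn.MultiheadAttention</code> is used instead, and the causal mask is omitted for brevity):</p>
<pre><code>import torch
import torch.nn as nn

class LastQueryAttention(nn.Module):
    def __init__(self, d_model=512, nhead=8):
        super().__init__()
        self.mha = nn.MultiheadAttention(d_model, nhead)

    def forward(self, x):                    # x: (seq_len, batch, d_model)
        q = x[-1:, :, :]                     # only the last position as query -> O(L) attention scores
        attn_out, _ = self.mha(q, x, x)      # (1, batch, d_model)
        return x + attn_out                  # broadcast-add to every position (residual)

x = torch.rand(1728, 4, 512)                 # very long history, small batch
out = LastQueryAttention()(x)
print(out.shape)                             # torch.Size([1728, 4, 512])
</code></pre>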
<p>Thank you :) And congratulations on your successful finish.</p>
<p>Thank you :)</p>
<p>Hi,</p>
<pre><code>1. If you keep seq len = 1 in the Query Q, then the final output will be having dim (1, D). This is just a single feature with dim D, then how are you passing it through the LSTM? ( Or you are assuming the single feature with dim D as a sequence and passing it through the LSTM ? If so then can you tell the intuition behind this please )
</code></pre>
<h1> </h1>
<p>In the transformer encoder, there's a residual addition, which adds the 'multi-head attention output' to the original input. In the multi-head attention, I use a Query with len=1 to get a speed advantage in the QK matrix multiplication. As a result, the output of the multi-head attention becomes (batch size, 1, d_model) instead of (batch size, sequence length, d_model). So, for the residual addition in the transformer encoder, the operands are the original input with shape (batch size, sequence length, d_model) and the multi-head attention output with shape (batch size, 1, d_model). Because the two shapes are different, I add the multi-head attention output to every position of the original input (like broadcasting). Therefore, the transformer encoder output is (batch size, sequence length, d_model).</p>
<pre><code>2. In one of the comments you mentioned the following -
attn_output, _ = self.mha(x, x, x[:,-1:,:], mask)
Can you please elaborate more on the code, What is x and what is x[:,-1:,:] ? Which one of those 3 is Query Q ? ( If the code above is used from torch.nn.MultiheadAttention then in that case you are passing the values V as x[:,-1:,:], which will raise an error. )
</code></pre>
<p>x is an original input to encoder. x's shape is (batch size, seq_len, d_model). The order of mha(multihead attention) parameters is v,k,q,mask. </p> |
Riiid Answer Correctness Prediction | [62nd solution, LB: 0.801] Single light GBM with 113 features | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you.<br>
I utilized your code to add true skill features. It was very helpful for me.</p>
<p>Thank you very much.<br>
I could not submit my results without your thoughtful comments.</p>
<p>Many many thanks to the organizers and all kagglers.<br>
Throughout this competition, I realized good kagglers are not just smart data scientists, but also proficient engineers. They have deep and broad knowledge about how to speed up processing, handle data memory efficiently, and debug submission errors. I learned a lot from their notebooks and comments. <br>
I achieved 62nd (0.801 CV / 0.799 public / 0.801 private) and got my first medal with a single LGB model. I tried blending with catboost and SAKT models, but it did not work well for me. I think this is mainly due to the time constraints on refining them.<br>
The code and model are available <a href="https://www.kaggle.com/tomooinubushi/62nd-solution-lightgbm-single-model-lb-0-801" target="_blank">here</a>.</p>
<p>I mainly focused on feature engineering and used 113 features. The feature importance of them was as follows</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3754725%2Fc09711551a883f6995032d91b6408bd9%2F__results___42_0.png?generation=1610076363046071&alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3754725%2F35c076ded96a2ef0d40ec307d0b8da51%2F__results___42_1.png?generation=1610076387230945&alt=media" alt=""></p>
<p>The meanings of abbreviations were as follows </p>
<ul>
<li>acc: answered_correctly</li>
<li>exp: prior_question_had_explanation</li>
<li>time: prior_question_elapsed_time</li>
<li>count: number of trial</li>
<li>count_lecture: number of lecture</li>
<li>tfl: timestamp from last nth trial to that from n+1th trial</li>
<li>dif: difficulty (1 - content-wise accuracy)</li>
<li>dpt: difficulty point (add difficulty if users answer was correct)</li>
<li>avg: average</li>
<li>std: standard deviation</li>
<li>sum: sum</li>
<li>hist: historical average with temporal decay of gamma 0.75 (see the sketch after this list)</li>
<li>hist2: historical average with temporal decay of gamma 0.25</li>
<li>u: user-wise</li>
<li>t: tag-wise</li>
<li>tg: taggroup-wise</li>
<li>p: part-wise</li>
<li>c: content-wise</li>
<li>b: bundle-wise</li>
<li>uc: user-content-wise</li>
<li>ub: user-bundle-wise</li>
<li>up: user-part-wise</li>
<li>time_per_count_u: timestamp/count_u</li>
<li>count_u_nondiagnostic: user-wise count of non-diagnostic questions</li>
<li>exp_avg_u_corrected: user-wise average of prior_question_had_explanation corrected for diagnostic questions. The user cannot read explanations for diagnostic questions.</li>
<li>pp_ratio: ratio of paid part (part1, 3, 4, 6, 7) trials per all trials, only paid users of SANTA app can solve them if it is not diagnostic questions</li>
<li>listening_ratio: ratio of listening part trials per all trials</li>
<li>relative_dif: relative difficulty of questions calculated with acc_avg_c / dif_avg_u</li>
<li>notacc_sum: sum of failed trials</li>
<li>wp_ts: trueskill feature of win probability</li>
<li>umu_ts, usigma_ts, qmu_ts, qsigma_ts: trueskill features of mu, and sigma of user and content</li>
</ul>
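<p>A minimal sketch of the decayed historical averages referenced above (hist with gamma 0.75, hist2 with gamma 0.25); one such state object would be kept per user:</p>
<pre><code>class DecayedMean:
    def __init__(self, gamma):
        self.gamma = gamma
        self.num = 0.0   # decayed sum of targets
        self.den = 0.0   # decayed sum of weights

    def update(self, answered_correctly):
        self.num = self.gamma * self.num + answered_correctly
        self.den = self.gamma * self.den + 1.0

    def value(self):
        return self.num / self.den if self.den > 0 else float("nan")

hist = DecayedMean(gamma=0.75)
for y in [1, 1, 0, 1]:
    hist.update(y)
print(hist.value())   # recent answers weigh more than old ones
</code></pre>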
<p>I learned a lot from many notebooks and comments of other kagglers. </p>
<ul>
<li>CV strategy, loop feature engineering, time series emulater by <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a></li>
<li>use of sqlite to prevent memory error from <a href="https://www.kaggle.com/higepon" target="_blank">@higepon</a> and <a href="https://www.kaggle.com/calebeverett" target="_blank">@calebeverett</a> </li>
<li>use true skill features from <a href="https://www.kaggle.com/zyy2016" target="_blank">@zyy2016</a> and <a href="https://www.kaggle.com/neinun" target="_blank">@neinun</a> </li>
<li>convert dataframe to numpy array memory-eficiently from <a href="https://www.kaggle.com/markwijkhuizen" target="_blank">@markwijkhuizen</a> </li>
<li>use of taggroup from <a href="https://www.kaggle.com/radema" target="_blank">@radema</a></li>
</ul>
<p>Please notify me if I missed anyone.</p>
<p>Random thought about this competition.</p>
<ul>
<li>The quantity and quality of the dataset were very high. We saw a good CV-LB relationship.</li>
<li>The concept of the submission API is very interesting. It is realistic and well suited for time-series prediction.</li>
<li>I think the Kaggle system needs some improvements. The computational power is too small to handle the huge dataset of this competition. The submission error messages should be more user-friendly.</li>
</ul>
<p>Any comments and questions are welcome!</p> |
Riiid Answer Correctness Prediction | [70th] My transformer solution (with code) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to all winners!</p>
<p>Hope you all enjoyed and learned something in this great competition (ok, submission errors were painful …)</p>
<p>I want to briefly share my solution here, although it is not the one that produced my private LB score (I had difficulty debugging my final version).</p>
<p>My solution is a transformer model, and here is the summary:</p>
<ul>
<li><p>Training / Validation is done with TensorFlow TPU<br>
* This is quite fast, 1 epoch takes 5 - 6 minutes (1M sequences of length 128)<br>
* Check <a href="https://www.kaggle.com/yihdarshieh/tpu-track-knowledge-states-of-1m-students" target="_blank">TPU - Track knowledge states of 1M+ students in the wild</a>.<br>
* Use Google Colab TPU for training </p></li>
<li><p>Use elapsed time / lag time : LB: 0.781 -> 0.795 ~ 0.797</p></li>
<li><p>I also included the loss for predicting the user's actual response, not only the response correctness (see the sketch after this list). This lets the training run longer without overfitting. LB: 0.797 -> 0.800</p></li>
<li><p>I included the counts (over the user's previous interactions) of each part and of each correct answer (of the questions), and their correctness ratios. On CV, 0.807 / 0.808 -> 0.809 / 0.810.</p></li>
<li><p>Finally, I included the correct count and the correct ratio of each question seen by each user. On CV 0.809 / 0.810 -> 0.813.</p></li>
</ul>
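<p>A framework-agnostic sketch of the extra loss on the user's actual response (the original used TensorFlow on TPU; the two heads, the 4-option answer space and the 1.0 weight are assumptions):</p>
<pre><code>import torch
import torch.nn.functional as F

logit_correct = torch.randn(32, 1, requires_grad=True)   # "answered correctly" head
logit_answer = torch.randn(32, 4, requires_grad=True)    # "which option was chosen" head
y_correct = torch.randint(0, 2, (32, 1)).float()
y_answer = torch.randint(0, 4, (32,))

loss = F.binary_cross_entropy_with_logits(logit_correct, y_correct) \
       + 1.0 * F.cross_entropy(logit_answer, y_answer)
loss.backward()
</code></pre>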
<p>Here is the link to my notebook:</p>
<p><a href="https://www.kaggle.com/yihdarshieh/r3id-transformer/" target="_blank">R3ID - Transformer</a></p>
<p>ps. Since I tried to fix the inference bug, the code is somewhat messy …</p>
Riiid Answer Correctness Prediction | [LB 125th Position - .795] Pure Feature Engineering .(3M pkl files) Comp & Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks a lot! I saw you joined early on but I guess you were busy with other things in parallel!</p>
<p>Thanks a lot Theo! Yep, just one more silver/gold medal and finally, I can proudly say, I am a Comp Master!</p>
<p>Thanks a lot for the honour but i would like to share the same with many others as well competing, both after the comp ends and during the comp! It's because of these discussions, we all made it possible!</p>
<p>Thanks a lot!</p>
<p>First of all, thanks a lot to Kaggle, Riiid and everyone who participated! I have learnt a lot from you guys in just 3 months, and from the really out-of-the-box solution attempts and FE ideas!</p>
<p>And special thanks to my teammates! I have learnt a lot about different topics and FE ideas as well which i definitely couldn't come up on my own.</p>
<p>So coming to our solutions, it's a very simple ensemble of gbm models (~39/41 Features) and a NN (AKT). CV-LB was amazingly in sync, so no surprise there.</p>
<p>Features List -:</p>
<pre><code>['user_resid_rolling',
'sum_answered_correctly_user',
'count_user',
'content_attempt',
'lag_1',
'lag_2',
'lag_3',
'lag_4',
'lag_5',
'lag_sum_5',
'part',
'tags1',
'prior_question_had_explanation_enc',
'count_content',
'sum_answered_correctly_content',
'mean_answered_correctly_content',
'prior_question_elapsed_time',
'mean_explanation_content',
'mean_elapsed_content',
'theta_user_ct', # from elo
'theta_user_pt', # from elo
'theta_user_bd', # from elo
'content_id',
'last_lec_ts',
'diff_to_last_lec_ts',
'num_lects_watched',
'answer_1',
'answer_2',
'answer_3',
'answer_4',
'answer_5',
'answer_6',
'answer_7',
'last_7_answers_sum',
'correct_answer',
'n_tasks_performed',
'timestamp',
'prev_answer_content',
'diff_content_time',
'mean_answered_correctly_user',
'mean_user_resid_rolling']
</code></pre>
<p>Yes, we did have lecture features like <code>'last_lec_ts', 'diff_to_last_lec_ts', 'num_lects_watched'</code>, and they definitely helped us improve both our CV and LB. Other interesting features that deserve a mention include "user_resid_rolling", "diff_content_time", "multiple elo features", and "last_7_answers_sum".</p>
<p>"user_resid_rolling" -> </p>
<pre><code>train['content_mean'] = train.groupby('content_id')['answered_correctly'].transform('mean')
train['resid'] = train['answered_correctly'] - train['content_mean']
train['resid'] = train.groupby('user_id')['resid'].shift()
train['user_resid_rolling'] = train.groupby('user_id')['resid'].agg(['cumsum'])
train['user_resid_rolling'] = train['user_resid_rolling'].fillna(0)
train.drop(['resid', 'content_mean'], axis=1, inplace=True)
</code></pre>
<p>"last_lec_ts" & "diff_to_last_lec_ts" -> </p>
<pre><code>prev_u, lec_ts = -1, 0
last_lec_ts = []
for u, c_type, t in zip(train['user_id'], train['content_type_id'], train['timestamp']):
if u != prev_u:
lec_ts = 0
if c_type == True:
lec_ts = t
prev_u = u
last_lec_ts.append(lec_ts)
train['last_lec_ts'] = np.array(last_lec_ts)
train['diff_to_last_lec_ts'] = train['timestamp'] - train['last_lec_ts']
</code></pre>
<p>And the craziest and coolest FE of all: <strong>we saved ~0.3M pkl files during runtime for the FE "diff_content_time"</strong> (do delete the dir after your inference is complete). Basically, for every user, it is the difference between their current timestamp and their last timestamp for a particular content_id.</p>
<pre><code>!mkdir pkld
dct = {}
dct[-1] = 0
prev_u = -1
res = np.zeros(3)
for u, cid, ctype, ts in tqdm(zip(train['user_id'], train['content_id'], train['content_type_id'], train['timestamp'])):
if prev_u != u:
pickle.dump(dct[prev_u], open('pkld/' + str(prev_u) + '.pkl', 'wb'), protocol=pickle.HIGHEST_PROTOCOL)
del dct[prev_u]
dct[u] = {}
if ctype == False:
dct[u][cid] = ts
prev_u = u
pickle.dump(dct[prev_u], open('pkld/' + str(prev_u) + '.pkl', 'wb'), protocol=pickle.HIGHEST_PROTOCOL) #########
del dct
</code></pre>
<p>There were a lot of FE ideas that were tried but didn't help us. To mention a few: "tf-idf" on the user's content_id history, rolling windows using numba, FTRLs, and a whole lot of other FE ideas that gave only marginal boosts; we had a rule to only add single features that gave at least +.001.</p>
<p>What we definitely regret is not having a SAINT-like arch. I messed up badly; I should have simply used nn.Transformer rather than handling the encoder/decoder parts myself.</p>
<p>But until next time,</p>
<p>Happy Fair Kaggling & Keep Sharing A Ton!</p>
<p>cc <a href="https://www.kaggle.com/alexj21" target="_blank">@alexj21</a> <a href="https://www.kaggle.com/nikhilmishradev" target="_blank">@nikhilmishradev</a> <a href="https://www.kaggle.com/rohanrao" target="_blank">@rohanrao</a> [Alphabetical Order]. (Please add anything that i missed)<br>
Thanks a lot for the collaboration guys!</p>
<p>Thanks a lot Chris! Great Work for yet another Gold! Any chances you are going to share your team's solution's as well? Ty!</p> |
Riiid Answer Correctness Prediction | the simplest numpy solution with a robust 744 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://www.kaggle.com/abdulkadirguner/score-pblb0744" target="_blank">https://www.kaggle.com/abdulkadirguner/score-pblb0744</a></p>
<p>New to the platform.<br>
Wanted to share my simple solution.</p>
<p>Interestingly, this few-liner pure stats solution performed better than the simple LGB I used, which is most probably my bad 😊😊😊</p>
<p><a href="https://www.kaggle.com/abdulkadirguner/score-pblb0744" target="_blank">https://www.kaggle.com/abdulkadirguner/score-pblb0744</a></p>
Rossmann Store Sales | Code sharing, 13th place, no external data | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Rossmann Store Sales
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Well the good thing is I that I was 72th on the public LB and still also 72th on the private LB, so that submission was pretty constant in terms of place on the LB :)</p>
<p>Sometimes you just have to trust you local CV score more than the place on the leaderboard.</p>
<p>@utah777</p>
<p>I used train_test_split splits to randomly split the data. </p>
<p>I tried splitting on date, but my score on the public LB was always better when using a random split. Now knowing the private LB scores, splitting on date could have been a better idea. I was just too focused on the public LB score... dumb mistake.</p>
<p>Using the continuous blocks probably would work, but you have to be careful with some stores missing a few weeks of data.</p>
<p>It is what they always say, don't overfit!</p>
<p>Well I tried blending and averaging to prevent overfitting, but somehow I think that was not the problem. </p>
<p>The split between public and private LB was time based, something like first two weeks / last four weeks. Most single models had a higher score on the public LB than private LB, so my intuition is that single models do worse over time and blending them makes them lose the sharpness and spikes even more.</p>
<p>One of my submissions, a blend of two of my single models including external data, would have made 3rd. Selecting a submission is an art, one very important art apparently.</p>
<p>With this iPyton Notebook, using XGBoost, you achieve the 13th place in the Rossmann Store Sales competition with NO external data being used:</p>
<p><a href="http://goo.gl/UUmyzb">http://goo.gl/UUmyzb</a></p>
<p>If you add Google Trends daily searches for 'Rossmann' in Germany, State data (by @dune_dweller) and Weather data per State, you can get to 7th place with just one model. But because that model would be useless for real-world usage, that data is kept out of this model.</p>
<p>This script uses about 8Gb of RAM and takes some hour to run.</p>
<p>Unfortunately I selected my two best public LB submissions and ended up being 72th :(</p>
<p>I hope some of you guys find it useful!</p>
<p>This should help you with the installation: <a href="https://github.com/dmlc/xgboost/blob/master/doc/build.md">https://github.com/dmlc/xgboost/blob/master/doc/build.md</a></p>
<p>@westwood</p>
<p>I think you have to update XGBoost. In your data there are some int32 and int8 variables and an older version of XGBoost only accepted int64. I fixed that a few months ago and if you update XGBoost, it should work fine.</p>
<p>@westwood_ye </p>
<p>Your error says that core.py line 152 contains:</p>
<p><code>152 if not all(dtype.name in ('int64', 'float64', 'bool') for dtype in dtypes)</code></p>
<p>But core.py in the latest version of XGBoost contain these lines:</p>
<p><code>152 if feature_types is None:</code></p>
<p><code>153 feature_types = [PANDAS_DTYPE_MAPPER[dtype.name] for dtype in data_dtypes]</code> </p>
<p>Somehow you don't have the latest version.</p>
<p><a href="https://github.com/dmlc/xgboost/blob/master/python-package/xgboost/core.py">https://github.com/dmlc/xgboost/blob/master/python-package/xgboost/core.py</a></p> |
Rossmann Store Sales | Code sharing, 3rd place, category embedding with deep neural network | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Rossmann Store Sales
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>That's because I used Keras 1.x API while writing the code. One work around is to use keras 1.x by running:</p>
<p><code>
pip3 uninstall keras
</code>
and then</p>
<p><code>
pip3 install keras==1.2.2
</code></p>
<p>Sorry, you give me too little information for me to give you any advice.</p>
<p>@Aarshay Jain Sorry for the late reply!
We just put our paper online: <a href="http://arxiv.org/abs/1604.06737">http://arxiv.org/abs/1604.06737</a></p>
<p>Thanks for your compliments :) @Gert, let me know if you have any questions about the code; it would be my honor.</p>
<p>Here is the code we use</p>
<p><a href="https://github.com/entron/entity-embedding-rossmann">https://github.com/entron/entity-embedding-rossmann</a></p>
<p>Compared with most other top teams, we used relatively simple features and only used a neural network (with keras). As far as we know, entity embedding is a new method, and we will write a blog and paper to describe it in more detail. </p>
<p>Edit: I renamed category embedding to entity embedding as it is a more precise description of the method.</p>
<p>Edit2: We have put our paper about entity embeddings online: <a href="http://arxiv.org/abs/1604.06737">http://arxiv.org/abs/1604.06737</a> </p>
<p>@WhizWilde, thanks a lot for pinpointing these posts! Btw, I am very impressed by these authors' insight and hard work! I agree with your comments, so let me take 2 ( Bishwarup B's approach ) and 4 ( Alexander Bauer's approach ) as examples to explain the similarities and differences with entity embedding. </p>
<p>Both approaches try to find relations among stores. This is exactly the same aim as entity embedding (when used for the store feature; actually I have used entity embedding for all category features, including year, month, day, German states etc.). Both approaches build a space whose dimensions are different sales of the same store and then either use a cluster id (using kmeans) or coordinates in a subspace which contains large variance (using PCA) to represent the store. In the sense that they both put stores in a space, this is exactly the same idea as entity embedding. However, entity embedding is a much more systematic and sophisticated way to do this. Entity embedding does not need to predefine the meaning of the space. The dimensions are not necessarily sales; a dimension could be "the likelihood that a store will have more sales on Fridays in April when it is also a school holiday". You can't define a space with these kinds of dimensions by hand, but entity embedding can find these kinds of dimensions automatically. <strong>Entity embedding is used to find the best space to represent the stores (and other category features) so that one can better predict sales</strong>. In principle it could also find the spaces built by Bishwarup B and Alexander Bauer when taking time into account (which I should have but didn't) if these spaces are indeed good features for predicting sales.</p>
<p>@WhizWilde, very interesting, do you still remember where is the post?</p>
<p>[quote=WhizWilde;101818]</p>
<p>What make you define a new approach instead of using for example clustered values as new features?
Which techniques look similar to yours and what conceptual improvement do you find over these (but maybe will you answer to this in your paper)?</p>
<p>[/quote]</p>
<p>Thanks for your questions @WhizWilde!</p>
<p>Entity embedding is inspired by semantic embedding, or, under its more popular name, word2vec. It is a very hot topic in NLP currently. Here are two classic papers about it, for example:</p>
<p>"A Neural Probabilistic Language Model", Yoshua Bengio et. al., 2003</p>
<p>"Distributed Representations of Words and Phrases and their Compositionality", Tomas Mikolov et. al. 2013</p>
<p>We applied the idea developed in NLP to a non-NLP regression problem like the one in this contest and made it work; that's new. From my experience in this contest, entity embedding with NN is at least as good as xgboost. This new method has great potential; lots of interesting cases with it still need to be explored. Take the Rossmann problem for example: one may first use unsupervised training with an autoencoder to build the embedding and then use sales to refine it with supervised learning. One can also use the learnt embedding as features and apply other ML methods like linear regression, knn, or tree-based models on top of it.</p>
<p>By "clustered values" do you mean for example to represent store ids use some typical sales value of this store? If it is, then it is first hand crafted thus not general, second not guaranteed to be the best representation. The entity embedding is automatically learnt, and it is guaranteed to be the best representation to get the best predicting power in the NN framework. Maybe this answered your question?</p>
<p>[quote=westwood_ye;103140]</p>
<p>File "D:\ProgramFiles\WinPython-64bit-3.4.3.7\python-3.4.3.amd64\lib\site-packages\keras\backend\theano_backend.py", line 458, in relu
x = T.nnet.relu(x, alpha)
AttributeError: 'module' object has no attribute 'relu'</p>
<p>why?</p>
<p>[/quote]</p>
<p>It seems to be a Theano or Keras installation problem. See here:</p>
<p><a href="https://groups.google.com/forum/#!topic/keras-users/HqalLw2gZR0">https://groups.google.com/forum/#!topic/keras-users/HqalLw2gZR0</a></p>
<p>I haven't got time to try XGB with learned entity embeddings. It would be very interesting to try it! Not only with XGB but also with historical median, linear regression etc. I expect entity embedding will improve those methods too. What I mean in the blog is NN with entity embedding vs XGB itself. I haven't compared XGB and NN (with EE) systematically except for some experiments I did during this competition. Maybe I shall address this in the coming paper.</p>
<p>Hi Richard, good question! Currently, there is no solid theory behind it. It is the same problem as in any deep learning task: deciding the size and number of layers of the neural network. I chose the dimensions mostly based on experiments. I followed these guidelines when I tried to make a good guess to start with:</p>
<ol>
<li>The more complex the more dimensions. Try to think how many features/aspects you might need to describe the entities and use that as the dimension to start with.</li>
<li>If I have no idea about the first guideline, then I use $n - 1$ as the dimension where $n$ is the number of states of the entity (e.g. for month, $n=12$).</li>
</ol>
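<p>A minimal tf.keras sketch of entity embeddings following these guidelines (the author's repository uses Keras 1.x; the dense layer sizes and the store embedding dimension below are illustrative assumptions):</p>
<pre><code>from tensorflow import keras
from tensorflow.keras import layers

store_in = keras.Input(shape=(1,), name="store", dtype="int32")   # store ids shifted to 0..1114
month_in = keras.Input(shape=(1,), name="month", dtype="int32")   # month shifted to 0..11

store_emb = layers.Flatten()(layers.Embedding(1115, 10)(store_in))   # 1115 Rossmann stores
month_emb = layers.Flatten()(layers.Embedding(12, 11)(month_in))     # n - 1 = 11 for month

x = layers.Concatenate()([store_emb, month_emb])
x = layers.Dense(1000, activation="relu")(x)
x = layers.Dense(500, activation="relu")(x)
out = layers.Dense(1, activation="sigmoid")(x)     # sales rescaled to [0, 1]

model = keras.Model([store_in, month_in], out)
model.compile(optimizer="adam", loss="mae")
model.summary()
</code></pre>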
<p>Let me know if you find other ideas or better choice of dimensions for this problem!</p> |
Rossmann Store Sales | Model documentation 1st place | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Rossmann Store Sales
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yes, calculation of the 'recent data features' for the test set started counting from the most recent train set record.</p>
<p>Within the train set, I made sure that the 'recent data features' were calculated in the same way. Therefore, let me take July 17 as an example. For July 17, the recent data feature on the previous quarter was calculated on April/May/June data. I could do the same for August 17 and calculated the previous quarter on May/June/July.</p>
<p>For test set data in September, I needed to train 'monthahead models'. For those models, I calculated the previous quarter for July 17 on March/April/May (not using data from June). That exactly matches what I could do for September: calculate the previous quarter on May/June/July (not using data from August because it wasn't there yet).</p>
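<p>To make the 'monthahead' gap concrete, here is a rough pandas sketch (function and column usage are illustrative, not taken from the actual scripts; <code>train</code> is assumed to be a DataFrame with Date, Store and Sales columns):</p>
<pre><code>import pandas as pd

def prev_quarter_median(df, month_start, skip_months=0):
    # median Sales per Store over the 3 calendar months that end `skip_months`
    # months before `month_start` (skip_months=1 gives the "monthahead" variant)
    end = month_start - pd.DateOffset(months=skip_months)   # exclusive upper bound
    start = end - pd.DateOffset(months=3)
    window = df[df["Date"].ge(start) & df["Date"].lt(end)]
    return window.groupby("Store")["Sales"].median()

july = pd.Timestamp("2015-07-01")
regular = prev_quarter_median(train, july)                    # April-June
monthahead = prev_quarter_median(train, july, skip_months=1)  # March-May
</code></pre>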
<p>@Hassan, it was quite simple: my ensemble method was to take the harmonic mean over selected models. I selected models using various strategies, based on the stored holdout set predictions:</p>
<ul>
<li>compare all possible pairs of 2 models and select the best pair</li>
<li>do greedy forward selection to find the model that improves the ensemble most (and sometimes manually adjust it, inspired by the pairs results)</li>
<li>reduce/simplify the ensemble by thinking about it, and looking at what the same model with a different random seed would do (a kind of manual backward elimination)</li>
</ul>
<p>My final ensemble consists of:</p>
<ul>
<li>2 * allfeatures (different random seeds)</li>
<li>2 * allfeatures using monthahead features (skipping the most recent month for recent/trend features)</li>
<li>1 * salesmodel</li>
<li>1 * customermodel</li>
<li>REPEAT all six models for months May to September</li>
</ul>
<p>For September, all of the 2*6 models used monthahead features.</p>
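<p>For reference, the harmonic-mean ensembling and greedy forward selection can be sketched as follows (an illustration with hypothetical holdout predictions, not the original scripts):</p>
<pre><code>import numpy as np

def harmonic_mean(preds):
    # preds: list of arrays of positive sales predictions on the holdout set
    return len(preds) / np.sum([1.0 / p for p in preds], axis=0)

def rmspe(y_true, y_pred):
    mask = y_true > 0
    return np.sqrt(np.mean(((y_true[mask] - y_pred[mask]) / y_true[mask]) ** 2))

def greedy_selection(holdout_preds, y_true, n_models=6):
    # holdout_preds: dict of model name to prediction array on the holdout set
    selected = []
    for _ in range(n_models):
        candidates = [m for m in holdout_preds if m not in selected]
        best = min(candidates, key=lambda m: rmspe(
            y_true, harmonic_mean([holdout_preds[s] for s in selected + [m]])))
        selected.append(best)
    return selected
</code></pre>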
<p>To give everyone interested a better insight into what I did, you find my model documentation attached.</p>
<p>If you have any questions, please ask them here on the forum. I am currently getting a lot of e-mails that I will not be able to answer before Christmas ;)</p>
<p>* * *</p>
<p>Here is the dictionary that contains all of the features I used (that's what it is all about!):</p>
<p>features={
'store':[0,0,1,0],'storetype':[0,0,1,0],'assortment':[1,1,1,0],
'shopavg_open':[0,0,1,0],'shopavg_salespercustomer':[0,0,1,0],'shopavg_schoolholiday':[1,1,1,0],
'shopsales_holiday':[0,0,1,0],'shopsales_promo':[1,1,1,0],'shopsales_saturday':[0,0,1,0],
'day':[1,1,1,0],'dayofweek':[1,1,1,0],'dayofyear':[1,1,1,0],'month':[0,0,1,0],'week':[1,1,1,0],'year':[0,0,1,0],
'dayavg_openyesterday':[0,0,1,0],
'Promo2':[0,0,1,0],'Promo2SinceWeek':[0,0,1,0],'Promo2SinceYear':[1,1,1,0],'daysinpromocycle':[1,1,1,0],'primpromocycle':[1,1,1,0],'promo':[0,0,1,0],'promointerval':[1,1,1,0],
'CompetitionDistance':[1,1,1,0],'CompetitionOpenSinceMonth':[0,0,1,0],'CompetitionOpenSinceYear':[1,1,1,0],'daysincompetition':[0,0,1,0],'daysincompetition_unrounded':[1,1,1,0],'rnd_CompetitionDistance':[0,0,1,0],
'schoolholiday':[1,1,1,0],'stateholiday':[1,1,1,0],'holidays_lastweek':[1,1,1,0],'holidays_nextweek':[1,1,1,0],'holidays_thisweek':[1,1,1,0],
'prevquarter_dps_med':[0,0,0,1],'prevquarter_ds_hmean':[0,0,0,0],'prevquarter_hmean':[0,0,0,0],'prevquarter_med':[1,0,0,1],
'prevhalfyear':[0,0,0,1],'prevhalfyear_m1':[0,0,0,1],'prevhalfyear_m3':[0,0,0,0],
'prevyear_dphs_med':[0,0,0,1],'prevyear_dps_med':[1,0,0,1],'prevyear_ds_m1':[0,0,0,1],'prevyear_ds_m2ln':[0,0,0,0],'prevyear_ds_med':[0,0,0,1],'prevyear_ds_p10':[0,0,0,1],'prevyear_m1':[0,0,0,1],'prevyear_m2':[0,0,0,0],'prevyear_m3':[0,0,0,0],'prevyear_m4':[0,0,0,0],'prevyear_med':[0,0,0,1],
'prevquarter_cust_dps_med':[0,1,0,1],'prevyear_cust_dps_med':[0,1,0,1],
'lastmonth_yoy':[0,0,0,0],'linmod_quarterly':[0,0,0,1],'linmod_yearly':[0,0,0,1],
'weather_maxtemp':[1,1,1,0],'weather_precip':[1,1,1,0],'relativeweather_maxtemp':[0,0,1,0],'relativeweather_precip':[0,0,1,0],
'closurefeat':[1,1,1,0],
}</p>
<p>Meaning of dict entries:</p>
<ul>
<li>index 0: feature is part of salesmodel</li>
<li>index 1: feature is part of customermodel</li>
<li>index 2: feature is part of MA (monthahead) models</li>
<li>index 3: pmd (previousmonthdate) variant of feature is part of MA models</li>
</ul>
<p>There is also an 'allfeatures' model, containing all dict entries; and models on months May to September have an extra feature on days relative to summer holiday start</p>
<p>Explanation about features starting with prev:</p>
<ul>
<li>ds=by day/store</li>
<li>dps= by day/promo/store</li>
<li>dphs=by day/promo/shoolholiday/store</li>
<li>(none)=by promo/store (how I started)</li>
</ul>
<p>Second part:</p>
<ul>
<li>med (or none)=median</li>
<li>m1=mean</li>
<li>m2=standard deviation</li>
<li>m3=skewness</li>
<li>m4=kurtosis</li>
<li>hmean=harmonic mean</li>
</ul>
<p>cust means that it is calculated on customers instead of sales</p>
<p>About the rnd/rounded features: at some point I excluded store from my model, and was worried that daysincompetition and competitiondistance would be used as proxies for store, therefore I rounded them on the log scale. In my final model I don't think it makes a difference.</p>
<p>Relative weather is weathervariable today, divided by average weathervariable during the last 7 days (did not help much).</p>
<p>Hi Amr, I was using means, but tried the harmonic mean after reading this post <a href="https://www.kaggle.com/c/rossmann-store-sales/forums/t/16763/median-dayofweek-benchmark-score">https://www.kaggle.com/c/rossmann-store-sales/forums/t/16763/median-dayofweek-benchmark-score</a> and it did improve the ensemble a little bit.</p>
<p>Hi Juan,</p>
<p>I think I learnt by far the most tricks for feature engineering on the Kaggle forums: participate in the competitions, join the forum discussions and read what the winners share after the competition.</p>
<p>I am not aware of good resources to study feature engineering, but a bit of SQL experience (or maybe a course) will certainly be useful. I recommend to use SQLite, so you can skip all the troubles to get a database running.</p>
<p>Hi Derek, I have not publicly shared the scripts that are mentioned in the documentation (only with the competition sponsor).</p> |
Rossmann Store Sales | source code & description, 66th place, R, glmnet + xgboost | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Rossmann Store Sales
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I've removed the broken snippet from playground.R. Please remember that playground.R is just a bunch of (un)related snippets.</p>
<p>Not sure which part you are talking about.
There is save-environment.R which dumps prepared data and environment.R which loads it back.
There is something about "search-space.RDS" in playground.R which you can skip because it was not used later in submissions.
playground.R is an unstructured collection of code snippets that I used for experiments or submissions. Final submission was made by snippet on lines 151-158</p>
<p>the correct link for detailed description is <a href="http://mabrek.github.io/blog/kaggle-forecasting">http://mabrek.github.io/blog/kaggle-forecasting</a></p>
<p>Detailed description <a href="http://mabrek.github.io/blog/kaggle-forecasting/">http://mabrek.github.io/blog/kaggle-forecasting/</a></p>
<p>Short description <a href="https://github.com/mabrek/kaggle-rossman-store-sales/blob/master/strategy.md">https://github.com/mabrek/kaggle-rossman-store-sales/blob/master/strategy.md</a></p>
<p>Source code <a href="https://github.com/mabrek/kaggle-rossman-store-sales">https://github.com/mabrek/kaggle-rossman-store-sales</a></p> |
SETI Breakthrough Listen - E.T. Signal Search | 10th Place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SETI Breakthrough Listen - E.T. Signal Search
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>Thank team</strong> <a href="https://www.kaggle.com/sggpls" target="_blank">@sggpls</a> <a href="https://www.kaggle.com/vzaguskin" target="_blank">@vzaguskin</a> <a href="https://www.kaggle.com/igorkrashenyi" target="_blank">@igorkrashenyi</a> for hard work</p>
<h1>Preprocessing</h1>
<p>At the beginning of the competition we tried different approaches (using all channels, using only channels 0, 2, 4, with vstack and without it). The best option was to use only channels 0, 2, 4 with vstack.</p>
<p>We also ran a lot of experiments with image size and found that a bigger resolution can improve results (512x512 and 640x640).</p>
<p>Augmentation:</p>
<ul>
<li>HorizontalFlip, VerticalFlip</li>
<li>ShiftScaleRotate</li>
<li>RandomResizedCrop (scale 0.9)</li>
<li>IAAAdditiveGaussianNoise (scale 0.15)</li>
</ul>
<p>During training we tried mixup and fmix; both performed about the same. We also tried amplifying the signal during mixup, which worked, but not always.</p>
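<p>A rough sketch of this preprocessing and augmentation setup (ON channels 0, 2, 4 vstacked, then lightly augmented); the exact parameters below are placeholders, not the training config:</p>
<pre><code>import numpy as np
import albumentations as A

def load_cadence(path):
    x = np.load(path).astype(np.float32)   # cadence of shape (6, 273, 256)
    x = np.vstack(x[[0, 2, 4]])            # keep ON observations only: (819, 256)
    x = (x - x.mean()) / (x.std() + 1e-6)
    return x

train_aug = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.ShiftScaleRotate(shift_limit=0.1, scale_limit=0.1, rotate_limit=0, p=0.5),
    A.Resize(640, 640),
])

img = train_aug(image=load_cadence("some_id.npy"))["image"]  # "some_id.npy" is a placeholder
</code></pre>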
<h1>Train</h1>
<h5>Worked</h5>
<ol>
<li>Big models (efficientnet_b7, efficientnet_v2m, resnet200d, nfnet_f3 and etc.)</li>
<li>Mixup</li>
<li>Big resolution (640x640)</li>
<li>Pseudo labels</li>
<li>TTA4 during inference</li>
<li>OOF Stacking</li>
</ol>
<h5>Didn't work</h5>
<ol>
<li>Shuffle channels as augmentation</li>
<li>Siamese Net for splitting channels</li>
<li>LSTM Architectures to incorporate on/off</li>
<li>VOLO Models (hard to train, need to tune params)</li>
</ol>
<h1>LB simplified roadmap</h1>
<ul>
<li>b0 single 0.74+LB</li>
<li>b0 oof 0.77+ LB</li>
<li>v2m oof 0.775+ LB</li>
<li>blend 0.785+ LB</li>
<li>pseudo 0.795 + LB</li>
</ul>
<p><strong>Thanks to all participants, organizers, and hosts</strong> for their work and for recovering the competition after the leak.</p>
<p>Tried both, but hard targets worked better</p> |
SETI Breakthrough Listen - E.T. Signal Search | 12th place solution - experiments with augmentation | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SETI Breakthrough Listen - E.T. Signal Search
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><ol>
<li>Thanks for pointing out the broken URL; fixed now.</li>
<li>ON/OFF layers were temporally stacked, and since I did add an indicator channel (0 and 1 to differentiate ON/OFF), I could have used vertical flip, I guess.</li>
<li>Yes, improves both CV and LB</li>
</ol>
<p>Thanks to hosts, all competitors and Kaggle for once again providing a great learning opportunity.</p>
<h3>Models</h3>
<p>For most experiments I used <code>efficientnet_b0</code>. During the final days, I added <code>efficientnetv2s</code>, <code>efficientnetb1</code>, and <code>efficientnetb2</code> to the ensemble, which is a geometric mean of different runs.<br>
(I missed the trick here by not adding bigger models).</p>
<h3>Input</h3>
<p>For most of my experiments I used all 6 inputs stacked by time and resized to either 512x512 or 768x512.<br>
For most experiments, the original stacked image is used as channel 0, the frequency-normalized image is used as channel 1, and the 3rd channel is 0 or 1 based on whether that part of the input is ON target or OFF target.</p>
<h3>Training setup</h3>
<p>I extensively use config files and <a href="https://hydra.cc/" target="_blank">hydra</a> to manage experiments. <a href="https://wandb.ai/site" target="_blank">Weights and biases</a> for logging and monitoring. <a href="https://github.com/PyTorchLightning/pytorch-lightning" target="_blank">Pytorch lightning</a> for boiler plate training code. Most of the code is <strong>flake8</strong> linted and auto formatted by <a href="https://github.com/psf/black" target="_blank">black</a>. Great thanks to <code>https://github.com/ashleve/lightning-hydra-template</code> for providing a good starter code base. Thanks to all the developers of those libraries!</p>
<p>All my codes are available here: <a href="https://github.com/mohsinkhn/seti-kaggle-tezdhar" target="_blank">https://github.com/mohsinkhn/seti-kaggle-tezdhar</a></p>
<h3>Experiments:</h3>
<ul>
<li><p>Including OFF channels gives a small boost on CV but a bigger one (0.1-.2) on LB for some of my early models. This could be explained by narrowband drift signals existing in both ON and OFF channels (more so in the test data).</p></li>
<li><p>Untied mixup ( <a href="https://openreview.net/pdf?id=SkgjKR4YwH" target="_blank">https://openreview.net/pdf?id=SkgjKR4YwH</a> ). The basic idea is to have different distributions for mixing of inputs and targets. For inputs I use beta distribution with alpha=1.0, beta=1.0 while for targets I use beta distribution with alpha=1.0, beta=0.5.</p></li>
<li><p>Augmentations: To detect unknown classes, I wanted to enforce the condition that signal should be present in only ON channels and NOT OFF inputs. To this end, I roll data in inputs by 1 step ( [A0, B, A1, C, A2, D] --> [D, A0, B, A1, C, A2]) and make target 0. The image in input 0 becomes image in input 1, image in input 1 becomes image in input 2 and so on. <strong>Doing so, along with SpecAug, I am able to successfully predict new class in test data</strong></p></li>
<li><p>Other than the above augmentation, I used hflip, roll, SpecAug, meanshift, and vertical shift (2 steps, as the 6-input image can't be vertically flipped).</p></li>
<li><p>I tried many normalisation techniques, mean absolute deviation scaling, sigmoid scaling, random power after min-max scaling etc. but none of them gave significant boost.</p></li>
<li><p>Tried pseudo labelling towards the end with only confident predictions, but couldn't get much benefit.</p></li>
</ul>
<p>Although it feels bad to lose a solo gold by 1 rank, it was a really good learning experience for me; in particular, my experimentation stack has improved significantly.</p>
<p>Thank you for well wishes! </p>
<p>Yes, your understanding is correct. The probability of this happening was kept pretty low at 10-20%, which meant only 1-2% positive samples were impacted. <br>
I think my it gave small improvement on LB 0.786-->0.789. But looking at probabilities of new class in test set, I could see them move from 0.7-0.8 to 0.9ish</p>
<p>The implementation of untied mixup is quite tricky; I had multiple bugs before I finally got it right!<br>
<a href="https://github.com/mohsinkhn/seti-kaggle-tezdhar/blob/dd3b1dcb8729ed73c5bb6ce0041cd385c412156e/src/models/plmodels.py#L108" target="_blank">https://github.com/mohsinkhn/seti-kaggle-tezdhar/blob/dd3b1dcb8729ed73c5bb6ce0041cd385c412156e/src/models/plmodels.py#L108</a></p>
<pre><code> def _mixup_data(self, x, t, use_mixup):
    """Returns mixed inputs, pairs of targets, and lambda"""
    # Note: excerpt from the linked repo. `betad` is assumed to be scipy.stats.beta
    # (its .ppf is used below) and `self.hparams` holds the mixup hyper-parameters.
if not use_mixup:
return x, t
alpha = self.hparams["mixup_alpha"]
beta = self.hparams["mixup_beta"]
ualpha = self.hparams["mixup_ualpha"]
ubeta = self.hparams["mixup_ubeta"]
untied = self.hparams["mixup_untied"]
if alpha > 0:
dist1 = betad(alpha, beta)
dist2 = betad(ualpha, ubeta)
dist3 = betad(ubeta, ualpha)
rng = np.random.rand()
lam1 = dist1.ppf(rng)
if untied:
lam2 = dist2.ppf(rng)
lam3 = dist3.ppf(rng)
else:
lam2 = lam1
else:
lam1 = 1
lam2 = 1
batch_size = x.size()[0]
index = torch.randperm(batch_size).to(self.device)
mixed_x = lam1 * x + (1 - lam1) * x[index, :]
if untied:
mixed_y = torch.clamp((lam2 * t) + (1 - lam3) * t[index], 0, 1) # 0.3, 1 --> 0.5, 1; 0.7, 0 --> 0.5, 1
else:
mixed_y = lam2 * t + (1 - lam2) * t[index]
return mixed_x, mixed_y
</code></pre>
<p><a href="https://www.kaggle.com/superchenhao" target="_blank">@superchenhao</a> I add it as [1, 0, 1, 0, 1, 0], but IMO it shouldn't really matter</p>
<p>Yes, it improved for me! I guess by not including ON and OFF channels together, it would learn to identify wave-like signals stretching from ON to OFF target cadences as non-significant and hurt new-class prediction on the test set.</p>
<p>Taking notation from <a href="https://www.kaggle.com/superchenhao" target="_blank">@superchenhao</a>, [A0, B, A1, C, A2, D] --> [D, A0, B, A1, C, A2]</p> |
SETI Breakthrough Listen - E.T. Signal Search | 15th Place Solution : 8 days of Hard Work | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SETI Breakthrough Listen - E.T. Signal Search
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi all , <br>
First of all I would like to thank Kaggle and organizers for their hard work to make this competition happen , the data after the reset was really good. We weren't sure of doing this competition after the reset as we were involved in other competitions but I would like to thank <a href="https://www.kaggle.com/nischaydnk" target="_blank">@nischaydnk</a> and <a href="https://www.kaggle.com/shivamcyborg" target="_blank">@shivamcyborg</a> who joined me here , despite being tired after their SIIM Covid finish with<b> just 8 days left </b> </p>
<h1>Summary</h1>
<p>Our approach is pretty similar to others in the top solution , we just missed a few key points due to the short timeframe .</p>
<p>Our modelling contains three phases :<br>
1) Pretrain on Old Train Data <br>
2) FineTune on New Train Data<br>
3) Further tune on old Test Data and Pseudo labelled new Test Data</p>
<p>We trained all models on the "ON" phase of the data, i.e. channels [0, 2, 4], for quick iteration since the data was huge and time was short; it also seemed to work better for us.</p>
<h1>Phase 1 : Training</h1>
<p>Before the competition reset, we were in 6th place without using the leak and had a lot of good, strong models, so the first phase of our training was already done for most models. We had logged everything properly, which helped us restart the competition quickly.</p>
<h1>Phase 2 Training</h1>
<p>After rejoining the competition we didn't train any model from scratch; we loaded all our models trained on the previous leaky train data and started training on the new data.</p>
<p>We trained the following models in this stage and these all were used in our final ensemble:</p>
<ul>
<li>Effnet B5</li>
<li>Effnet B7</li>
<li>Eca-nfnet_l0</li>
<li>Effnetv2m</li>
</ul>
<p>We used very light augs (vflip, hflip, randombrightnesscontrast, cutout) as we were afraid of losing the weak signals if we used heavy augs. All the models were trained for 20 epochs with stratified five-fold CV.</p>
<h1>Phase 3 Training</h1>
<p>We knew that the new train and test data had different distributions, so we started with pseudo-labelling on just highly confident labels and took only 5000 examples. This helped us increase CV but didn't reflect on LB, and of course we didn't give ourselves time to analyze why this was happening. We thought the old test data might also have a distribution similar to the new test, so we finally decided to retrain the models on all old test data and all new test data (pseudo-labelled with our current best blend).</p>
<p>In the third phase we trained the models with similar augs but with a lower LR and fewer epochs.</p>
<h1>Quest Towards Finding the Magic</h1>
<p>We were able to reach CV: 0.901 and LB: 0.795 with the aforementioned techniques, and the last few days were spent on finding the magic. From the hosts' comments we all knew that the test set contains a set of signals not present in train, so we used our best model's embeddings to cluster and generate a t-SNE plot in order to isolate those images.</p>
<p><a href="https://ibb.co/WWnCy1K"><img src="https://i.ibb.co/tMBRs5C/image-1.png" alt="proc"></a></p>
<p><a href="https://ibb.co/ZNwxMg6"><img src="https://i.ibb.co/SJ1B3mX/image-2.png" alt="proc"></a></p>
<p>From the above plots it was clear to us that more than 50 percent of the test set had little to no overlap with the train set. We could also see that cluster 1 belongs mostly to the test set, so we started looking at the images in it and finally isolated the new signals.</p>
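<p>Such a plot can be produced along these lines (simplified sketch; <code>emb_train</code> and <code>emb_test</code> are assumed to be penultimate-layer features already extracted from the model):</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

emb = np.concatenate([emb_train, emb_test])                     # (N, D) embeddings
is_test = np.r_[np.zeros(len(emb_train)), np.ones(len(emb_test))]

proj = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(emb)

plt.scatter(proj[:, 0], proj[:, 1], c=is_test, s=2, cmap="coolwarm")
plt.title("train vs test embeddings")
plt.show()
</code></pre>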
<p><b>However, we were not able to use it effectively.</b></p>
<h1>Things that didn't work us</h1>
<ul>
<li>Qishen Ha's ALASKA competition trick (removing the last few blocks of EfficientNet and adding conv layers on top), similar to <a href="https://www.kaggle.com/tanulsingh077/qishen-ha-alaska-implementation" target="_blank">this</a></li>
<li>Reducing Model strides in conv stem to allow for longer training at high resolutions</li>
<li>Residual Bi-LSTM heads</li>
<li>Winning solution of previous SETI competition from <a href="https://github.com/sgrvinod/Wide-Residual-Nets-for-SETI?source=post_page" target="_blank">here</a></li>
</ul>
<h1>Lessons Learned</h1>
<p>After reading the top solutions, it is clear we missed some really basic stuff, probably due to lack of time and also some laziness.</p>
<ul>
<li>Mixup using the OR logic really makes sense</li>
<li>Carefully analyzing the predictions/oofs (error analysis is one of the most important parts of any machine learning problem)</li>
</ul>
<h1>Hardware used</h1>
<p>The data in this competition was fairly huge , we used the following :</p>
<ul>
<li>Colab Pro (Mine)</li>
<li>2xV100 ( <a href="https://www.kaggle.com/piantic" target="_blank">@piantic</a> )</li>
<li>2xV100 (@shivamcyborg )</li>
<li>2xA100 ( Rented from <a href="https://cloud.jarvislabs.ai/" target="_blank">https://cloud.jarvislabs.ai/</a>) </li>
</ul>
<p>I used jarvislabs cloud services for the first time and I found it very useful , especially the feature with which we can pause the instance and change the GPU type / cluster type without losing the hard disk or cpu.</p>
<p>As with any other competition, I am taking a good amount of learning from here as well. I really thank <a href="https://www.kaggle.com/markpeng" target="_blank">@markpeng</a> for his efforts in finding the magic; it's always a pleasure working with him and I learned a lot. I also want to thank <a href="https://www.kaggle.com/piantic" target="_blank">@piantic</a> for all his efforts.</p>
<p>Thanks <a href="https://www.kaggle.com/woosungyoon" target="_blank">@woosungyoon</a> </p> |
SETI Breakthrough Listen - E.T. Signal Search | 194th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SETI Breakthrough Listen - E.T. Signal Search
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I was able to learn a lot from the discussion in this competition. </p>
<p>Here's a brief introduction to how I tried.</p>
<p>Used Model</p>
<ul>
<li>Backbone model1: Efficientnet b3</li>
<li>Backbone model2: Mobilenet v3</li>
<li>Four fully connected layers.</li>
</ul>
<p>The overall learning sequence is as follows.<br>
Step 1 -> Step 2 -> Step 3 -> Step 4</p>
<p><strong>Step 1) Resize Image</strong></p>
<ul>
<li>I add the resized image and max-pooled results:<br>
resized_image = 1*Resize + 0.5*(Maxpool(3), Resize) + 0.25*(Maxpool(6), Resize) + 0.125*(Maxpool(9), Resize)</li>
</ul>
<p>This resize decreases the model's overall performance, but the model can train in a small number of epochs at small image sizes (64, 128, 192).</p>
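<p>In code, the blended resize reads roughly as follows (one possible reading of the formula above; sizes and interpolation mode are placeholders):</p>
<pre><code>import torch
import torch.nn.functional as F

def pyramid_resize(x, size=(128, 128)):
    # x: (B, C, H, W) spectrogram; blend the resized image with resized max-pooled copies
    out = F.interpolate(x, size=size, mode="bilinear", align_corners=False)
    for w, k in [(0.5, 3), (0.25, 6), (0.125, 9)]:
        pooled = F.max_pool2d(x, kernel_size=k)
        out = out + w * F.interpolate(pooled, size=size, mode="bilinear", align_corners=False)
    return out
</code></pre>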
<p><strong>Step 2) Training backbone model 1.</strong></p>
<p>I used 4 fully connected layers when training model 1.<br>
FC1 + FC2 : use on the train dataset<br>
FC1 + FC3: use on the test dataset<br>
FC1 + FC4: use on the test dataset</p>
<p>x = backbone(img)<br>
pred = (FC1 + FC2)(x) in train dataset<br>
pred = (FC1 + FC3)(x) in test dataset (all label 1)<br>
pred = (FC1 + FC4)(x) in test dataset (all label 0)</p>
<p>I applied weight decay to FC2, FC3, and FC4, expecting them to gradually predict 0.</p>
<p><strong>Step 3) Training backbone model 2 on the test dataset.</strong><br>
(I round the prediction of backbone model 1. )</p>
<p>T1: model 1 pred 0.0 : 0 label<br>
T2: model 1 pred 0.8, 0.9, 1.0 : 1 label<br>
T3: [1, 3, 5] channels of pred 0.0, 0.8, 0.9, 1.0: 0 label</p>
<p>Model 2 trains the T1, T2, and T3 datasets as a mixup.<br>
Mixup train:<br>
new_y = torch.where(y + y[batch]>=1, 1, 0)</p>
<p>The goal of model 2 is to correct the prediction of model 1 in the test dataset.</p>
<p><strong>Step 4) Add prediction.</strong><br>
submission = (EfficientNet b3 prediction + 0.3* MobileNet prediction)</p>
<p>I thought that most of model1's mistakes occur in the 0.1 - 0.4 interval. <br>
In this way, I expected to have 3 classes.<br>
0.0, 0.1-0.7, 0.8-1.0</p>
<p>Results:<br>
It's not very effective.</p>
<p>I learned a lot from you guys. Thank you.<br>
Next time, I will come back with better skills than I have now.<br>
See you.</p> |
SETI Breakthrough Listen - E.T. Signal Search | 1st Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SETI Breakthrough Listen - E.T. Signal Search
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Kaggle and Berkeley SETI Research Center for this interesting competition. In the following, we want to give a summary of the winning solution of Team Watercooled. As always, thanks to all team members contributing equally to the solution.<br>
<a href="https://www.kaggle.com/philippsinger" target="_blank">@philippsinger</a> <a href="https://www.kaggle.com/christofhenkel" target="_blank">@christofhenkel</a> <a href="https://www.kaggle.com/ilu000" target="_blank">@ilu000</a></p>
<h1>Summary</h1>
<p>Our solution is based on large state-of-the-art classification models that were fine-tuned for this specific task. We pre-processed images by cleaning the backgrounds to boost signal to noise ratio. During training, we employed heavy augmentation in the form of Mixup. For faster training iterations, we only used the ON-channels. We only rely on provided competition data and do not utilize any external data. We additionally augment the training data with an extra signal that only appears in test files -- an “s-shape” signal -- by using a randomized signal generator. </p>
<h1>Cross-Validation and Preprocessing</h1>
<p>The CV setup for this competition was quite straightforward, using a 5 fold split on the new training data. We also found that the old leaky data was still giving us a significant boost if used for training, so we added the full old train and old test data to each training fold. To reduce the impact of leakage in the training as much as possible, we cast to float32 and subsequently applied a channel normalization after loading an image. </p>
<h1>CV/LB gap</h1>
<p>As most participants, we experienced major differences between CV and LB scores early on in the competition. While this is in theory not necessarily always an issue, LB might just be harder (and it certainly is), we did not observe clear correlation between higher CV and higher LB. This led us to explore different ways of understanding these differences and trying to account for them. Our big gap to second place on LB appears to mostly be based on these insights and we elaborate them next.</p>
<h1>The magic #1</h1>
<p>As mentioned, we observed that certain models exhibit significant different LB scores even though their CV scores were similar. That led us to start investigating single predictions, where different models with different LB scores disagree. After comparing several models, we observed that better LB models had much higher probabilities for images with an “s-shape” signal as shown below. </p>
<p><img src="https://i.imgur.com/9mE6ptH.png" alt="s-shape in test"></p>
<p>That type of signal only appears in the test set and some models were able to identify the signal as atypical, while others were not. It was later also confirmed by hosts that test data contains additional type(s) of signal. To better predict these signals and better generalize to the test data, we added a signal generator (code base adjusted from <a href="https://github.com/bbrzycki/setigen" target="_blank">https://github.com/bbrzycki/setigen</a>) which adds those signals to the training set. We tuned the probability (p=0.01) and randomized the shape and signal to noise ratio for maximum LB score while making sure that CV score stays constant. An example of such an injected signal is shown in the plot below. </p>
<p><img src="https://i.imgur.com/SCsiLeo.png" alt="s-shape injected"></p>
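<p>Conceptually, the injection can be sketched in plain numpy as follows (a simplified stand-in for the adjusted setigen-based generator; all shapes, ranges and magnitudes here are illustrative):</p>
<pre><code>import numpy as np

def inject_s_signal(spec, snr=10.0, cycles=None, amp=None, rng=np.random):
    # spec: (T, F) ON-channel spectrogram; draw a sinusoidally drifting ("s-shaped") track
    t_len, f_len = spec.shape
    cycles = cycles or rng.uniform(0.5, 2.0)           # oscillations over the cadence
    amp = amp or rng.uniform(0.05, 0.25) * f_len       # drift amplitude in frequency bins
    f0 = rng.uniform(0.2, 0.8) * f_len                 # starting frequency bin
    t = np.arange(t_len)
    track = (f0 + amp * np.sin(2 * np.pi * cycles * t / t_len)).astype(int) % f_len
    spec = spec.copy()
    spec[t, track] += snr * spec.std()                 # add the signal on top of the noise
    return spec
</code></pre>
<p>During training, such a signal is injected with low probability into a training image together with target=1.</p>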
<h1>The magic #2</h1>
<p>Many noticed a large jump in our public leaderboard score a few weeks before the competition ended. We were already at a decently high score close to 0.800, which was 2nd place on LB at that time, and we were quite confident to further improve our solution as we were only subbing very simple single-fold blends and haven’t even turned our attention to further advancement like pseudo tagging which appeared to hold much promise in this competition. To further improve our solution, we continued to investigate the large CV/LB gap that was still present after injecting the s-shaped signal. </p>
<p>One obvious aspect of this competition is that the background differs a lot between train and test data. This is evident not only from visual inspection or from simple binary classifiers that can easily distinguish train from test, but also from early pseudo-tagging experiments we conducted. Here, we only added very certain target=1 samples from test to train, and while CV looked good, our models suddenly only predicted target=1 for the whole test set, meaning that the models perfectly learned that target=1 always comes from test.</p>
<p>So we started to speculate that the models sometimes focus too much on the background rather than the signal. In the plot below, we exemplary show the histogram of our output logits grouped by train similarity. Here, train similarity is just based on image data (mean, std, unique counts, min, max), but it was clear that the models are more certain if they “know the background”. Thus, we explored methods to reduce the impact of the background noise or reduce overfitting on the background noise.</p>
<p><img src="https://i.imgur.com/V9BxbeT.png" alt="train_like vs not_train_like"></p>
<p>Further investigations revealed that there even was significant overlap from one image to another. This means that the model could also potentially overfit to single backgrounds if it only sees target=0 or target=1 for that background. As a result of this investigation, we decided to attempt to remove some common background artifacts from the data (i.e. clean the data), to increase the signal and reduce the noise and overfit potential. </p>
<p>However, It is far from trivial to utilize this property due to image-wise channel normalization. One cannot just compare raw values from one image to another. To allow for comparison between images, we normalized per column and calculated the mean pixel difference of the first column of each image to all other images and columns. Due to rounding errors and possible signals in the columns, we received column distances greater than 0. We utilized the overlaps to increase signal to noise ratio, by replacing the original data with the difference of the normed regions between multiple matched samples. This induces shadowing (difference is much smaller than 0) from signals that are present in the matched samples, which our models were able to distinguish from actual signals. In the plot below, the difference of the normalized overlapping region between two samples is shown. The yellow needle (values > 0) originates from sample 1, and the blue needle (values < 0) originates from the matched sample. </p>
<p><img src="https://i.imgur.com/XxC4lDI.png" alt="match shadow"></p>
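<p>In much-simplified form, the column normalization and matching can be sketched as follows (the real pipeline additionally searches over column offsets and handles numerical precision more carefully):</p>
<pre><code>import numpy as np

def col_norm(x):
    # normalise each frequency column over the time axis so that shared
    # backgrounds become directly comparable between images
    return (x - x.mean(axis=0, keepdims=True)) / (x.std(axis=0, keepdims=True) + 1e-6)

def overlap_distance(a, b):
    # mean absolute difference of column-normalised spectrograms;
    # values close to 0 indicate a shared background region
    return np.abs(col_norm(a) - col_norm(b)).mean()

def clean(a, b):
    # replace image a by the difference to its matched partner b, which cancels
    # the common background but keeps (and shadows) the actual signals
    return col_norm(a) - col_norm(b)
</code></pre>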
<p>Based on this process, we attempted to clean all images where possible; see the following examples:</p>
<p>Original sample with signal:<br>
<img src="https://i.imgur.com/1y9kpMs.png" alt="Original sample with signal"><br>
Cleaned sample with signal:<br>
<img src="https://i.imgur.com/CemLefI.png" alt="Cleaned sample with signal"></p>
<p>Original sample without signal:<br>
<img src="https://i.imgur.com/VSE7nxC.png" alt="Original sample without signal"><br>
Cleaned sample without signal:<br>
<img src="https://i.imgur.com/STPDRyi.png" alt="Cleaned sample without signal"></p>
<p>Originally, when we started cleaning the data, we just quickly attempted to use existing models trained on the uncleaned data and augment inference with the cleaned data. Immediately, we saw big boosts on LB giving us the 0.853 score. As this was already a significant difference to the next highest score on LB, we contacted Kaggle to elaborate on our approach and to confirm that we can continue using it. After that, we started to directly utilize the cleaned data in our modeling approach, to further boost our performance significantly.</p>
<h1>Models</h1>
<p>As we were very confident about our solution, we decided against submitting huge blends or usage of pseudo labels, thus rendering our solution much better suited for actual use in the field. Our best final submission is thus based on a single fit on the full data.</p>
<p>Our model architecture and training regime work for both uncleaned and cleaned data (i.e. nearly duplicating the training data). Consequently, we decided to train on all data available:</p>
<ul>
<li>Old train (uncleaned & cleaned)</li>
<li>Old test (uncleaned & cleaned)</li>
<li>New train (uncleaned & cleaned)</li>
</ul>
<p>For inference, we always pick the cleaned image, if available, and otherwise the uncleaned image.</p>
<p>Early on, we discovered that norm-free models exhibit slightly better CV/LB correlation, probably due to the lack of batch norm, which can be influenced by distributional differences in the data. Our final best model used an eca_nfnet_l2 backbone and trainable GeM pooling.</p>
<p>Model- and resolution-wise, we saw on CV that bigger is better. Hence, as mentioned earlier, we feed in the full image with only the concatenated ON channels and do not do any resizing.<br>
To further “enlarge” the images we change the model stride of the first conv layer to (1,2), which is a common trick used in computer vision competitions to basically further enlarge image resolution. That allows the model to train on a higher resolution through the backbone. Christof and Philipp used the same trick in ALASKA2 Steganalysis competition, motivated by the very similar data setup (find hidden data in an image). Training time of this “high resolution” model is about 15h on 8xV100 using DDP.</p>
<p><img src="https://i.imgur.com/WdbhFTB.png" alt="model"></p>
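<p>The stride trick itself is a small modification of the backbone; a sketch is shown below (locating the first convolution generically via <code>modules()</code> is for illustration and is not necessarily how the actual training code does it):</p>
<pre><code>import timm
import torch
import torch.nn as nn

model = timm.create_model("eca_nfnet_l2", pretrained=True, in_chans=1, num_classes=1)

# relax the stride of the very first convolution to (1, 2) so the height
# (time) axis keeps more resolution throughout the backbone
first_conv = next(m for m in model.modules() if isinstance(m, nn.Conv2d))
first_conv.stride = (1, 2)

out = model(torch.randn(1, 1, 819, 256))  # forward pass still works fully convolutionally
</code></pre>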
<p>We train the model for 20 epochs, and apart from random vertical flip, we also employ mixup. We randomly mix two images by drawing from a beta distribution with alpha=beta=5, so the blend is mostly equal. We always take the maximum of both labels, as the final label should be target=1 as soon as there is some form of signal in the data. We only mix between uncleaned and cleaned images respectively. What we also found helpful is to re-normalize the images after mixing, to keep the nature of the data intact for inference. We also employ 4xTTA (regular, hflip, vflip, hflip+vflip).</p>
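<p>The mixup step can be sketched roughly as follows (simplified illustration, not the exact implementation):</p>
<pre><code>import torch

def mixup_max(x, y, alpha=5.0, beta=5.0):
    # x: (B, C, H, W) normalized spectrograms, y: (B,) binary targets
    lam = torch.distributions.Beta(alpha, beta).sample()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[perm]
    # re-normalize each mixed image so input statistics match inference time
    mean = x_mix.mean(dim=(1, 2, 3), keepdim=True)
    std = x_mix.std(dim=(1, 2, 3), keepdim=True)
    x_mix = (x_mix - mean) / (std + 1e-6)
    # any signal in either source image keeps the positive label
    y_mix = torch.maximum(y, y[perm])
    return x_mix, y_mix
</code></pre>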
<p>Our final single fold validation score is 0.976 with a public LB score of 0.965 and private LB score of 0.964 (nearly closing the cv/lb gap). The fullfit (trained on all data with the same parameters) scores 0.967 public LB and 0.967 private LB.</p>
<h1>Outlook</h1>
<p>We already had a very good solution with a public LB score close to 0.800 before starting to additionally clean the backgrounds in images. We quickly realized that this has a major impact on the solution and focussed our efforts on utilizing the information to its fullest extent. Thus, bound by long training times, we did not explore every approach: In the early stage of the competition, we also trained models with a kind of self attention based on the OFF-channel activations. While not yielding better scores on its own, this proved to be valuable when used in blends as it seems to add diversity. One other thing we noticed is that pseudo labeling has a positive impact as it may help with the train/test domain shift. We also explored additional techniques and research to tackle domain shift.</p>
<blockquote>
<p>So top1 decided it is s-shape new class and set it to label one for all examples using human eye</p>
</blockquote>
<p>To clarify on this, of course we did NOT hand label any test data. We used a signal generator and randomly applied that (together with target=1) on the train set. </p>
<p>Regarding your point on generalization to new signals/classes, this is obviously a huge challenge in all computer vision tasks and not really solved, yet. Supervised ML can only reliably learn what is already in the training set. </p>
<p>With the first conv layer, we are referring to the first conv layer of the backbone. We modified the stride of that one. </p>
<p>I am sorry about the image in the above post, which is indeed a bit misleading in this case.</p>
<p>sure, it's "0000799a2b2c42d"</p> |
SETI Breakthrough Listen - E.T. Signal Search | 208th Solution - Common Approach, But Hope for Someone's Help | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SETI Breakthrough Listen - E.T. Signal Search
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to the all hosts & participants.<br>
Especially for the <a href="https://www.kaggle.com/ttahara" target="_blank">@ttahara</a> 's baseline[1]. I reused the notebook throughout my experiments.</p>
<p>Also, Google Colabo Pro & Kaggle Notebook.</p>
<h2>Final Model</h2>
<p>Stacking of 5 models (Table 1).</p>
<p><b>Table 1: Models and Score of Final Submission</b></p>
<table>
<thead>
<tr>
<th>Model</th>
<th>Private LB</th>
<th>Public LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>EfficientNet-B2, 512x512, 5-fold</td>
<td>0.763</td>
<td>0.764</td>
</tr>
<tr>
<td>EfficientNet-B0, 512x512, 5-fold</td>
<td>0.757</td>
<td>0.759</td>
</tr>
<tr>
<td>EfficientNet-B0, 512x512, 3-fold, controlling sampling rate, logical-or on mix-upped target</td>
<td>0.730</td>
<td>0.733</td>
</tr>
<tr>
<td>ResNet18d, 512x512, 5-fold, controlling sampling rate</td>
<td>0.745</td>
<td>0.744</td>
</tr>
<tr>
<td>ResNet18d, 320x320, 5-fold</td>
<td>0.719</td>
<td>0.720</td>
</tr>
<tr>
<td>Stacking 5 models (1-hidden-layer MLP)</td>
<td>0.764</td>
<td>0.765</td>
</tr>
</tbody>
</table>
<p>In the early trials, I was wondering about class imbalance problem. Since AUC-ROC equally evaluates the minority and majority class, I thought simple BCE loss over-penalize the positive set, and make it hard to detect them (which I finally found that's NOT TRUE).<br>
So I balanced class by controlling sampling rate of training set (i.e. oversampling positive set or/and downsampling negative set). I also added the train/test's B-observations (1, 3, 5 channels) as the negative sets to help models learning the background information. A typical settings are shown in table2.</p>
<p>The trial boosted the training speed, but at the same time it suppress the score: feeding the whole dataset improves score about +0.03. I think it is because "knowing background" is an important information for the competition.</p>
<p><b>Table 2: Number of Samples for Train Dataset</b></p>
<table>
<thead>
<tr>
<th>target</th>
<th>train-pos-A</th>
<th>train-pos-B</th>
<th>train-neg-A</th>
<th>total</th>
<th>ratio</th>
<th>after mix-up</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>0</td>
<td>4,000</td>
<td>4,000</td>
<td>8,000</td>
<td>0.67</td>
<td>0.45</td>
</tr>
<tr>
<td>1</td>
<td>4,000</td>
<td>0</td>
<td>0</td>
<td>4,000</td>
<td>0.33</td>
<td>0.55</td>
</tr>
<tr>
<td>total</td>
<td>4,000</td>
<td>4,000</td>
<td>4,000</td>
<td>12,000</td>
<td>1.0</td>
<td>1.0</td>
</tr>
</tbody>
</table>
<h2>Other Settings (of the Best Score Model)</h2>
<ul>
<li>Augmentations: <ul>
<li>Resize 512x512</li>
<li>HorizontalFlip</li>
<li>VerticalFlip</li>
<li>ShiftScaleRotate (rotate_limit: 0)</li>
<li>MotionBlur</li>
<li>Sharpen</li>
<li>MixUp</li></ul></li>
<li>Loss<ul>
<li>BCE loss</li></ul></li>
<li>Epoch: 20</li>
<li>Inference<ul>
<li>5-fold average</li>
<li>TTA x4: original, vertical flip, horizontal flip, both</li></ul></li>
</ul>
<h2>Not Worked</h2>
<ul>
<li>Weighed BCE Loss (Class-Balanced Loss) [2]</li>
<li>learn image scaling[3]</li>
<li>logical-OR of mix-upped target</li>
<li>Image preprocessing (column-wise diff, log1p transform, and min/max-clipping)</li>
</ul>
<h2>Reference</h2>
<p>[1] <a href="https://www.kaggle.com/ttahara/rerun-seti-e-t-resnet18d-baseline" target="_blank">https://www.kaggle.com/ttahara/rerun-seti-e-t-resnet18d-baseline</a><br>
[2] <a href="https://www.kaggle.com/c/seti-breakthrough-listen/discussion/265973" target="_blank">https://www.kaggle.com/c/seti-breakthrough-listen/discussion/265973</a><br>
[3] <a href="https://www.kaggle.com/tuckerarrants/seti-learned-image-resizing" target="_blank">https://www.kaggle.com/tuckerarrants/seti-learned-image-resizing</a></p>
<p>As for top ranker's solutions, it seems larger model performs better.<br>
I am curious about anyone who achieves high score with relatively small models.<br>
If anyone who can achieve the high score with small model, please share the approach.</p>
<p><a href="https://www.kaggle.com/bakeryproducts" target="_blank">@bakeryproducts</a> Thank you for sharing.</p>
<p>Do you think the score difference is due to the model choice or other trials (or both)?</p>
<p>According to your solution, you tried other trials like larger image size, logical-OR on mix-upped target, freezing backbone etc. Which trial do you think is the most effective to the score?</p>
<p><a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> Thank you for sharing your model and score. Larger model, larger image size was the key to the score LB 0.779. In my situation, I don't have my own workstation, I have to train within the limit of cloud service like Google Colab and Kaggle notebook. So I didn't dare to train larger model than EfficientNet-B2. I think sufficient computing resource is de facto standard for today's kaggle competition.</p>
<p><a href="https://www.kaggle.com/bakeryproducts" target="_blank">@bakeryproducts</a> I didn't come up with training over hundreds of epochs. It's like training the model from scratch. Thank you for the advice.</p>
<p><a href="https://www.kaggle.com/philippsinger" target="_blank">@philippsinger</a> Variation of data was also crucial to this competition. </p>
<p>Personally, one more important thing is EDA. I did some EDA and error analysis, but I couldn't find out useful information from them, or, if I find something (S-signal, CV-LB difference), I didn't dig further or doing something by utilizing the fact. </p> |
SETI Breakthrough Listen - E.T. Signal Search | 2nd Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SETI Breakthrough Listen - E.T. Signal Search
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Kaggle and Berkeley SETI Research Center for this competition. Finding the aliens was very exciting. I tried many things to close the gap between cv and lb, but few succeeded, and as a result, my solution is very simple.</p>
<h2>Abstract</h2>
<p>My solution is a two-step process. The first stage is training with train data only, and the next stage is training with test data and pseudo labels. In the first stage, I use only vflip, cutout, and mixup to prevent the model from being confused by losing the signal due to unexpected augmentation. In addition, StochasticDepth and Dropout are strongly used to prevent overtraining. The mixup uses logical OR instead of alpha blending when mixing targets. This allows creating a model that responds strongly to weak signals. At this stage, we achieved PublicLB 0.800 by training efficientnetB5.</p>
<p>In the second stage, we finetune the first stage model using pseudo-labels created by using the pre-trained model from the first stage. In this case, we refer to NoisyStudent and use Stochastic Depth and Dropout more strongly to prevent overfitting to the pseudo-label. By repeating this process several times, the PublicLB 0.813 is reached. Finally, I simply averaged the five submissions with the highest LB scores.</p>
<h2>1st Stage Details</h2>
<p>You can see a sample of the code here.<br>
<a href="https://www.kaggle.com/hirune924/2ndplace-solution" target="_blank">https://www.kaggle.com/hirune924/2ndplace-solution</a></p>
<ul>
<li>The input images are merged in the time direction and resized to a size of 512x512.</li>
<li>The mixup target can be mixed by using the following expression to express a logical OR, which also supports soft targets when using pseudo labels.</li>
</ul>
<pre><code>y = y + y[index] - (y * y[index])
</code></pre>
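<p>A quick check of why this expression behaves as a logical OR while still handling soft pseudo-labels (illustrative snippet only, not part of the training code):</p>
<pre><code>import torch

y1 = torch.tensor([0.0, 1.0, 0.3, 0.9])
y2 = torch.tensor([0.0, 0.0, 0.8, 0.9])

mixed = y1 + y2 - y1 * y2     # soft logical OR
print(mixed)                  # tensor([0.0000, 1.0000, 0.8600, 0.9900])
</code></pre>
<p>For hard 0/1 targets it reduces exactly to a logical OR, and for soft targets the result always stays within [0, 1].</p>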
<ul>
<li>TTA with vflip is used for inference</li>
</ul>
<p>Sometimes, the loss became nan or the validation auc became 0.5 and the training did not progress. In this case, I solved the problem by training only the head and encoder BNs for one epoch first.</p>
<h2>2nd Stage Details</h2>
<p>The code is almost the same as the first step, adding pseudo-labels and making Stochastic Depth and Dropout even stronger for training.</p>
<h2>Didn't work</h2>
<ul>
<li>Unsupervised Domain Adaptation by finetune only BN</li>
<li>Add artificial signal, chi2noise, referring to setigen.</li>
<li>input stem stride=1</li>
<li>using old data</li>
<li>Anomaly detection based metric learning</li>
<li>Semi Supervised Learning</li>
<li>many other things…</li>
</ul>
<p>Use both train and test.</p>
<p>I tried L2 constrained softmax Loss, but the max I could get was about lb0.73.</p>
<p>I'm honored to have arrived at the same trick as you !</p>
<p>V100x1 GPU is used to train b5, and mixed precision training is used to save GPU RAM.</p>
<p>Typically, we use imagenet pre-trained models for the encoder and untrained Linear for the classifier. In the early stage of training, the encoder parameters are corrupted by the meaningless gradient through the untrained Linear, which makes the training unstable. To prevent this, I trained only the classifier first.</p>
<p>I achieve Public LB 0.80 at the first level alone. Also, the larger the model, the higher the score, but I reached the limit with b5!</p>
<p>My method differs from strict logical OR, and works on soft targets because I used the following continuous method that works similarly.<br>
y = y1 + y2 - (y1 * y2)</p>
<p>Your team's solution is also a great job. Congratulations on the 1st place.</p>
<p>Actually, I thought that using all the pseudo-labels instead of only target=1 is dangerous because it may overfit to the pseudo-labels.<br>
Therefore, we tried to use only those pseudo-labels that showed extreme target scores by threshold, but the Noisy Student style method using all pseudo-labels showed better and more stable results.<br>
I would have liked to increase the model size according to the paper, but there was not enough time.</p>
<p>Finally, the following models were blended. v2 was introduced to increase diversity.</p>
<table>
<thead>
<tr>
<th>model</th>
<th>public LB</th>
<th>private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>tf_efficientnet_b5_ns</td>
<td>0.81341</td>
<td>0.81094</td>
</tr>
<tr>
<td>tf_efficientnet_b5_ns</td>
<td>0.81384</td>
<td>0.81196</td>
</tr>
<tr>
<td>tf_efficientnet_b5_ns</td>
<td>0.81203</td>
<td>0.81209</td>
</tr>
<tr>
<td>tf_efficientnetv2_m_in21ft1k</td>
<td>0.81169</td>
<td>0.80978</td>
</tr>
<tr>
<td>tf_efficientnetv2_m_in21ft1k</td>
<td>0.80626</td>
<td>0.80486</td>
</tr>
</tbody>
</table>
<p>Yes, that's what I'm using. Apparently, it does not work for all models, but it does work for many models that are structured like ResNet.</p> |
SETI Breakthrough Listen - E.T. Signal Search | 3rd place solution (Triplet Attention) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SETI Breakthrough Listen - E.T. Signal Search
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks!</p>
<p>Refining also used a 5-fold split on the training set, just like the first-stage training.<br>
So I added all the pseudo-labeled images to each training fold.</p>
<p>Hi thanks!</p>
<p>Pseudo-labels have been added to the new test set only.</p>
<p>Thanks!</p>
<p>In the comparison with EfficientNet-B4, the training time for 1 epoch was reduced from 387 seconds to 347 seconds.</p>
<p>We thank all organizers for this very exciting competition. <br>
Congratulations to all who finished the competition and to the winners.</p>
<h2>Summary</h2>
<p>Best submission was an emsemble of four EfficientNets with <a href="https://arxiv.org/abs/2010.03045" target="_blank">Convolutional Triplet Attention Module</a>.</p>
<p><strong>This post was updated on September 1, 2021.</strong><br>
The main changes are as follows.</p>
<ol>
<li>Added stage0 to train the model for pseudo-label creation.</li>
<li>Already used pseudo-labels in stage1, and fixed that.</li>
<li>Update initial learning rate value.</li>
</ol>
<p>The implementation, including training code, has also been released at the following URL.<br>
<a href="https://github.com/knjcode/kaggle-seti-2021" target="_blank">https://github.com/knjcode/kaggle-seti-2021</a></p>
<h2>Validation and Preprocess</h2>
<ul>
<li>Use new train data only</li>
<li>StratifiedKFold(k=5)</li>
<li>Use ON-channels only (819x256) and resize (768x768)</li>
</ul>
<h2>Model Architecture</h2>
<p>Backbone -> Triplet Attention -> GeM Pooling -> FC</p>
<ul>
<li>Use <a href="https://arxiv.org/abs/1905.11946" target="_blank">EfficientNet</a> B1, B2, B3 and B4 backbone</li>
<li><a href="https://arxiv.org/abs/2010.03045" target="_blank">Triplet Attention</a> (increase kernel size from 7 to 13)<ul>
<li>Add only one attention layer after the backbone</li>
<li>I used the implementation in <a href="https://github.com/rwightman/pytorch-image-models/blob/499790e117b2c8c1b57780b73d16c28b84db509e/timm/models/layers/triplet.py" target="_blank">here</a></li></ul></li>
<li><a href="https://arxiv.org/abs/1711.02512" target="_blank">GeM Pooling</a> (p=4)</li>
<li>To accelerate the training, replace Swish to ReLU (B3 and B4 only)</li>
</ul>
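<p>A minimal sketch of this head (Backbone -> attention -> GeM(p=4) -> FC) is given below. The Triplet Attention module itself comes from the linked timm implementation and is only referenced here; everything else is illustrative, not the exact training code:</p>
<pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F
import timm

class GeM(nn.Module):
    # generalized-mean pooling with fixed p (p=4 as in this write-up)
    def __init__(self, p=4.0, eps=1e-6):
        super().__init__()
        self.p, self.eps = p, eps

    def forward(self, x):
        x = x.clamp(min=self.eps).pow(self.p)
        return F.avg_pool2d(x, (x.size(-2), x.size(-1))).pow(1.0 / self.p)

class SetiNet(nn.Module):
    def __init__(self, backbone="tf_efficientnet_b4", attention=None):
        super().__init__()
        # num_classes=0 and global_pool='' make timm return the raw feature map
        self.backbone = timm.create_model(backbone, pretrained=True,
                                          in_chans=1, num_classes=0, global_pool="")
        self.attention = attention or nn.Identity()  # e.g. TripletAttention(kernel_size=13)
        self.pool = GeM(p=4.0)
        self.fc = nn.Linear(self.backbone.num_features, 1)

    def forward(self, x):
        x = self.backbone(x)          # (B, C, H, W) feature map
        x = self.attention(x)
        x = self.pool(x).flatten(1)   # (B, C)
        return self.fc(x)
</code></pre>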
<h2>Training</h2>
<p>The training process consists of three stages</p>
<h3>Stage 0</h3>
<p>This model will not be used for the final submission.</p>
<p>Training EfficientNet-B4</p>
<ul>
<li>60epochs DDP AMP</li>
<li>Focal Loss (gamma=0.5)</li>
<li><a href="https://github.com/facebookresearch/madgrad" target="_blank">MADGRAD</a> optimizer<ul>
<li>Initial Lr 1e-2 LinearWarmupCosineAnnealingLR (warmup_epochs=1)</li></ul></li>
<li>Mixup (alpha=1.0)<ul>
<li>If the target of either data is 1, mixed_target will also be set to 1</li></ul></li>
<li>Data Augmentation<ul>
<li>Horizontal and Vertical flip (p=0.5)</li>
<li>ShiftScaleRotate (shift_limit=0.2, scale_limit=0.2, rotate_limit=0)</li>
<li>RandomResizedCrop (scale=[0.8,1.0], ratio=[0.75,1.333])</li></ul></li>
</ul>
<p>Generate Pseudo labels from oof prediction of this model from new test set. <br>
(Select 5,000 images each of positive and negative data with high confidence)</p>
<h3>Stage 1</h3>
<p>Training EfficientNet-B1,B2,B3 and B4 by adding pseudo labeled images.</p>
<p>The training settings are almost the same as stage0.</p>
<ul>
<li>60epochs DDP AMP</li>
<li>Focal Loss (gamma=0.5)</li>
<li><a href="https://github.com/facebookresearch/madgrad" target="_blank">MADGRAD</a> optimizer<ul>
<li>Initial Lr 1e-2 or 1e-3 LinearWarmupCosineAnnealingLR (warmup_epochs=5)</li></ul></li>
<li>Mixup (alpha=1.0)<ul>
<li>If the target of either data is 1, mixed_target will also be set to 1</li></ul></li>
<li>Data Augmentation<ul>
<li>Horizontal and Vertical flip (p=0.5)</li>
<li>ShiftScaleRotate (shift_limit=0.2, scale_limit=0.2, rotate_limit=0)</li>
<li>RandomResizedCrop (scale=[0.8,1.0], ratio=[0.75,1.333])</li></ul></li>
</ul>
<h3>Stage2</h3>
<p>Refine stage1 models and generate submission with TTA</p>
<p>I refined the model with lighter data augmentation than stage1 for 10epochs.</p>
<ul>
<li>10epochs DDP AMP</li>
<li>Focal Loss (gamma=0.5)</li>
<li>MADGRAD optimizer<ul>
<li>Initial Lr 1e-4 LinearWarmupCosineAnnealingLR (warmup_epochs=0)</li></ul></li>
<li>Horizontal and Vertical flip only with Mixup</li>
<li>4xTTA in inference<ul>
<li>regular, hflip, vflip, hflip+vflip</li></ul></li>
</ul>
<p>Best Single Model (Efficientnet-B4)</p>
<ul>
<li>5foldCV: 0.9048 PublicLB: 0.80586 PrivateLB: 0.80294</li>
</ul>
<p>Ensemble of 4 EfficinetNets</p>
<ul>
<li>5foldCV: 0.9100 PublicLB: 0.80575 PrivateLB: 0.80475</li>
</ul>
<h2>Reference</h2>
<ul>
<li><a href="https://arxiv.org/abs/1905.11946" target="_blank">EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks</a></li>
<li><a href="https://arxiv.org/abs/2010.03045" target="_blank">Rotate to Attend: Convolutional Triplet Attention Module</a></li>
<li><a href="https://arxiv.org/abs/1711.02512" target="_blank">Fine-tuning CNN Image Retrieval with No Human Annotation</a></li>
</ul>
<p>Thanks!</p>
<p>I have only used the MADGRAD optimizer in this competion, so I don't know the difference between it and other optimizers.</p>
<p>Thanks!</p>
<p>The difference between with and w/o Triplet Attention was confirmed only with resnet34.<br>
The CV scores are as follows. (In this case, kernel_size=7)</p>
<p>with Triplet Attention: 0.88807<br>
w/o Triplet Attention: 0.88762</p>
<p>Sorry, I can't check LB because I have already deleted the model.</p>
<p>Thanks!</p>
<p>I used the code in the following link.<br>
<a href="https://github.com/rwightman/pytorch-image-models/blob/499790e117b2c8c1b57780b73d16c28b84db509e/timm/models/layers/triplet.py" target="_blank">https://github.com/rwightman/pytorch-image-models/blob/499790e117b2c8c1b57780b73d16c28b84db509e/timm/models/layers/triplet.py</a><br>
I changed the kernel_size of the convolution layer from 7 to 13.</p>
<p>There is also an official github repository by the authors of the Triplet Attention paper.<br>
<a href="https://github.com/landskape-ai/triplet-attention" target="_blank">https://github.com/landskape-ai/triplet-attention</a></p>
<p>Sorry, the model architecture was a typo.</p>
<p>It was not resnet18, it was resnet34.<br>
(I also corrected my reply)</p>
<p>Thanks!</p>
<p>I think the long training epochs was important in my model architecture and training setup this time.</p>
<p>In the early days when I started working on the competition, I trained models for 20 epochs, and most of the time the best score was in the last epoch or just before it.<br>
So I increased training epochs to 30 or 60, and the scores improved further.<br>
Even when training for 30 epochs, the best score was often at or just before the last epoch, but not so with 60 epochs, which is why I finally decided to use 60 epochs.</p>
<p>I am wondering if there is any difference in the effect of long training epochs of the model with and without Triplet Attention, but since I have not trained long epoch a model without it, I do not know the effect.</p>
<p>Thanks!</p>
<p>I trained the model using only the new train set.</p>
<p>However, I did use pseudo labels in the second-stage training.<br>
If there was an s-shape target that the first-stage model could determine with high confidence (i.e., it was among the top 5,000 positives), it was included in the training data.</p>
<p>Also, I used soft targets for the pseudo labels (a small sketch follows below).<br>
They are not hard 0/1 values, but the probabilities predicted by the model.</p>
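<p>As a small illustration (the dataframe column names here are assumptions, not the original code), building such soft pseudo labels could look like:</p>
<pre><code>import pandas as pd

def add_soft_pseudo_labels(train_df, test_ids, test_probs, k=5000):
    # Keep only the k test rows the first-stage model is most confident
    # are positive, and use the predicted probability itself as the target.
    pseudo = pd.DataFrame({"id": test_ids, "target": test_probs})
    pseudo = pseudo.nlargest(k, "target")          # soft targets, not 0/1
    return pd.concat([train_df, pseudo], ignore_index=True)
</code></pre>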
SETI Breakthrough Listen - E.T. Signal Search | 4th Place Solution. Steven Signal | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SETI Breakthrough Listen - E.T. Signal Search
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Intro</h1>
<p>First of all, we want to thank the organizers and the Kaggle team for hosting the competition. We believe that someday AI will open a big road to exploring the Universe.</p>
<p>Besides that, we want to point out the following projects:</p>
<ul>
<li><a href="https://pypi.org/project/timm/" target="_blank">timm, rwightman</a></li>
<li><a href="https://pytorch.org/" target="_blank">pytorch</a></li>
<li><a href="https://github.com/qubvel/" target="_blank">qubvel, TTA, cls models, seg models, etc. Great stuff</a></li>
<li>feel free to mention any project/author we forget about. We are happy to update this post with references</li>
</ul>
<h1>TLDR</h1>
<p>Classification ensemble with MSDA and focal loss. 2 models types: nf-regnetb1, HRNet18. Spatial cadence with guide layer for keeping ABACAD information</p>
<h1>Input, preprocessing</h1>
<p>Most of our experiments were on original resolution (img/sec reasons). Within the last couple of weeks, we scaled up to current image sizes.</p>
<ul>
<li><p>We use spatial vstack, 6 frames side by side, as most of you did. Samples upscaled with cubic interpolation x3 by frequency axis, scaled down to 6x160 by time axis. The resulting sample size is 960x768</p></li>
<li><p>We use a 'guide layer' as the second input layer: a 960x768 mask of -1/+1 structure, +1 for 'A' frames and -1 for 'B', 'C', 'D' frames (a sketch follows this list)</p></li>
<li><p>NN input is ~70% random crop from 960x768x2</p></li>
</ul>
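<p>A minimal sketch of building this two-channel input (before any resizing), assuming the cadence array has shape (6, time, freq):</p>
<pre><code>import numpy as np

def build_input(cadence):
    # Stack the six frames along the time axis and add a "guide layer"
    # that is +1 for the ON ('A') frames and -1 for the OFF frames.
    img = np.vstack(cadence).astype(np.float32)            # (6*time, freq)
    guide = np.empty_like(img)
    t = cadence.shape[1]
    for i in range(6):
        guide[i * t:(i + 1) * t] = 1.0 if i % 2 == 0 else -1.0
    return np.stack([img, guide], axis=0)                  # (2, 6*time, freq)
</code></pre>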
<p>Augmentation (on batches, hand-coded and bit of kornia):</p>
<ul>
<li>Random Crop</li>
<li>HFlip, VFlip</li>
<li>Special tuned MSDA</li>
</ul>
<p>CV split 15k / 45k, cv is correlated with LB</p>
<h3>MSDA</h3>
<p>We feel like MSDA is one of the key parts of this solution.</p>
<ul>
<li><p><a href="https://arxiv.org/abs/1710.09412" target="_blank">MixUp</a> Used on all batches, alpha=2. <strong>But</strong> mixing any sample with an anomaly results in an anomaly label, i.e. the labels and the permuted labels are the same. This was done so as not to penalize the NN with loss from the mixed anomaly. So one can say it is not MixUp at all.</p></li>
<li><p><a href="https://arxiv.org/abs/1905.04899" target="_blank">CutMix</a> Used on all batches, with parameters from the official implementation. <strong>But</strong> the mixing happens in two separate sub-batches: the first contains only anomalies, and the second the rest (a rough sketch follows this list). This was done due to the inability to distinguish the position of the signal inside the frame. So one can say it is not CutMix at all.</p></li>
</ul>
<p>We tried other techniques like AugMix, but it's hard to scale up image processing speed with a larger image size. Therefore, we ended up using only batch-wise augmentations.</p>
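<p>A rough sketch of the CutMix sub-batch idea, where <code>cutmix_images</code> stands in for any standard CutMix routine that only swaps image patches (labels stay constant within each group, so they need no mixing):</p>
<pre><code>import torch

def cutmix_within_groups(x, y, cutmix_images):
    # Split the batch into the anomaly sub-batch and the rest,
    # and apply CutMix separately inside each group.
    pos = (y >= 0.5).nonzero(as_tuple=True)[0]
    neg = (y < 0.5).nonzero(as_tuple=True)[0]
    for idx in (pos, neg):
        if idx.numel() > 1:
            x[idx] = cutmix_images(x[idx])
    return x, y
</code></pre>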
<h1>Train</h1>
<p>Used models: timm nf-regnetb1 - our light one, timm HRNet18 - larger one. Such pick is random, but we feel like regnet ideas are correct, and HRnet looks like a good choice for this competition.</p>
<p>Training time for nf-regnetb1 is ~7hrs, for HRNet ~24hrs (4xV100)</p>
<h3>Train Features</h3>
<ul>
<li><a href="https://arxiv.org/abs/1708.02002" target="_blank">FocalLoss</a>, gamma=2, alpha=.7</li>
<li><a href="https://github.com/pytorch/examples/blob/master/distributed/ddp/README.md" target="_blank">DDP</a> multiple GPUs</li>
<li><a href="https://pytorch.org/docs/stable/notes/amp_examples.html" target="_blank">AMP</a> to reduces memory requirements for training models, enabling larger models or larger batches</li>
<li><a href="https://fastai.github.io/timmdocs/training_modelEMA" target="_blank">EMA</a> (timm)</li>
<li>usual LR stuff, warmup, cosine decay </li>
<li>early in competition SAM, but with AMP and DDP it requires much careful tuning, fails with numerical instability (and x2 training time)</li>
<li>all data in the tensor dataset, sharded between GPUs (which is not a good idea accuracy-wise, but it is fast)</li>
</ul>
<h1>Inference, postprocess</h1>
<p>TTA: flips, multiplier, sometimes additional Gaussian noise(mean=0, std=[0,.1,.2]).</p>
<p>Ensemble, merge is simple average:</p>
<ul>
<li><a href="https://arxiv.org/pdf/2101.08692.pdf" target="_blank">nf-regnetb1</a>, 384, 768, split1 </li>
<li><a href="https://arxiv.org/pdf/2101.08692.pdf" target="_blank">nf-regnetb1</a>, 384, 768, split2, minor MSDA changes</li>
<li><a href="https://github.com/HRNet/HRNet-Image-Classification" target="_blank">HRNet18</a>, 448, 896, split1</li>
<li><a href="https://github.com/HRNet/HRNet-Image-Classification" target="_blank">HRNet18</a>, 448, 896, split1, minor MSDA changes, Focal with lower gamma and alpha</li>
</ul>
<p>CV score for ensemble parts: .901, .901, .908, .908</p>
<p>LB: .794, .794, .797, .797</p>
<h1>LB basic roadmap</h1>
<ul>
<li>.780 simple cnn, but with MSDA</li>
<li>.783 +guide layer</li>
<li>.788 +focal </li>
<li>.794 +upscale resolution</li>
<li>.797 +heavy model</li>
<li>.800 +ensembles</li>
</ul>
<p>Update. Best single model HRNet18 488x896 PB 0.80095; LB 0.79766</p>
<h1>Shortlist of things that did not work</h1>
<p>Channel-wise cadence, consistency loss, AUC proxy loss, frequency normalizing, whitening, dynamic sampling, selective sampling, autoencoders, VAE, VQVAE, MEMAE, more of XXXAE and unsupervised methods, <em>PSEUDO LABELING, best was non-harming one</em>, KNN, DBSCAN clustering of train/ test with adversarial models, frames decomposition, transformers, logits ensembles, shuffling cadences, creating additional data from test B, C, D frames, space2depth and other stems.</p>
<p>Probably a personal record.</p>
<hr>
<h3>P.S.</h3>
<p>There was a belief that finding some data insight could give that jump in the final score. However, spending nearly all of our time and effort on it was not worth it.</p>
<p>As we failed to make use of almost all of our data-related discoveries (such as the difference between train and test), we leave that write-up to other teams.</p>
<h3>Upd after reading 1-st place, congrats btw</h3>
<ol>
<li>We used only new data; the old data gave us such a small boost that we dropped it.</li>
<li>We also found the sine-shaped signal and tried to add it with setigen / hand-coded generation, but with no improvement to the score.</li>
<li>We also added near-1 probabilities as pseudo labels, but the score immediately dropped to .75. So we did DBSCAN clustering on the test images and selected a certain 'base' group (not really a cluster). Adding it as pseudo labels dropped the score only slightly, and basically we could not go further than that.</li>
<li>Cleaning images for us ended at trimmed normalization by frequency and time, which helped visually, but not scoring-wise.</li>
</ol>
<p>Thanks, we didn't plan to, but maybe in the future.</p>
<p>And this is what I don't like about the platform: we can't really start a discussion until it's over, and people are tired after a month or more of intensive research. I'm happy you got something from my write-up.</p>
SETI Breakthrough Listen - E.T. Signal Search | 58th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SETI Breakthrough Listen - E.T. Signal Search
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for competition hosts and participants!</p>
<p>This is the 58th place solution (public LB: 83rd place).</p>
<h1>Method</h1>
<ul>
<li>ensemble of 5 models, weighted using ridge regression (a blending sketch follows the model list below)</li>
</ul>
<table>
<thead>
<tr>
<th>Model</th>
<th>CV</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr><td>efficientnet_b4 (6 channel, 512)</td><td>0.8450</td><td>0.73689</td><td>0.74290</td></tr>
<tr><td>efficientnetv2_m (576)</td><td>0.8749</td><td>0.77383</td><td>0.77542</td></tr>
<tr><td>efficientnetv2_s (576)</td><td>0.8738</td><td>0.77502</td><td>0.77084</td></tr>
<tr><td>efficientnetv2_s (576)</td><td>0.8738</td><td>0.77360</td><td>0.77222</td></tr>
<tr><td>efficientnet_b4 (512)</td><td>0.8731</td><td>0.77383</td><td>0.77542</td></tr>
</tbody>
</table>
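<p>A minimal sketch of the ridge-regression blending mentioned above, assuming the out-of-fold predictions are stacked into a matrix with one column per model:</p>
<pre><code>import numpy as np
from sklearn.linear_model import Ridge

def ridge_blend(oof_preds, y_true, test_preds, alpha=1.0):
    # oof_preds: (n_samples, n_models), test_preds: (n_test, n_models)
    blender = Ridge(alpha=alpha)
    blender.fit(oof_preds, y_true)          # learn per-model weights
    return blender.predict(test_preds)      # blended test prediction
</code></pre>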
<ul>
<li>stratified 5 fold</li>
<li>mixup alpha=1(some model used alpha=0.4~0.6)</li>
<li>data aug (hflip, vflip, shift scale rotate, coarse dropout, motion blur, IAA sharpen)</li>
<li>TTA (hflip, vflip, hflip+vflip)</li>
<li>around 15 epochs</li>
<li>adamW optimizer + cosine annealing LR</li>
<li>use 3 channels (maybe the key idea of this competition)</li>
<li>large input boosts LB</li>
</ul>
<p>Ensembling with other models (LB around 0.77) was not effective.</p>
<p>We started 2 weeks before the deadline, so it was hard to find time to fine-tune training parameters and select models.</p>
<h1>not working</h1>
<ul>
<li>other model(seresnext50, efficientnet_b0,b3, efficientnet_v2_b3, swin-transformer, nfnet-0, resizer model, volo)</li>
<li>mean output (LB:0.77987 PB:0.77936) This doesn't look so bad.</li>
<li>weights from logistic regression</li>
<li>6 channels (ensembling with 3 channels looks effective)</li>
<li>stacking classifiers (SVC, NN, logistic regression, etc.) This improved the OOF validation score, but it may have overfit.</li>
<li>multi loss (BCE loss + L1 smooth loss)</li>
<li>retraining (I may have made mistakes in selecting the learning rate)</li>
<li>pseudo labeling (overfit)</li>
</ul>
<h1>Regret</h1>
<ul>
<li>try other EfficientNet models (ensembling with these improved the private LB, and it might have been enough for silver).</li>
<li>adversarial validation to fill in the CV-LB gap. I posted about it here <a href="https://www.kaggle.com/c/seti-breakthrough-listen/discussion/266652" target="_blank">https://www.kaggle.com/c/seti-breakthrough-listen/discussion/266652</a></li>
<li>use 3 channels earlier (I wasted a few days…)</li>
<li>EDA (we spent almost all our time creating models)</li>
</ul>
<p>We didn't have enough time, and climbing the LB was very hard because we joined late, but I'm glad we joined and picked up some new ideas like the Resizer model and VOLO (thank you, authors).</p>
<p>Thanks</p> |
SETI Breakthrough Listen - E.T. Signal Search | 5th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SETI Breakthrough Listen - E.T. Signal Search
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you to all the participants for their hard work.<br>
Thanks also to the organizers and hosts.</p>
<p><strong>Working Topics:</strong></p>
<ul>
<li>eca_nfnet_l2 (4fold) + efficientnet_v2_m (4fold)</li>
<li>x5 tta (base, h-flip, v-flip, h&v-flip, scaling)</li>
<li>augment (Horizontal flip, Vertical flip, ShiftScaleRotate)</li>
<li>512x512 use 0/2/4 channel<br>
It tends to be better to use images with larger resolution for both LB and CV.</li>
<li>mixup</li>
<li>SHOT (<a href="https://github.com/tim-learn/SHOT" target="_blank">https://github.com/tim-learn/SHOT</a>)</li>
<li>pseudo labeling
As was said in the discussion, we needed to find the difference between training and test in this competition. As a way to resolve the domain shift, we used pseudo labels. We also assumed that there would be little swing between the public and private test sets, since the host had said they would be split randomly.
For the pseudo labels, we used the following two methods for ensembling. We believe this creates diversity and contributes to improved robustness.<ol>
<li>use all pseudo labels</li>
<li>use only highly reliable pseudo labels</li></ol></li>
</ul>
<p><strong>Not Working Topics:</strong></p>
<ul>
<li>focal loss, weighted cross entropy loss</li>
<li>Using off-target images</li>
<li>Using preprocessed images<br>
Training on merged images differentiated along the frequency direction.</li>
<li>K-means clustering<br>
Remove clusters that do not contain test images from the training data.</li>
<li>Using old SETI's data for Pretraining</li>
<li>DANN (<a href="https://arxiv.org/abs/1505.07818" target="_blank">https://arxiv.org/abs/1505.07818</a>)</li>
<li>Audio Spectrogram Transformer (<a href="https://arxiv.org/pdf/2104.01778.pdf" target="_blank">https://arxiv.org/pdf/2104.01778.pdf</a>)</li>
<li>Unsupervised Data Augmentation for Consistency Training (<a href="https://arxiv.org/pdf/1904.12848.pdf" target="_blank">https://arxiv.org/pdf/1904.12848.pdf</a>)</li>
<li>ROC_star (<a href="https://github.com/iridiumblue/roc-star" target="_blank">https://github.com/iridiumblue/roc-star</a>)</li>
</ul>
<p>We used soft labels for pseudo labeling.</p>
<p>The scores for the single model are as follows:</p>
<table>
<thead>
<tr>
<th></th>
<th>private</th>
<th>public</th>
</tr>
</thead>
<tbody>
<tr>
<td>eca_nfnet_l2</td>
<td>0.79763</td>
<td>0.79806</td>
</tr>
<tr>
<td>efficientnet_v2_m</td>
<td>0.79592</td>
<td>0.79690</td>
</tr>
</tbody>
</table>
<p>The CV has not been properly verified because of the different folds between models, but it is about publicLB +0.1.</p>
<p>TTA is Test-Time Augmentation: running inference on the original image and on augmented images (such as a horizontal flip), and taking the average.</p>
<p>for example following:</p>
<pre><code>x0 = x.to(device)
x1 = torch.flip(x0, [-1])
y = (model(x0) + model(x1)) / 2
</code></pre> |
SETI Breakthrough Listen - E.T. Signal Search | 6th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SETI Breakthrough Listen - E.T. Signal Search
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Ah, I don't resize the channels. I use a model with the first layer comprising convolutions of depth 2 (rather than 3). This is easy using the <code>timm</code> library, where you can do <code>model = timm.create_model(model_name, pretrained=True, in_chans=in_channels, num_classes=1)</code> where <code>in_channels</code> is 2. It's even clever enough to adjust the first convolutional layer's weights so you can use pretrained weights, see <a href="https://github.com/rwightman/pytorch-image-models/blob/a16a7538529e8f0e196257a708852ab9ea6ff997/timm/models/helpers.py#L193" target="_blank">here</a></p>
<p>I wrote my own RRC class that inherits from Albumentations' transforms. It has the extra feature where I can bias it to always keep the "bottom" of the spectrogram and only crop the top off. I ended up not really using that, though.</p>
<pre><code>import random

import cv2
import albumentations.augmentations.geometric.functional as fa
from albumentations.core.transforms_interface import ImageOnlyTransform


class RandomResizedCrop(ImageOnlyTransform):
    def __init__(self,
                 final_height,
                 final_width,
                 keep_bottom=False,
                 height_proportion_range=(0.3, 1.0),
                 width_proportion_range=(0.5, 1.0),
                 always_apply=False,
                 p=1.0):
        super().__init__(always_apply, p)
        self.final_height = final_height
        self.final_width = final_width
        self.keep_bottom = keep_bottom
        self.height_proportion_range = height_proportion_range
        self.width_proportion_range = width_proportion_range

    def apply(self, img, **params):
        orig_height, orig_width = img.shape[:2]
        # Sample what proportion of the original image to keep in each axis
        height_proportion = random.uniform(*self.height_proportion_range)
        width_proportion = random.uniform(*self.width_proportion_range)
        crop_height = round(height_proportion * orig_height)
        crop_width = round(width_proportion * orig_width)

        if self.keep_bottom:
            h_from = orig_height - crop_height  # Always crops to the bottom
        else:
            h_from = random.randint(0, orig_height - crop_height)
        w_from = random.randint(0, orig_width - crop_width)

        # Crop the sampled window, then resize back to the target size
        cropped_img = img[h_from:h_from + crop_height, w_from:w_from + crop_width]
        return fa.resize(cropped_img, self.final_height, self.final_width, cv2.INTER_LINEAR)
</code></pre>
<p>Thanks to the organisers, this was good fun!</p>
<p>I'm getting married tomorrow so I wasn't even sure if I'd rejoin the competition after the reset. However, the end of last week before I went on my stag do I thought I'd just get some models training and see how they went. Turns out they did quite well!</p>
<p>Just one model of mine did nearly all the heavy lifting and would have had me in 8th place, so I'll focus on that.</p>
<p>It was an EfficientNet-B7 that I trained in mixed precision at 1024 * 1024 resolution (and tested at 1280 * 1280).</p>
<p>I'm sure what made my model so effective was that I structured the input data so the model could identify whether a needle appearing/disappearing was at a transition (between "on" and "off" target) or 'within' a segment - these have fundamentally different significance. Also, you can't just put all the "ons" next to each other and all the "offs" next to each other, because then the times aren't aligned, and a continuously present signal such as the red line here…</p>
<p><img src="https://storage.googleapis.com/kaggle-media/competitions/SETI-Berkeley/Screen%20Shot%202021-05-03%20at%2011.39.42.png" alt="SETI signal"></p>
<p>… would jump around, and wouldn't be "trackable" across.</p>
<p>I therefore used a very simple solution, which was add an input channel which was 1 or 0 depending on whether that period was "on" or "off", see the penultimate line:</p>
<pre><code>import numpy as np

x = np.load(npy_path).astype(np.float32)
time, spec = x.shape[1:]
_x = np.zeros((spec, time * 6, 2))
_x[:, :, 0] = np.vstack(x).transpose()
for i_channel in range(len(x)):
_x[:, i_channel*time:(i_channel+1)*time, 1] = 1 if (i_channel % 2 == 1) else 0
x = _x
</code></pre>
<p>I figured this would allow convolutional kernels in the first layer to emerge which are "transitional" needle detectors and "non-transitional" needle detectors, which obviously is hugely relevant.</p>
<p>This also meant that if I did aggressive RandomResizedCropping (which I did) the model would still "know" if it was an On or Off signal, regardless, of its location in the image.</p>
<p>Other things:</p>
<ul>
<li>As with everyone else, Mixup was very important</li>
<li>HFlip, VFlip. </li>
<li>AdamW, OneCycle (5e-5 -> 1e-8), gradient clipping</li>
</ul>
<p>I didn't really try anything else because I only had time to train 3 different models! Didn't even have time to test TTA; my final model finished a few hours before competition end and I ran out of subs. There's a lesson in here somewhere…</p> |
SETI Breakthrough Listen - E.T. Signal Search | 86th - Bronze Medal - with Silver Gold Magic Explained | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SETI Breakthrough Listen - E.T. Signal Search
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Exciting Competition!</h1>
<p>What an interesting competition and difficult puzzle! Thank you Kaggle and SETI for hosting. I started this competition last Wednesday and worked hard every day. In this time, I closed CV LB gap from 0.125 to 0.104. I would have liked to have more time and continue the puzzle of closing the CV LB gap. Below is my journey over the past 7 days.</p>
<h1>My Model - CV 0.890 - LB 0.765 - GAP = 0.125 (Days 1-3)</h1>
<p>Last Wednesday, Thursday, and Friday, I built dozens of my own models without reading public discussions nor notebooks. I was excited when my best model achieved CV 0.890 but surprised and disappointed when I submitted and received LB 0.765 🙁This was a 0.125 gap between CV and LB. Something is going on!</p>
<p>My best model used only "on" cadence in a spatial stack resized to 768x768 and fed into a EfficientNetB4 backbone. The only augmentation was horizontal flip and vertical flip. It used cosine learning schedule and 15 epochs.</p>
<h1>Alternative Ideas (Days 3-4)</h1>
<p>At this point, I thought the gap was caused by new pattens in test data. So, on Friday and Saturday, i starting building more creative models that used more than just classify train patterns.</p>
<ul>
<li>Use "off" cadence train images as more <code>target=0</code>. Increased CV 0.003 but decreased LB 0.010</li>
<li>Use "off" cadence test images as more <code>target=0</code>. Did not affect CV but decreased LB like 0.050!</li>
<li>Use test pseudo labels. Decrease LB like 0.030!</li>
<li>Train model to classify "train" image versus "test" image. So discard target column and use <code>test image as target=1</code> and <code>train image as target=0</code>. The hope was that this model would find any new patterns in test data because that would be the difference between train and test. To my surprise the model achieved CV 0.99 but LB was terrible 🙁</li>
<li>Train model to classify "on" cadence <code>img[::2]</code> versus "off" cadence <code>img[1::2]</code>. So discard target column and use <code>cadence on as target=1</code> and <code>cadence off as target=0</code>. The advantage here is that we can train directly on test data without train data!</li>
<li>Sliding window over "on" cadence and compute cosine similarity with sliding window over "off" cadence. Take minimum cosine similarity value over all crops of "off" cadence. Then take maximum (of these minimums) over all crops of "on" cadence. One advantage here is that we can apply this to test data without using train data!</li>
</ul>
<p>My favorite model was the last which achieved LB 0.550 and did not use any train data. It simply compared crops of test image "on" cadence with crops of test image "off" cadence and searched for dissimilarity.</p>
<h1>Public Notebook - CV 884 - LB 0.779 - GAP = 0.105 (Days 5-6)</h1>
<p>By Saturday, i was frustrated that I couldn't reach Bronze zone. How was everyone doing it? </p>
<p>I began reading discussions and notebooks. I found <a href="https://www.kaggle.com/tuckerarrants" target="_blank">@tuckerarrants</a> amazing "SETI - Learned Image Resizing" notebook <a href="https://www.kaggle.com/tuckerarrants/seti-learned-image-resizing?scriptVersionId=68644223" target="_blank">here</a>. What impressed me most was that his <strong>GAP was only 0.102</strong>! His CV was 0.846 and his LB was 0.744. This was a smaller gap than my model. </p>
<p>Without understanding why, I quickly ran his notebook locally with larger models and larger image sizes to secure bronze position. The following table are results for fold 0 only. </p>
<p>Note that Tucker's notebook has a resize module before the EfficientNet, so we put <code>image_in</code> into the resize module and <code>image_out</code> comes out. We then feed <code>image_out</code> into the EfficientNet backbone:</p>
<table>
<thead>
<tr>
<th>Backbone</th>
<th>Image In</th>
<th>Image Out</th>
<th>CV - fold 0</th>
<th>LB - fold 0</th>
</tr>
</thead>
<tbody>
<tr>
<td>EB3</td>
<td>512</td>
<td>256</td>
<td>0.859</td>
<td>0.764</td>
</tr>
<tr>
<td>EB3</td>
<td>640</td>
<td>320</td>
<td>0.870</td>
<td>0.768</td>
</tr>
<tr>
<td>EB3</td>
<td>768</td>
<td>384</td>
<td>0.870</td>
<td></td>
</tr>
<tr>
<td>EB3</td>
<td>orig</td>
<td>448</td>
<td>0.872</td>
<td></td>
</tr>
<tr>
<td>EB4</td>
<td>640</td>
<td>320</td>
<td>0.873</td>
<td>0.770</td>
</tr>
<tr>
<td>EB4</td>
<td>768</td>
<td>384</td>
<td>0.872</td>
<td></td>
</tr>
<tr>
<td>EB4</td>
<td>orig</td>
<td>448</td>
<td>0.869</td>
<td></td>
</tr>
<tr>
<td>EB6</td>
<td>640</td>
<td>320</td>
<td>0.872</td>
<td>0.770</td>
</tr>
<tr>
<td>Ensemble</td>
<td></td>
<td></td>
<td>0.883</td>
<td>0.777</td>
</tr>
</tbody>
</table>
<p>Then 5-Fold was LB 0.778 and post process got LB 0.779</p>
<h1>Nvidia 4xV100 GPUs</h1>
<p>For the past few days, Nvidia V100 GPUs were running constantly to train the models in the table above. I am learning PyTorch and discovered a great trick <a href="https://pytorch.org/tutorials/beginner/blitz/data_parallel_tutorial.html#create-model-and-dataparallel" target="_blank">here</a>. Using Tucker's notebook, you can add 1 line of code to train PyTorch with multiple GPUs (for large backbones and batch sizes). Just replace the one line <code>model.to(device)</code> with the three lines:</p>
<pre><code>import torch.nn as nn
model = nn.DataParallel(model)
model.to(device)
</code></pre>
<p>That's it! With 1 line of code, we can train larger models and larger batch sizes by using multiple Nvidia GPU. </p>
<h1>The Secret of Timm Backbones</h1>
<p>Many people (including <a href="https://www.kaggle.com/dragonzhang" target="_blank">@dragonzhang</a>) were confused about why they could not train EfficientNet-B4 with Tucker's notebook. That is because timm does not contain an ImageNet-pretrained efficientnet_b4. (So if you use <code>model_name='efficientnet_b4'</code> you train a non-pretrained EffNet.)</p>
<p>Timm only contains pretrains of EB4, 5, 6, 7 of the TF converted weights, <code>tf_efficientnet_b4</code>, <code>tf_efficientnet_b4_ns</code>, <code>tf_efficientnet_b4_ap</code>. The first is original efficient net, the next two are pretrained with noisy student and adv prop. To view the full list of available timm pretrained backbones, type</p>
<pre><code>import timm
from pprint import pprint
model_names = timm.list_models(pretrained=True)
pprint(model_names)
</code></pre>
<h1>Model Difference - Mysterious "S" Shape! (Day 7)</h1>
<p>After securing bronze medal position, I began to investigate the CV LB gap again. Four hours before competition deadline, I plotted the difference between my model and public notebook model. I sorted by test images where the difference between my model prediction and public model prediction were greatest. To my surprise, I found a mysterious "S" shaped test pattern!</p>
<p>The image on the left is original comp test data "on" cadence vertical stack. With time on x axis and frequency on y axis. The second image is pixel histogram. The third image is "on" cadence with emboss, blur, and adaptive histogram equalization. (Filter code shown below) The fourth image (right image) is the "off" cadence.</p>
<pre><code>import cv2
clahe = cv2.createCLAHE(clipLimit=16.0, tileGridSize=(8,8))
# FILTERS FOR BETTER IMAGE DISPLAY
img = np.vstack(cadence[::2,])
img = img[1:,1:] - img[:-1,:-1] #emboss
img -= np.min(img)
img /= np.max(img)
img = (img*255).astype('uint8')
img = cv2.GaussianBlur(img,(5,5),0)
img = clahe.apply(img)
</code></pre>
<p>Above the left image we see the public notebook prediction value. And above the third image we see my model's prediction value.</p>
<p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Aug-2021/ex1b.png" alt=""><br>
<img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Aug-2021/ex2b.png" alt=""><br>
<img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Aug-2021/ex3b.png" alt=""><br>
<img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Aug-2021/ex4b.png" alt=""><br>
<img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Aug-2021/ex5b.png" alt=""><br>
<img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Aug-2021/ex6b.png" alt=""><br>
<img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Aug-2021/ex7b.png" alt=""><br>
<img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Aug-2021/ex8b.png" alt=""></p>
<h1>Post Process - LB +0.001 - GAP = 0.104 (Last 4 hours!)</h1>
<p>With only 4 hours remaining, I trained a model on the images that my model and public notebook model disagreed most on. I then inferred the entire test data. For all images with <code>p>0.6</code>, I increased their submission prediction. This post processed closed the CV LB gap by <code>0.001</code> and increased LB by <code>0.001</code>. Thus there is something special about these "S" patterns!</p>
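<p>A hypothetical sketch of this post process; the exact amount by which the predictions were increased is not stated, so the bump below is only an illustration:</p>
<pre><code>import numpy as np

def boost_s_shape(sub_targets, s_probs, threshold=0.6, bump=0.1):
    # Images that the auxiliary "S-shape" model flags with p > threshold
    # get their submission prediction increased (illustrative bump only).
    out = sub_targets.copy()
    flag = s_probs > threshold
    out[flag] = np.clip(out[flag] + bump, 0.0, 1.0)
    return out
</code></pre>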
<h1>UPDATE - Mixup is the Silver Medal Magic!</h1>
<p>After the comp ended, i have been continuing with experiments to determine why my original model's CV LB is large. It appears that mixup is very important to teach your model to generalize to test data which is different than the train data. It both closes the CV LB gap and boosts CV LB. </p>
<p>Using 1 fold of EB4 and image size 768x768 with only Hflip, Vflip, and Mixup (alpha=3, max target). One can achieve Silver medal with public LB 0.786 and private LB 0.781. Notebook <a href="https://www.kaggle.com/cdeotte/silver-medal-with-grad-cam-lb-0-780" target="_blank">here</a>. (Also important is large image size and large backbone).<br>
<img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Aug-2021/mixup3.png" alt=""></p>
<h1>UPDATE - "S-shape" and "Remove Noise" is Gold Medal Magic!</h1>
<p>To climb from Silver medal to Gold medal, we need to detect "s-shapes" and remove noise. The first place winning solution <a href="https://www.kaggle.com/c/seti-breakthrough-listen/discussion/266385" target="_blank">here</a> explains how. Additionally, repeatedly using test pseudo labels also helps climb to top of Gold Medals! (and using old data before reset can help too).</p>
<h1>UPDATE - Grad Cam</h1>
<p>I posted a discussion <a href="https://www.kaggle.com/c/seti-breakthrough-listen/discussion/268314" target="_blank">here</a> about grad cam and a notebook <a href="https://www.kaggle.com/cdeotte/silver-medal-with-grad-cam-lb-0-780" target="_blank">here</a> using grad cam. Using grad cam can help us understand what makes a particular image a <code>target=1</code>. For example, our model can make a test prediction and then circle what it thinks is causing a <code>target=1</code> prediction. The image below is 100% generated by code and no human labeling!<br>
<img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Aug-2021/grad_cam_3.png" alt=""></p>
<p>The code is posted above. For "on" cadence, i would read NumPy like </p>
<pre><code>file = np.load(path+name[0]+'/'+name+'.npy').astype('float32')
ON_CADENCE = file[::2,]
</code></pre>
<p>And for the same "off" cadence, use the same <code>name</code> and load</p>
<pre><code>file = np.load(path+name[0]+'/'+name+'.npy').astype('float32')
OFF_CADENCE = file[1::2,]
</code></pre>
<p>Next, i apply the following filter to make the spectrograms easier to see with human eye</p>
<pre><code>import cv2
clahe = cv2.createCLAHE(clipLimit=16.0, tileGridSize=(8,8))
img = np.vstack(CADENCE)
img = img[1:,1:] - img[:-1,:-1] #emboss
img -= np.min(img)
img /= np.max(img)
img = (img*255).astype('uint8')
img = cv2.GaussianBlur(img,(5,5),0)
img = clahe.apply(img)
</code></pre>
<p>To plot 4 tiles in a row, you use <code>subplot</code> like</p>
<pre><code>import matplotlib.pyplot as plt

plt.figure(figsize=(20,5))
plt.subplot(1,4,1)
plt.imshow(img1)
plt.subplot(1,4,2)
plt.imshow(img2)
plt.subplot(1,4,3)
plt.imshow(img3)
plt.subplot(1,4,4)
plt.imshow(img4)
plt.show()
</code></pre>
<p><a href="https://www.kaggle.com/friedchips" target="_blank">@friedchips</a> There are two efficientnet's you can use for TensorFlow. You can use the one in <code>https://keras.io/api/applications/</code> which was pretrained with image pixels of 0 thru 255. Or you can use <code>pip install efficientnet</code> <a href="https://pypi.org/project/efficientnet/" target="_blank">here</a> which was pretrained with image pixels normalized with mean 0 and std 1. </p>
<p>If you use the PyTorch efficientnet's. They are pretrained with pixels mean 0 and std 1.</p>
<p>It will help your EfficientNet converge faster to use the same pixel range that the pretraining used. However, if you get it wrong, the model should still converge and learn to adjust to what you are feeding it.</p>
<p>Hi Markus. My model is more than just an EfficientNet. It has a special module beforehand which learns to resize the image by itself. Here's a diagram of just the module. (The efficientnet is not in the picture):<br>
<img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Aug-2021/resize.png" alt=""><br>
So first i load the Numpy array and resize it to <code>image_in</code> with <code>img = np.vstack( img[::2] )</code> and <code>img = cv2.resize(img,(image_in,image_in))</code>. </p>
<p>Then it gets fed into the module. Then the module learns to resize it to <code>image_out</code>. Finally that result is fed into the EfficientNet. I used Tucker's public notebook <a href="https://www.kaggle.com/tuckerarrants/seti-learned-image-resizing" target="_blank">here</a></p>
<p>Note that i used this public notebook because i had a few days left and could not figure out how to close the CV LB gap with my own model. I am now investigating why that notebook has a smaller CV LB gap than my model. I have discovered that this special module is not needed. I believe the secret of that notebook is that it uses mixup augmentation whereas my model did not.</p>
<p>Wow, CV 0.90, that's great. I would love to see code that scores over LB 0.780. To me, achieving over 0.780 is still magic. No team posted code that achieves over LB 0.780 yet. (Second place posted code but it throws an error if you run it).</p>
<p>Thanks <a href="https://www.kaggle.com/datafan07" target="_blank">@datafan07</a> Congrats earning Silver medal. I was watching your team. At one point, i saw you jump from LB 775 to LB 783. What discovery did you make at that point?</p>
<p>I'd love to hear about your solution. What was your strongest single model? And what preprocessing and/or post processing techniques did you use?</p>
<p>Update. I have been conducting more experiments. I have updated my post to identify the Silver Medal Magic. And the Gold Medal Magic!</p> |
SETI Breakthrough Listen - E.T. Signal Search | 8th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SETI Breakthrough Listen - E.T. Signal Search
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks! Hope more medals to come 😃</p>
<p>Just winning gold medals without any sponsorship should also be fine 😆 </p>
<h1>TL;DR</h1>
<p>My best scoring prediction is an average of eca_nfnet_l1 (384x384 and 420x420), EfficientNet-B2 (384x384), -B6 (512x512 and 640x640) and -B7 (640x640) predictions with 2x tta (original + HFlip). The models were trained with FastAI locally on my 3070 and later 3080Ti (eca_nfnet_l1 and B2) and with TF on Kaggle TPU (B6 and B7) with LAMB as optimizer, SGD with restarts as lr schedule and applying slight weighing in cross-entropy loss function (2). New train data, old train and test data normalized per tile as well as soft labeled new test data were used for training with MixUp and HFlip as augmentations. All 6 tiles of the cadence were stacked and resized in two steps: first, upsized to 1638x1638 with INTER_NEAREST interpolation and then downsized with INTER_AREA.</p>
<h1>In detail</h1>
<h3>Selecting models</h3>
<p>As was noted by other teams too, not all models with good local CV had correspondingly good public LB. The two families of models that turned out to perform well were eca_nfnet_l's and EfficientNets, while resnets (regular and -d ones), ecaresnets, resnests and some others had a lower LB than I would anticipate from their CV scores. Unfortunately, I did not manage to extract much useful insight from this discrepancy between the LB and CV… Also, unsurprisingly, larger models performed slightly better than the smaller ones; at the same time, excluding the smaller models from the final ensemble worsened the score a bit. The best single model was a 4-fold EfficientNet-B7 (640x640) with a public LB of 0.80092 and private LB 0.79648.</p>
<h3>Image size and resizing</h3>
<p>For the training I used all the 6 tiles of cadence because before the restart of the competition such an approach performed best. While after the restart I kept using the same approach as I saw no gain to a slight negative decline in the score when using just the 3 positive tiles. Also, very early in the competition I discovered that the score was pretty sensitive to the resizing method. When downsizing the images to 256x256, INTER_AREA interpolation was the method of choice as any other interpolation available in OpenCV was consistently giving a 0.02-0.03 lower AUC for any model tested. In addition, changing two dimensions at the same time (like 1638x256 -> 320x320) I could never obtain a better score than with 256x256 inputs regardless of the interpolation. The only way for me to get a better score for the images with a frequency dimension greater than 256 px was to first scale the original inputs to 1638x1638 with INTER_NEAREST interpolation and then resize these large images to any smaller size. When upscaling the inputs in this two-step way, the larger input sizes always resulted in larger CV and LB scores. At the end, the input sizes I used in my training was a compromise between the available resources and the score.</p>
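<p>A short sketch of that two-step resizing with OpenCV:</p>
<pre><code>import cv2

def two_step_resize(img, out_size):
    # 1) upscale to 1638x1638 with nearest-neighbour interpolation,
    # 2) downscale to the final input size with INTER_AREA.
    big = cv2.resize(img, (1638, 1638), interpolation=cv2.INTER_NEAREST)
    return cv2.resize(big, (out_size, out_size), interpolation=cv2.INTER_AREA)
</code></pre>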
<h3>Optimizers and lr schedules</h3>
<p>The LAMB optimizer turned out to give a consistently better score (both with TF and FastAI) in comparison with any other optimizer I tried, even though LAMB was the slowest of them all. However, the consistent gain in score was worth the 10-20% slowdown per epoch. Regarding the lr schedules, SGD with warm restarts and training for 7 epochs (1 + 2 + 4) gave the best results in terms of training time and CV score when compared to the 1-cycle policy or cosine decay with warm-up, as well as training for more epochs. Finally, using cross-entropy with a light weighting of positive examples outperformed simple cross-entropy and the focal loss.</p>
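<p>As a PyTorch-style illustration of the lightly weighted loss (assuming the "(2)" in the TL;DR refers to the positive-class weight; the training itself was done in FastAI/TF):</p>
<pre><code>import torch
import torch.nn as nn

# Positive examples contribute twice as much to the loss as negatives.
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([2.0]))
</code></pre>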
<h3>Train data</h3>
<p>In my experiments, adding the normalized old train and test data to the new train data resulted in a consistent increase of 0.02-0.03 in the local CV. As it was obvious that the new test data differed a lot from the train sets, using the soft-labeled test data helped further improve the public LB score (at the same time it slightly decreased the local CV, but at this point I trusted the LB score more than my CV).</p>
<h3>Tools</h3>
<p>I did almost all my experiments in FastAI on my local computer with one RTX 3070. It was only in the last couple of weeks of the competition when I found a good way to upscale the inputs and that's when I ran out of my local resources (even though I upgraded my GPU to 3080Ti) and switched to TensorFlow and TPUs on Kaggle. In general, I find the amount of the train data quite suitable for fast experimentations even on a such relatively small machine. Also it was interesting to see that FastAI turned out to be faster than Lightning in several tests (at least in my hands :D), so I decided not to search for any alternatives and kept using FastAI almost exclusively. Finally, to follow all the experiments I used wandb and I should say it helped me a lot not to lose track of all the hundreds of experiments I ran.</p> |
SETI Breakthrough Listen - E.T. Signal Search | 9th place overview | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SETI Breakthrough Listen - E.T. Signal Search
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks for the reading! </p>
<p>First of all, thanks to my teammates <a href="https://www.kaggle.com/tikutiku" target="_blank">@tikutiku</a>, <a href="https://www.kaggle.com/titericz" target="_blank">@titericz</a>, <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a>, <a href="https://www.kaggle.com/cooolz" target="_blank">@cooolz</a> for this great collaboration on this interesting SETI competition. It was really nice for me to meet new Kagglers again and work together to reach 9th place. I'm impressed by the new ideas that came up every day.</p>
<p>Our solution is an ensemble of multiple models. The team tried many approaches to make relevant models and close the CV/LB gap. <a href="https://www.kaggle.com/titericz" target="_blank">@titericz</a> has found the S-signal (a.k.a. magic#1) by inspecting the predictions and we decided to try different approaches to have similar messages in train data:</p>
<ul>
<li>Different custom augmentations to make existing narrow-band messages look similar to the S-signal. The idea is that a sine deformation will turn a line into an S while all other properties (intensity, SNR, ...) remain (a toy sketch follows this list).</li>
<li>A generator to have much more control on S-signal (and create other signals too): <a href="https://www.kaggle.com/c/seti-breakthrough-listen/discussion/266805" target="_blank">https://www.kaggle.com/c/seti-breakthrough-listen/discussion/266805</a></li>
</ul>
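<p>A toy version of such a sine augmentation (parameters are illustrative only, not the values the team used), which shifts each time row along the frequency axis by a sinusoidal offset so that a straight narrow-band line becomes an S-like curve:</p>
<pre><code>import numpy as np

def sine_warp(img, max_amp=8.0, periods=1.5):
    # img: 2D spectrogram of shape (time, freq)
    t, _ = img.shape
    amp = np.random.uniform(1.0, max_amp)
    phase = np.random.uniform(0.0, 2.0 * np.pi)
    shifts = (amp * np.sin(np.linspace(0, 2 * np.pi * periods, t) + phase)).astype(int)
    out = np.empty_like(img)
    for i in range(t):
        out[i] = np.roll(img[i], shifts[i])   # circular shift of each row
    return out
</code></pre>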
<p>We quickly realized that it improved the score but it would never close the CV/LB gap. Another magic was required. We’ve tried to analyze the difference between train and test datasets and apply some preprocessings to make models learn better. We’ve trained some AE + clustered the embeddings and we discovered that many similar images/backgrounds were used for generation: <a href="https://www.kaggle.com/c/seti-breakthrough-listen/discussion/266513" target="_blank">https://www.kaggle.com/c/seti-breakthrough-listen/discussion/266513</a></p>
<p>Below is TSNE of images embeddings (train + test) after clustering, small islands have similar backgrounds.</p>
<p><img src="https://i.imgur.com/FGNKvrP.png" alt="TSNE"></p>
<p>We did not go further in this direction (even though it was promising), mainly because it looked more like reverse engineering of the generated data than machine learning.</p>
<p>At the end, we’ve ensembled 6 groups of models that scored public LB=0.801:</p>
<ul>
<li>Classification models with different backbone and input size: 512x512 for effnetb5-ns and 512x640 for effnetv2-s. Based only on [0,2,4] ON images. ShiftScaleRotate + H/V flip + Resize + Mixup augmentations. Checkpoint averaging. Pseudo labels included. 4 random TTA on inference. Public LB=0.795</li>
<li>Classification models with different preprocessings, mixup with target multiplicator, additional custom sine augmentation. Pseudo labels included. TTA on inference. Public LB=0.792.</li>
<li>Classification models with more messages from our custom generator/simulator. TTA on inference. Public LB=0.788</li>
<li>Classification models with different input sizes (640x512, 480x384) and various preprocessings. Based only on [0,2,4]. ShiftScaleRotate + H/V + Resize + Mixup + sine augmentation. WeightedRandomSampler to combat imbalance, H/V TTA on inference. Some trained on all data (old_train, old_test, new_train). Public LB=0.781</li>
<li>Single classification model (tf_efficientnetv2_s with keeping high resolution - first conv, only freq axis - and L2 normalize). Based on 2 channels [0,2,4] + [1,3,5]. Mixup (simulating faint signal) + H/V augmentation. Pseudo labels included. H/V TTA on inference. Public LB=0.779</li>
<li>Segmentation model trained with our custom generator/simulator. This one is able to extract the E.T. signal, which could be interesting in production. Public LB=0.771</li>
</ul>
<p>What did work:</p>
<ul>
<li>Different preprocessings (normalizations on different axis, <a href="https://scikit-image.org/docs/dev/api/skimage.filters.html#skimage.filters.difference_of_gaussians" target="_blank">difference_of_gaussians</a>) </li>
<li>Different image sizes, one or two channels</li>
<li>Custom sine augmentation</li>
<li>Additional simulator messages</li>
<li>Checkpoint averaging</li>
<li>Pseudo labels</li>
<li>Rank probabilities (a bit better than raw probabilities)</li>
</ul>
<p>What did not work:</p>
<ul>
<li>Shuffle the [0,2,4] channel as augmentation</li>
<li>Cutout augmentation</li>
<li>Label smoothing</li>
<li>Pseudo labels (failed only for me but might be due to calibration issue or training mess-up with test backgrounds as explained by <a href="https://www.kaggle.com/philippsinger" target="_blank">@philippsinger</a> in his solution).</li>
</ul>
<p>Thanks for reading.</p>
<p>Thank you <a href="https://www.kaggle.com/rafiko1" target="_blank">@rafiko1</a> !</p> |
SETI Breakthrough Listen - E.T. Signal Search | Simple Bronze Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SETI Breakthrough Listen - E.T. Signal Search
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Firstly, congratulations to the winners and everyone else that medaled. Thank you Kaggle and SETI for organizing this competition, it has been fun indeed. </p>
<p>I was very invested in this competition early on, but after the reset, I could not find the time to continue experimenting. I managed to squeeze out a bronze medal, which I am content with, all things considered. Here is what I did:</p>
<h3>Learned Image Resizing</h3>
<p>I released a <a href="https://www.kaggle.com/tuckerarrants/seti-learned-image-resizing" target="_blank">kernel </a> early on in this competition. After rerunning this notebook with the new data, I noticed that my CV-LB gap was much lower than others had reported. I do not really know why, but I used this notebook to generate 2/4 model predictions for my final submission. I am excited to test this 'learned image resizing' in future competitions as I think it can be fairly powerful for certain image datasets. </p>
<h3>Augmentations</h3>
<p>TTA gave me significant CV improvements. I settled for 15 TTA steps and used the same augmentations during inference that I used during training. To be precise: <code>HFlip</code>, <code>VFlip</code>, <code>ShiftScaleRotate(rotate_limit=0, shift_limit=0)</code>, <code>MotionBlur</code>, and <code>IAASharpen</code>. I also used MixUp during training. I only trained with <code>ON</code> cadence snippets to reduce training time and stacked them spatially. I tried randomly swapping the snippets before stacking them as an augmentation technique and saw marginal improvements, but did not run enough experiments to confirm this. </p>
<h3>Simple Ensembling</h3>
<p>I used 5 folds for each model stratified by the target variable. For each model, I averaged the predictions of <code>best_loss</code> and <code>best_score</code> checkpoints. For diversity, I trained different EfficientNets with different image sizes. My final submission was the harmonic mean of 4 models.</p>
<ul>
<li>B0 input_size=512, output_size=256 - CV 0.872, LB 0.761</li>
<li>B0 NS input_size=original, output_size=256 - CV 0.869, LB 0.758</li>
<li>B0 input_size=512 (no resizing network) - CV 0.882, LB 0.766</li>
<li>B2 NS input_size=512 (no resizing network) - CV 0.888, LB 0.771</li>
</ul>
<p>I used <code>timm</code> to train with different EfficientNet pre-trained weights for additional diversity. </p>
<p>My final CV with this simple averaging was 0.8971 with a public score of 0.7764 and a private score of 0.7744, just barely giving me a bronze medal. </p>
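<p>For reference, the harmonic-mean blend mentioned above is a one-liner (a sketch, assuming each array holds one model's probabilities):</p>
<pre><code>import numpy as np

def harmonic_mean_blend(pred_list, eps=1e-6):
    preds = np.clip(np.stack(pred_list), eps, 1.0)   # avoid division by zero
    return len(pred_list) / np.sum(1.0 / preds, axis=0)
</code></pre>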
<p>All models were trained on a Z by HP Z8 Workstation with a Quadro RTX 8000 </p>
<p>Thank you Chris. Wow, these images / graphs are very interesting. I assumed the resizer network was learning and applying some sort of task-specific filtering, but did not realize how drastic these effects were. How fascinating! I think there is much to be explored with this notion of 'learned image resizing', especially for medical images, where we often have high resolution images and small 'signals'. </p> |
SETI Breakthrough Listen - E.T. Signal Search | Simple solution for solo silver (and one old GTX1080Ti) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SETI Breakthrough Listen - E.T. Signal Search
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for your interest.<br>
This exactly means that I added cast and normalized train and test samples with target=1 from the datasets before competition reset. Finally I had 70012 samles in my new train set including 16012 samples with target=1.<br>
I think that together with "in batch mixup" with batch size=8 it gave higher variety of how "needles" can look to the network. My results started getting significantly better when I introduced this new train set.</p>
<p>Thanks to everybody who presented their great solutions so that others could learn from them!<br>
As I didn't notice anybody using similar simple tricks to mine, I decided to share my first humble overview.</p>
<p>I didn't use ON channels only, but I cropped images with "collars" of OFF channels' parts. The first symmetric version:</p>
<pre><code>img = np.vstack((img[0][:][:], img[1][:35][:], img[1][-35:][:], img[2][:][:], img[3][:35][:], img[3][-35:][:], img[4][:][:] ))
img = img.transpose(1, 0)
</code></pre>
<p>and the second non-symmetric version (then couldn't use HFlip):</p>
<pre><code>img = np.vstack((img[0][:][:], img[1][:35][:], img[1][-35:][:], img[2][:][:], img[3][:35][:], img[3][-35:][:], img[4][:][:], img[5][:65][:] ))
img = img.transpose(1, 0)
</code></pre>
<p>It helps the network to exclude potential false positives - a signal which continues to or from the OFF channels.<br>
As I was struggling with a limited image size (max. 512x512 for EfficientNet-B4) on my GTX, I was trying to add as few extra pixels as possible.</p>
<p>In the first part of the competition my models were suffering from a high number of false negatives (partly due to low resolution) and I was considering generating some additional positive samples. After the competition reset there was hardly any (computation) time for doing this, so I decided to simply add (cast and normalized) positive-only original train and test samples. I got an even bigger CV-LB gap, but it helped me climb the leaderboard.</p>
<p>I used pytorch, StratifiedKfold(5) and after several short experiments I trained networks with Resnet34d and EfficientNetB4 backbones, pretrained weights, with mixup (alpha=0.19 or 0.42), HFlip, VFlip, ShiftScaleRotate, MotionBlur, RBContrast.<br>
I tried some post-train ensembles but in the end my best model was single EfficientNet-B4 with non-symetric version cropping, 16 epochs (1 fold training for 15 hours), private LB 0.78191 (unfortunately not chosen for final).</p> |
SIIM-ACR Pneumothorax Segmentation | 12th place solution: UNet with mask scoring head | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ACR Pneumothorax Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for your interest!
I'm sorry I don't have it yet.
We will publish the code as soon as possible if permitted by the company!</p>
<p>Congratulations everyone!
And thank you to my teammates <a href="/masaakikob">@masaakikob</a> and <a href="/card7077">@card7077</a>!</p>
<p>I found something interesting and would like to share it. <br>
The final submission was made only from my models.</p>
<h1>1. My strategy</h1>
<ul>
<li>Predict Segmentation
<ul><li>img size 896</li></ul></li>
<li>Predict Class
<ul><li>img size 768</li></ul></li>
<li>Integrate Predicts
<ul><li>Remove low probability masks</li></ul></li>
</ul>
<h1>2. Model & Dataset Overview</h1>
<h2>2.1 Data flow</h2>
<p>MS-EUNet means "Mask Scoring UNet with EfficientNet-B4" in this write-up. <br>
TTA is horizontal flip.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1670024%2Fabc6c556d74d9c3ba141bcea3d3afc01%2F20190905_SIIM_13th_solution.png?generation=1567763118431030&alt=media" alt=""></p>
<h2>2.2 About Dataset</h2>
<table>
<thead>
<tr>
<th>Dataset</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr><td>Stage1 Train</td><td>Kaggle stage 1 train set</td></tr>
<tr><td>Stage1 Test</td><td>Kaggle stage 1 test set</td></tr>
<tr><td>Stage2 Train</td><td>Kaggle stage 2 train set (= Stage1 Train + Stage1 Test)</td></tr>
<tr><td>CheXpert</td><td>External data. Removed ambiguous labels. All images: 210k, pneumothorax images: 17k</td></tr>
<tr><td>CheXpert_1</td><td>Sampled from CheXpert, 50k, pneumothorax : not pneumothorax = 1 : 2</td></tr>
<tr><td>CheXpert_2</td><td>Sampled from CheXpert, 68k, pneumothorax : not pneumothorax = 1 : 3</td></tr>
<tr><td>DatasetSeg_1</td><td>Only pneumothorax data in "Stage1 Train", 3k</td></tr>
<tr><td>DatasetSeg_2</td><td>Only pneumothorax data in "Stage2 Train", 3k</td></tr>
<tr><td>DatasetCls_1</td><td>Stage1 Train + CheXpert_1, 67k</td></tr>
<tr><td>DatasetCls_2</td><td>Stage1 Train + CheXpert_2, 67k</td></tr>
<tr><td>DatasetCls_all</td><td>Stage1 Train + CheXpert, 220k</td></tr>
</tbody>
</table>
<h2>2.3 Which model was trained on which dataset</h2>
<table>
<thead>
<tr>
<th>Model</th>
<th>Used Dataset</th>
</tr>
</thead>
<tbody>
<tr><td>SEResNext (Model1)</td><td>DatasetCls_1</td></tr>
<tr><td>SEResNext (Model2)</td><td>DatasetCls_2</td></tr>
<tr><td>LightGBM</td><td>DatasetCls_all</td></tr>
<tr><td>MS-EUNet</td><td>DatasetSeg_2</td></tr>
<tr><td>MS-EUNet (with ASPP)</td><td>DatasetSeg_1</td></tr>
</tbody>
</table>
<h1>3. Segmentation</h1>
<h2>3.1 Mask Scoring Head(little different from original)</h2>
<ul>
<li>This is based on Mask Scoring R-CNN (MS R-CNN)
<ul><li><a href="http://openaccess.thecvf.com/content_CVPR_2019/papers/Huang_Mask_Scoring_R-CNN_CVPR_2019_paper.pdf">http://openaccess.thecvf.com/content_CVPR_2019/papers/Huang_Mask_Scoring_R-CNN_CVPR_2019_paper.pdf</a></li></ul></li>
<li>I was interested in applying this idea to the UNet architecture</li>
<li>I repeated the experiment. As a result, it turned out to be useful for my model</li>
</ul>
<pre><code>class UEfficientNetB4WithMaskScoreHead(nn.Module):
    def __init__(self, dropout_rate=0.3, input_size=256):
        super(UEfficientNetB4WithMaskScoreHead, self).__init__()
        #############
        # ~~~~~~~~~ #
        #############
        kwargs = {'mask_ch': 1, 'num_classes': 1, 'mask_feature_size': 32}  # 32 is better than 56
        self.mask_iou_head = ROIMaskScoreHead(in_feature_ch=in_feature_ch, **kwargs)

    def forward(self, x):
        #############
        # ~~~~~~~~~ #
        #############
        deconv1 = self.decoder2(upconv2)
        upconv1 = torch.cat([deconv1, conv1], 1)  # size: input_size//2
        upconv1 = self.dropout(upconv1)
        deconv0 = self.decoder0(upconv1)
        upconv0 = self.dropout0(deconv0)
        output_mask = self.output_layer(upconv0)
        # Mask score calc
        output_score = self.mask_iou_head(upconv1, output_mask)
        return output_mask, output_score
</code></pre>
<h2>3.2 Loss function for Mask Scoring UNet</h2>
<pre><code>class BCEDiceLoss(nn.Module):
    """
    Loss defined as alpha * BCELoss - (1 - alpha) * DiceLoss
    """
    def __init__(self, alpha=0.5):
        super(BCEDiceLoss, self).__init__()
        self.bce_loss = nn.BCEWithLogitsLoss()
        self.dice_loss = DiceLoss()
        self.alpha = alpha

    def forward(self, logits, targets):
        bce_loss = self.bce_loss(logits, targets)
        dice_loss = self.dice_loss(logits, targets)
        loss = self.alpha * bce_loss + (1. - self.alpha) * dice_loss
        return loss


class DiceScoreL2Loss(nn.Module):
    """
    Loss for mask scoring UNet

    Caution: This loss is for 1 class !!!!!
    """
    def __init__(self):
        super(DiceScoreL2Loss, self).__init__()
        self.smooth = 1e-8
        self.sum_dim = (2, 3)

    def forward(self, dice_logits, logits, masks_gt):
        dice_preds = torch.sigmoid(dice_logits)
        # BATCH x 1 x H x W
        masks_preds = torch.sigmoid(logits)
        # BATCH x 1
        intersect = (masks_preds * masks_gt).sum(self.sum_dim).float()
        union = (masks_preds.sum(self.sum_dim) + masks_gt.sum(self.sum_dim))
        # Calc dice (BATCH x 1)
        dice_gt = (2. * intersect + self.smooth) / (union + self.smooth)
        # Calc L2 loss
        cond = torch.abs(dice_gt - dice_preds)
        loss = 0.5 * cond ** 2
        return loss.mean()


loss_func1 = BCEDiceLoss(alpha=0.7)
loss_func2 = DiceScoreL2Loss()
loss = loss_func1(logits, masks) + 0.2 * loss_func2(logits_score, logits, masks)
</code></pre>
<h2>3.3 Training params</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1670024%2F9718ce185aad18cc6c69f027d1f5e4c6%2F2019-09-05%2015.04.40.png?generation=1567763379014753&alt=media" alt=""></p>
<h2>3.4 Ablation Study</h2>
<ul>
<li>Dataset: Stage2 Train set(Stage1 Train + Stage1 Test)</li>
<li>Only pneumothorax data.</li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1670024%2Fdd40b74d16c4c6d5b0cbeabcb1765f8d%2F2019-09-06%2018.10.57.png?generation=1567763417111811&alt=media" alt=""></p>
<h1>4. Classification</h1>
<h2>4.1 Train 2 binary classifiers (seresnext101) on DatasetCls_1 and DatasetCls_2</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1670024%2Fc84aa628cf79664ac9dd4febc7e41d57%2F2019-09-06%2014.01.34.png?generation=1567763240265633&alt=media" alt=""></p>
<p><code>python
optimizer = optim.Adam(model.parameters(), lr=args.init_lr)
milestones = [int(args.num_epochs * r) for r in [0.6, 0.7, 0.8, 0.9]]
scheduler = lr_scheduler.MultiStepLR(
optimizer, milestones, gamma=0.5, last_epoch=-1)
</code></p>
<h2>4.2 Get Embeddings with TTA</h2>
<p>Concatenating all embeddings gives 8,192-dimensional features.</p>
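<p>A minimal sketch of how such TTA embeddings could be extracted and concatenated (the <code>get_features</code> call is a placeholder for whatever returns the pooled penultimate features of each backbone; this is not the exact training code):</p>
<pre><code>import torch

@torch.no_grad()
def tta_embeddings(models, images):
    """Concatenate penultimate-layer embeddings of every model for the image and its hflip."""
    feats = []
    for model in models:
        model.eval()
        for batch in (images, torch.flip(images, dims=[3])):  # original + horizontal-flip TTA
            feats.append(model.get_features(batch))           # placeholder feature extractor
    return torch.cat(feats, dim=1)
</code></pre>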
<h2>4.3 Add dicom info</h2>
<ul>
<li>Sex: 0 or 1</li>
<li>Age: No change</li>
<li>ViewPoint: 0 or 1</li>
</ul>
<p>This yields 8,195-dimensional features for each image in DatasetCls_all.</p>
<h2>4.4 Train binary classifier(LightGBM) with optuna</h2>
<h1>Last</h1>
<p>P.S. We're hiring!: <a href="https://lpixel.net/en/careers/">https://lpixel.net/en/careers/</a></p>
<p>Hi <a href="/ivanzhovannik">@ivanzhovannik</a> !</p>
<p>Hmm... this is interesting.
When I first added the DICOM info, the accuracy was <strong>almost unchanged</strong>.
But checking the feature importance of the LightGBM model, Age and ViewPoint had little importance (not high). <br>
Still, I had no reason to drop the DICOM info.</p>
<p>This is an example of the feature importance of a trained lgbm model.</p>
<p>```
features importance rank
7835 c_3736_y 144 0
7043 c_2944_y 97 1
4995 c_896_y 82 2
5787 c_1688_y 81 3
5667 c_1568_y 72 4
1317 c_1314_x 64 5
4898 c_799_y 44 6
4811 c_712_y 40 7
6918 c_2819_y 39 8
7548 c_3449_y 39 9</p>
<p>features importance rank
1 Age 8 192
2 ViewPoint 1 2844
0 Sex 0 5015
```</p>
<h3>1. Q: Could you please elaborate more on your LightGBM (which hyperparameters did you tune)?</h3>
<p>I tuned the params below:</p>
<ul>
<li>'boosting_type': ['gbdt', 'dart', 'goss']</li>
<li>'num_leaves': 10 ~ 1000</li>
<li>'learning_rate': 1e-5 ~ 1.0</li>
<li>if 'boosting_type' == 'dart':
<ul><li>'drop_rate': 1e-8 ~ 1.0</li>
<li>'skip_drop': 1e-8 ~ 1.0</li></ul></li>
<li>if 'boosting_type' == 'goss':
<ul><li>'top_rate': 0.0 ~ 1.0</li></ul></li>
</ul>
<p>The other fixed params are below (a sketch of the search is given after this list).</p>
<ul>
<li>'objective': 'binary'</li>
<li>'metric': 'binary_logloss'</li>
</ul>
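<p>Putting the search space above together, a minimal sketch of such an Optuna study (the data variables <code>X_train</code>, <code>y_train</code>, <code>X_valid</code>, <code>y_valid</code> are placeholders for the concatenated feature matrices; this is not the actual tuning script):</p>
<pre><code>import lightgbm as lgb
import optuna
from sklearn.metrics import log_loss

# placeholders: X_train/y_train/X_valid/y_valid hold the 8,195-dimensional features and labels
dtrain = lgb.Dataset(X_train, label=y_train)

def objective(trial):
    params = {
        'objective': 'binary',
        'metric': 'binary_logloss',
        'boosting_type': trial.suggest_categorical('boosting_type', ['gbdt', 'dart', 'goss']),
        'num_leaves': trial.suggest_int('num_leaves', 10, 1000),
        'learning_rate': trial.suggest_float('learning_rate', 1e-5, 1.0, log=True),
    }
    if params['boosting_type'] == 'dart':
        params['drop_rate'] = trial.suggest_float('drop_rate', 1e-8, 1.0, log=True)
        params['skip_drop'] = trial.suggest_float('skip_drop', 1e-8, 1.0, log=True)
    elif params['boosting_type'] == 'goss':
        params['top_rate'] = trial.suggest_float('top_rate', 0.0, 1.0)
    booster = lgb.train(params, dtrain, num_boost_round=500)
    return log_loss(y_valid, booster.predict(X_valid))

study = optuna.create_study(direction='minimize')
study.optimize(objective, n_trials=100)
</code></pre>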
<h3>2. Q: Did DICOM improve the ensemble score?</h3>
<p>As above, it didn't change that much.</p>
<h3>3. Q: Which probability threshold did you use to cut the false positives?</h3>
<p>The initial threshold was 0.9, but it became lower as more data was added (CheXpert) and more features were concatenated.
My final threshold is 0.55 or 0.60 (and the Private LB did not change between these two).</p>
<p>I hope this is helpful!</p>
SIIM-ACR Pneumothorax Segmentation | 19th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ACR Pneumothorax Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>from fastai layers.py</p>
<pre><code>class SelfAttention(Module):
    "Self attention layer for nd."
    def __init__(self, n_channels:int):
        self.query = conv1d(n_channels, n_channels//8)
        self.key = conv1d(n_channels, n_channels//8)
        self.value = conv1d(n_channels, n_channels)
        self.gamma = nn.Parameter(tensor([0.]))

    def forward(self, x):
        # Notation from https://arxiv.org/pdf/1805.08318.pdf
        size = x.size()
        x = x.view(*size[:2],-1)
        f,g,h = self.query(x),self.key(x),self.value(x)
        beta = F.softmax(torch.bmm(f.permute(0,2,1).contiguous(), g), dim=1)
        o = self.gamma * torch.bmm(h, beta) + x
        return o.view(*size).contiguous()
</code></pre>
<p>First of all, congratulations to the winners.
This 2-month-long journey felt like an eternity
because toward the end I had no improvement for quite a long time,
but fortunately my wonderful team member <a href="/ildoonet">@ildoonet</a> joined
at the end and shed some light with his hypercolumns U-Net with EfficientNet-B7 backbone.</p>
<p>Architecture:</p>
<p>We used 5 hypercolumns U-Net models, with ResNet34 and EfficientNet-B7 (by <a href="/ildoonet">@ildoonet</a>) backbones.</p>
<p>Model 1: </p>
<p>4 folds of hypercolumns U-Net ResNet34 with BCE+dice.</p>
<p>I found that BCE+dice loss surpasses BCE loss in terms of convergence speed and dice score.
Although it may seem extreme, we used 5-phase CyclicLR-scheduled training,
reduced the last layer's max learning rate at each phase (lr/2, lr/2, lr/4, lr/8, lr/16),
and took the best dice-score model as an ensemble candidate to minimize the uncertainty coming from the updated stage 2 dataset.</p>
<p>The best dice-score CV results for stage 2 retraining were (0.8560 / 0.84519 / 0.8367 / 0.8536).</p>
<p>model 2:</p>
<p>4 folds of same configuration with self attention</p>
<p>The self-attention model got lower dice scores (0.842 / 0.8535 / 0.8504 / 0.8412), but
there was an improvement when ensembling the self-attention model in stage 1.</p>
<p>model 3, 4, 5:</p>
<p>Models 3, 4 and 5 are 4-fold models by <a href="/ildoonet">@ildoonet</a>: hypercolumns U-Net models with an EfficientNet-B7 backbone.
They reached their full performance faster compared with models 1 and 2, so we used 3-phase CyclicLR-scheduled training.
Models 3 and 4 have different learning-rate configurations, 8e-3 and 1.6e-2.
The best dice-score CV values were (0.8452, 0.8481, 0.8445, 0.8550) for model 3 and (-, 0.8482, -, 0.8417) for model 4 (some logs were lost).</p>
<p>Model 5 was trained with <a href="/ildoonet">@ildoonet</a>'s own configuration; I believe he will explain it in his own write-up,
but as a single model it scored 0.8536 on the private LB.</p>
<p>TTA :</p>
<p>We used only Horizontal Flip.</p>
<p>Postprocessing:</p>
<p>We removed noise using the mean predicted probability of each connected component ("cell").</p>
<p>First, with a fixed probability threshold (0.21), we define each cell by its connectivity.
Then a cell whose mean probability is over 0.35 remains as pneumothorax, and one below this probability is discarded.</p>
<p>In stage 1, this boosted my model 1's 4-fold ensemble (mean dice score 0.84515, Phase 4) to a stage 1 LB of 0.8729,
compared to a similar dice score (mean 0.84485, Phase 3) with simple thresholded noise removal at a stage 1 LB of 0.8658.
(Unfortunately I lost the direct comparison when the LB was reset for stage 2.)</p>
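<p>A minimal sketch of this component-level filtering (function names and the use of scipy.ndimage are assumptions for illustration, not the exact implementation):</p>
<pre><code>import numpy as np
from scipy import ndimage

def filter_components(prob_map, pixel_thr=0.21, mean_prob_thr=0.35):
    """Keep only connected components whose mean predicted probability is high enough."""
    binary = prob_map &gt; pixel_thr                    # binarize with the fixed pixel threshold
    labels, n_components = ndimage.label(binary)     # connected components ("cells")
    mask = np.zeros_like(binary, dtype=np.uint8)
    for i in range(1, n_components + 1):
        component = labels == i
        if prob_map[component].mean() &gt; mean_prob_thr:   # keep high-confidence cells only
            mask[component] = 1
    return mask
</code></pre>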
<p>Scores :
0.8568(Stage2), 0.8763(Stage1)</p> |
SIIM-ACR Pneumothorax Segmentation | 1st place solution (with code and configs) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ACR Pneumothorax Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>When you use modern libraries like albumentations, your transforms apply to image and mask together</p>
<p>For me it's a scheduler that can work for a long time without converging)</p>
<p>No, it was something like
<code>python
df['fold_id'] = np.random.permutation(df.shape[0]) % 5
</code></p>
<p>Video with short explanation: <a href="https://youtu.be/Wuf0wE3Mrxg">https://youtu.be/Wuf0wE3Mrxg</a></p>
<p>Presentation with short explanation: <a href="https://yadi.sk/i/oDYnpvMhqi8a7w">https://yadi.sk/i/oDYnpvMhqi8a7w</a></p>
<p>For training I used only BCE, focal and dice)
Loss.py was inherited from <a href="https://github.com/SpaceNetChallenge/SpaceNet_Off_Nadir_Solutions/tree/master/selim_sef/zoo">[selim_sef SpaceNet 4]</a></p>
<p><a href="/gaoyangliu">@gaoyangliu</a> Something like that:</p>
<pre><code>import albumentations as albu

train_transform = albu.Compose([
    albu.HorizontalFlip(),
    albu.OneOf([
        albu.RandomContrast(),
        albu.RandomGamma(),
        albu.RandomBrightness(),
    ], p=0.3),
    albu.OneOf([
        albu.ElasticTransform(alpha=120, sigma=120 * 0.05, alpha_affine=120 * 0.03),
        albu.GridDistortion(),
        albu.OpticalDistortion(distort_limit=2, shift_limit=0.5),
    ], p=0.3),
    albu.ShiftScaleRotate(),
    albu.Resize(512, 512, always_apply=True),
    ToTensor()
])
albu.save(train_transform, 'train_transforms_complex_512.json')
</code></pre>
<p>I will try to add a script for creating all the transforms configs soon)</p>
<p>A lot of experiments and observations))</p>
<p>When I decided to use triplets, I tried even more values.</p>
<p>But the main idea of triplets is that top_score_threshold should be much larger than bottom_score_threshold.</p>
<p>Otherwise, this solution is not much different from the classical duplets with score_threshold and area_threshold.</p>
<p>Hi! All you need to do is replace the 1024-size transform files (for TRAIN_TRANSFORMS and VALID_TRANSFORMS) with the corresponding 512-size files and start training)
<code>yaml
TRAIN_TRANSFORMS:
transforms/train_transforms_complex_512_old.json
#transforms/train_transforms_complex_1024_old.json
VALID_TRANSFORMS:
transforms/valid_transforms_512_old.json #transforms/valid_transforms_1024_old.json
</code></p>
<p>After training, you can specify the folder with the experiment as PRETRAINED.
Moreover, if you use my version of the albunet model, you can set freeze_encoder=True for part0 training (and freeze_encoder=False for the other parts).</p>
<p>For example, if you have already trained on size 512, you can change the model config for size 1024 as follows:</p>
<pre><code>MODEL:
  PRETRAINED:
    PIPELINE_PATH: experiments/albunet_512
    CHECKPOINTS_FOLDER: checkpoints
    PIPELINE_NAME: albunet_512
  FREEZE: False
  PY: models.ternausnets
  CLASS: AlbuNet
  ARGS:
    pretrained: False
    freeze_encoder: True
</code></pre>
<p>First of all, thanks to the Society for Imaging Informatics in Medicine (SIIM) and Kaggle for a very interesting and challenging competition. It was a pleasure to participate.</p>
<p>This was my first experience with image segmentation, and if someone had told me beforehand how it would all end, I would have considered them crazy)</p>
<p>My pipeline: <a href="https://github.com/sneddy/kaggle-pneumathorax">https://github.com/sneddy/kaggle-pneumathorax</a></p>
<h2>Model Zoo</h2>
<ul>
<li>AlbuNet (resnet34) from <a href="https://github.com/ternaus/TernausNet">[ternausnets]</a></li>
<li>Resnet50 from <a href="https://github.com/SpaceNetChallenge/SpaceNet_Off_Nadir_Solutions/tree/master/selim_sef/zoo">[selim_sef SpaceNet 4]</a></li>
<li>SCSEUnet (seresnext50) from [<a href="https://github.com/SpaceNetChallenge/SpaceNet_Off_Nadir_Solutions/tree/master/selim_sef/zoo">selim_sef SpaceNet 4]</a></li>
</ul>
<h2>Main Features</h2>
<h3>Triplet scheme of inference and validation</h3>
<p>Let our segmentation model output some mask with probabilities of pneumothorax pixels. I'm going to name this mask as a basic sigmoid mask. I used triplet of different thresholds: <em>(top_score_threshold, min_contour_area, bottom_score_threshold)</em></p>
<p>The decision rule is based on a doublet <em>(top_score_threshold, min_contour_area)</em>. I used it instead of using the classification of pneumothorax/non-pneumothorax.
- <em>top_score_threshold</em> is a simple binarization threshold that transforms the basic sigmoid mask into a discrete mask of zeros and ones.
- <em>min_contour_area</em> is the minimum required number of pixels with a value greater than top_score_threshold</p>
<p>Those images that didn't pass this doublet of thresholds were counted non-pneumothorax images.</p>
<p>For the remaining pneumothorax images, we binarize basic sigmoid mask using <em>bottom_score_threshold</em> (another binariztion threshold, less then <em>top_score_threshold</em>). You may notice that most participants used the same scheme under the assumption that <em>bottom_score_threshold = top_score_threshold</em>.</p>
<p>The simplified version of this scheme:
<code>python
classification_mask = predicted &gt; top_score_threshold
mask = predicted.copy()
mask[classification_mask.sum(axis=(1,2,3)) &lt; min_contour_area, :,:,:] = np.zeros_like(predicted[0])
mask = mask &gt; bot_score_threshold
return mask
</code></p>
<h3>Search best triplet thresholds during validation</h3>
<ul>
<li>Best triplet on validation: (0.75, 2000, 0.3).</li>
<li>Best triplet on Public Leaderboard: (0.7, 600, 0.3)</li>
</ul>
<p>For my final submissions I chose something between these triplets.</p>
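<p>A minimal sketch of what such a triplet search could look like (the grid values are illustrative and <code>dice_fn</code> is a placeholder for the competition metric; the real search lives in the pipeline linked above):</p>
<pre><code>import itertools
import numpy as np

def triplet_predict(probs, top_thr, min_area, bot_thr):
    """Apply the triplet rule to a single sigmoid mask."""
    if (probs &gt; top_thr).sum() &lt; min_area:
        return np.zeros_like(probs, dtype=np.uint8)   # counted as non-pneumothorax
    return (probs &gt; bot_thr).astype(np.uint8)

def search_triplet(prob_masks, gt_masks, dice_fn):
    """Brute-force grid search of (top_thr, min_area, bot_thr) maximizing mean dice."""
    grid = itertools.product(np.arange(0.6, 0.81, 0.05),
                             [600, 1000, 2000, 3000],
                             np.arange(0.2, 0.41, 0.05))
    best_triplet, best_score = None, -1.0
    for top_thr, min_area, bot_thr in grid:
        preds = [triplet_predict(p, top_thr, min_area, bot_thr) for p in prob_masks]
        score = float(np.mean([dice_fn(p, g) for p, g in zip(preds, gt_masks)]))
        if score &gt; best_score:
            best_triplet, best_score = (top_thr, min_area, bot_thr), score
    return best_triplet, best_score
</code></pre>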
<h3>Combo loss</h3>
<p>Used [<a href="https://github.com/SpaceNetChallenge/SpaceNet_Off_Nadir_Solutions/blob/master/selim_sef/training/losses.py">combo loss]</a> combinations of BCE, dice and focal. In the best experiments the weights of (BCE, dice, focal), that I used were:
- (3,1,4) for albunet_valid and seunet;
- (1,1,1) for albunet_public;
- (2,1,2) for resnet50.</p>
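<p>A minimal sketch of such a weighted BCE + dice + focal combination (a simplified re-implementation for illustration, not the selim_sef loss code linked above):</p>
<pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class ComboLoss(nn.Module):
    """Weighted sum of BCE, soft dice and focal terms, e.g. weights (3, 1, 4)."""
    def __init__(self, w_bce=3.0, w_dice=1.0, w_focal=4.0, gamma=2.0, eps=1e-7):
        super().__init__()
        self.w_bce, self.w_dice, self.w_focal = w_bce, w_dice, w_focal
        self.gamma, self.eps = gamma, eps

    def forward(self, logits, targets):
        bce = F.binary_cross_entropy_with_logits(logits, targets, reduction='none')
        probs = torch.sigmoid(logits)
        # soft dice over the whole batch
        intersection = (probs * targets).sum()
        dice = 1.0 - (2.0 * intersection + self.eps) / (probs.sum() + targets.sum() + self.eps)
        # focal term: down-weight easy pixels
        focal = ((1.0 - torch.exp(-bce)) ** self.gamma * bce).mean()
        return self.w_bce * bce.mean() + self.w_dice * dice + self.w_focal * focal
</code></pre>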
<h3>Sliding sample rate</h3>
<p>Let's call the portion of pneumothorax images the sample rate.</p>
<p>The main idea is to control this portion using the sampler of the torch dataset.</p>
<p>On each epoch, my sampler takes all pneumothorax images from the dataset and samples some non-pneumothorax images according to this sample rate. During the training process, we reduce this parameter from 0.8 at the start to 0.4 at the end.</p>
<p>Large sample rate at the beginning provides a quick start of the learning process, whereas a small sample rate at the end provides better convergence of neural network weights to the initial distribution of pneumothorax/non-pneumothorax images.</p>
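<p>A minimal sketch of such an epoch-level sampler (class and variable names are illustrative; the actual implementation lives in the pipeline linked above):</p>
<pre><code>import numpy as np
from torch.utils.data import Sampler

class SlidingRateSampler(Sampler):
    """Yields all pneumothorax indices plus enough negatives for a target positive fraction."""
    def __init__(self, positive_idx, negative_idx, sample_rate=0.8):
        self.positive_idx = np.asarray(positive_idx)
        self.negative_idx = np.asarray(negative_idx)
        self.sample_rate = sample_rate   # updated from the training loop (0.8 at start, 0.4 at the end)

    def _n_negative(self):
        n = int(len(self.positive_idx) * (1.0 - self.sample_rate) / self.sample_rate)
        return min(n, len(self.negative_idx))

    def __iter__(self):
        negatives = np.random.choice(self.negative_idx, size=self._n_negative(), replace=False)
        epoch_idx = np.concatenate([self.positive_idx, negatives])
        np.random.shuffle(epoch_idx)
        return iter(epoch_idx.tolist())

    def __len__(self):
        return len(self.positive_idx) + self._n_negative()
</code></pre>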
<h3>Learning Process recipes</h3>
<p>I can't provide a fully reproducible solution because during the learning process I uptrained my models <strong>A LOT</strong>. But looking back to formalize my experiments I can highlight 4 different parts:
- <strong>part 0</strong> - train for 10-12 epoches from pretrained model with large learning rate (about 1e-3 or 1e-4), large sample rate (0.8) and ReduceLROnPlateau scheduler. The model can be pretrained on imagenet or on our dataset with lower resolution (512x512). The goal of this part: quickly get a good enough model with validation score about 0.835.
- <strong>part 1</strong> - uptrain the best model from the previous step with normal learning rate (~1e-5), large sample rate (0.6) and CosineAnnealingLR or CosineAnnealingWarmRestarts scheduler. Repeat until best convergence.
- <strong>part 2</strong> - uptrain the best model from the previous step with normal learning rate (~1e-5), small sample rate (0.4) and CosineAnnealingLR or CosineAnnealingWarmRestarts scheduler. Repeat until best convergence.
- <strong>second stage</strong> - simple uptrain with relatively small learning rate(1e-5 or 1e-6), small sample rate (0.5) and CosineAnnealingLR or CosineAnnealingWarmRestarts scheduler.</p>
<p>All these parts are presented in the corresponding experiment folder</p>
<h3>Augmentations</h3>
<p>Used following transforms from [<a href="https://github.com/albu/albumentations">albumentations]</a>
<code>python
albu.Compose([
albu.HorizontalFlip(),
albu.OneOf([
albu.RandomContrast(),
albu.RandomGamma(),
albu.RandomBrightness(),
], p=0.3),
albu.OneOf([
albu.ElasticTransform(alpha=120, sigma=120 * 0.05, alpha_affine=120 * 0.03),
albu.GridDistortion(),
albu.OpticalDistortion(distort_limit=2, shift_limit=0.5),
], p=0.3),
albu.ShiftScaleRotate(),
albu.Resize(img_size,img_size,always_apply=True),
])
</code></p>
<h3>Uptrain from lower resolution</h3>
<p>All experiments (except resnet50) were uptrained at size 1024x1024 after 512x512, with a frozen encoder in the early epochs.</p>
<h3>Second stage uptrain</h3>
<p>All chosen experiments were uptrained on second stage data</p>
<h3>Small batch size without accumulation</h3>
<p>A batch size of 2-4 pictures is enough and all my experiments were run on one (sometimes two) 1080-Ti.</p>
<h3>Checkpoints averaging</h3>
<p>Top3 checkpoints averaging from each fold from each pipeline on inference</p>
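<p>A minimal sketch of this kind of checkpoint weight averaging (for illustration only; integer buffers are cast back to their original dtype):</p>
<pre><code>import torch

def average_checkpoints(paths):
    """Arithmetic mean of the weights stored in several checkpoint files."""
    avg = None
    for path in paths:
        state = torch.load(path, map_location='cpu')
        if avg is None:
            avg = {k: v.clone().float() for k, v in state.items()}
        else:
            for k, v in state.items():
                avg[k] += v.float()
    n = float(len(paths))
    # cast back to the original dtype (e.g. integer buffers like num_batches_tracked)
    return {k: (avg[k] / n).to(state[k].dtype) for k in avg}
</code></pre>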
<h3>Horizontal flip TTA</h3>
<h2>Best experiments:</h2>
<ul>
<li>albunet_public - best model for Public Leaderboard</li>
<li>albunet_valid - best resnet34 model on validation</li>
<li>seunet - best seresnext50 model on validation</li>
<li>resnet50 - best resnet50 model on validation</li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F536790%2F583cf98d504a9726cca2da9472a5d718%2Fdashboard.png?generation=1567837634005064&alt=media" alt="Experiments dashboard"></p>
<h2>Final Submission</h2>
<p>My best model on the Public Leaderboard was albunet_public (public LB: 0.8871), and the scores of all ensembled models were worse.
But I suspected this model was overfitting, therefore both final submissions were ensembles.</p>
<ul>
<li>The first ensemble believed in Public Leaderboard scores more and used more "weak" triplet thresholds.</li>
<li>The second ensemble believed in the validation scores more but used more "strict" triplet thresholds.</li>
</ul>
<h3>Private Leaderboard:</h3>
<ul>
<li>0.8679</li>
<li>0.8641</li>
</ul>
<p>I suspect that the best solution would have been an ensemble that believed the validation scores more but used the more "weak" triplet thresholds.</p>
<p>I don't divide it, because at this step we try to classify non-pneumothorax/pneumothorax images.
The main goal is to pick images, not objects)</p>
<p>But I did use this trick with object filtration in submission preparation. I used a very careful threshold (100 pixels), and I don't know how important this step was.</p>
<p>In the beginning, I trained using only the 1-1-1 scheme, and this way I got my best public score.</p>
<p>I noticed that in later epochs, the dice loss is about 10 times higher than the other terms.</p>
<p>To balance them I decided to use a 3-1-4 scheme, and it got me the best validation score.</p>
<p>As a compromise I chose the 2-1-2 scheme for resnet50)</p>
<p>One 1080ti is enough to train one experiment. My validation scheme demands a lot of RAM (I have 64 GB), but I want to fix that soon.</p>
SIIM-ACR Pneumothorax Segmentation | 2nd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ACR Pneumothorax Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>The input data of the two parts were different: the cls part was trained on all of the data, and the seg part was trained on data with pneumothorax only.
Maybe mixing the losses is better; I will try it later. :)</p>
<p>Congrats to all winners. Many thanks to Kaggle and the organizers for holding this amazing competition. And special thanks to my teammates.</p>
<p>Our solution has two parts, classification and segmentation.</p>
<p><strong>Classification:</strong>
This part is used to classify whether an image is related to pneumothorax or not.
Our model is a multi-task model based on U-Net with a branch for classification (a sketch is given after this block).
Data: all data
Cls loss: BCE + focal loss
Seg loss: BCE
Augmentation: hflip, scale, rotate, bright, blur
Backbone: seresnext 50, seresnext101, efficientnet-b3
Ensemble: stacking</p>
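<p>A minimal sketch of what such a multi-task model could look like (the <code>encoder</code>/<code>decoder</code> objects and the channel count are placeholders, not the actual code):</p>
<pre><code>import torch.nn as nn

class UNetWithClsBranch(nn.Module):
    """U-Net style model with an extra image-level classification branch on the encoder."""
    def __init__(self, encoder, decoder, encoder_channels=2048):
        super().__init__()
        self.encoder = encoder            # e.g. an se_resnext feature extractor
        self.decoder = decoder            # produces the pixel-wise segmentation logits
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(encoder_channels, 1))

    def forward(self, x):
        features = self.encoder(x)                 # list of feature maps, deepest last
        seg_logits = self.decoder(features)        # segmentation output
        cls_logits = self.cls_head(features[-1])   # image-level pneumothorax logit
        return seg_logits, cls_logits
</code></pre>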
<p><strong>Segmentation:</strong>
There are two models used to segment, unet and deeplabv3.
Data: data with pneumothorax
Loss: dice loss
Augmentation: same as classification
Backbone: seresnext50, seresnext101, efficientnet-b3, efficientnet-b5
Ensemble: average</p>
<p>I’m cleaning source code, and will update it soon.</p>
<p>update
code: <a href="https://github.com/yelanlan/Pneumothorax-Segmentation-2nd-place-solution">https://github.com/yelanlan/Pneumothorax-Segmentation-2nd-place-solution</a></p>
<p>Thank you.</p>
<p>Thank you. Wait your GM title😏 </p>
<p>Thank you. I will update code soon.</p>
<p>Thank you. I will update code soon.</p>
<p>Thank you!</p>
<p>As for classification: seresnext50 > seresnext101 > efficientnet-b3.
As for segmentation: seresnext101 > efficientnet-b5 > seresnext50 > efficientnet-b3.</p>
<p>It works now.</p>
<p>I'm sorry, it works now.</p>
<p>Thank you. I calculate the mask loss for all images. You can refer to my code for more information.</p>
SIIM-ACR Pneumothorax Segmentation | 31st place solution - single fold 512x512 UNet EfficientNetB4 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ACR Pneumothorax Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to all of the winners and participants.</p>
<p>Here is my solution:</p>
<p><strong>Preprocessing</strong>
Histogram equalization (CLAHE) with kernel size 32x32 on original image size 1024x1024 - (this kernel size gives more details than bigger sizes).</p>
<p><strong>Model training</strong>
- UNet with EfficientNetB4 encoder
- Single stratified fold. Train - 83.34%, validation 16.66%.
- Adam
- BCE + Dice loss
- 100 epochs
- not aggressive augmentations with HFlip and low brightness changes (to keep histogram eq. work)
- Input image size 512x512, Batch size: 4
- Gradient accumulation - 4 batches
- LR: Cosine annealing - 4 cycles, from 1e-3 to 1e-6
- Stochastic Weight Averaging - 4 epochs on the lowest LR</p>
<p><strong>Prediction</strong>
- HFlip TTA.
- Two-step thresholding:
1. Find the threshold which fits the classification task (reduces false positives); every time it is somewhere between 0.95-0.99.
2. Find the threshold which fits the segmentation task - usually between 0.01-0.50</p>
<p>So a high-value threshold is applied to the predicted sigmoid map (after HFlip TTA). All masks with less than 2% detected pixels are marked as empty masks. If there are more than 2% detected pixels, I apply the 2nd, lower threshold to produce the final mask.</p>
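<p>A minimal sketch of this two-step rule (the concrete threshold values are illustrative picks from the ranges above):</p>
<pre><code>import numpy as np

def two_step_threshold(prob_map, cls_thr=0.97, seg_thr=0.3, min_fraction=0.02):
    """First decide whether the image has pneumothorax, then build the mask with a lower threshold."""
    detected_fraction = (prob_map &gt; cls_thr).mean()      # share of confidently positive pixels
    if detected_fraction &lt; min_fraction:
        return np.zeros_like(prob_map, dtype=np.uint8)   # treated as an empty mask
    return (prob_map &gt; seg_thr).astype(np.uint8)
</code></pre>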
<p>Thanks. </p>
<p>My model also reaches almost its best values at epochs 15-25. But that was only for the dice metric on the whole val split, without threshold search (the one built into Qubvel's segmentation models with threshold 0.5).</p>
<p>But on each epoch I also measured 2 metrics with a threshold search: the 1st is the same dice on the whole split but with the threshold search, and the 2nd is dice only on the marked images, also with the threshold search.
The first metric at epochs 15-25 did not reach its best values, but was near the top - 1-1.5% lower than the best values at the end.
The second one reached bad results at epochs 15-25, 4-5% lower than the best. This metric reached its best values only in the 3rd and 4th annealing cycles (50-100 epochs).</p>
<p>By the way, these two metrics with thresholds gave me the insight that I should use two thresholds for predictions. Their best thresholds were opposite, 0.99 and 0.01.</p>
SIIM-ACR Pneumothorax Segmentation | 33 place solo model solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ACR Pneumothorax Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hey, nice question. Yes we just averaged each fold's predictions</p>
<p>No, we train on our server with 4x TeslaV100.</p>
<p>We have server with 4x TeslaV100. Yes, i think we have really chance to advanced our score</p>
<p>Yes, U-Net</p>
<p>dpn92 - 0.8602
resnet34 - 0.8338</p>
<p>Congratulations to everyone on the end of the competition, and congratulations to the winners.</p>
<p>We tried different architectures, starting with resnet18, resnet34, vgg16, seresnext50, dpn92 and densenet169.</p>
<p>The best results were shown by dpn92 and densenet169. As the final solution, we chose densenet169 based on its better LB score. dpn92 lagged just a little bit behind, but its weights were very large (approximately 500MB per fold), so we did not risk loading them.</p>
<p><strong>We did not retrain the model in the second stage on the new train.</strong></p>
<p><strong>Details of the final decision:</strong></p>
<p>Trained on 5 folds of 50 epochs 3 times.</p>
<ol>
<li>IMAGE_SIZE - 1024</li>
<li>Used a sampler: 60% with masks.</li>
<li>Loss: BCE + DICE + Focal.</li>
<li>OPTIMIZER: Adam, LR = 0.0001</li>
<li>SCHEDULER: CosineAnnealingLR</li>
<li>BATCH_SIZE: 4</li>
<li>GRADIENT CLIPPING: 1 (the optimizer/scheduler/clipping setup is sketched after this list)</li>
</ol>
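<p>A minimal sketch of how the optimizer, scheduler and gradient clipping from the list above fit together (<code>model</code>, <code>train_loader</code> and <code>criterion</code> are placeholders):</p>
<pre><code>import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=50)

for epoch in range(50):
    for images, masks in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), masks)     # BCE + DICE + Focal
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # gradient clipping: 1
        optimizer.step()
    scheduler.step()
</code></pre>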
<p>Transforms (we used albumentations):
1. HorizontalFlip
1. RandomContrast
1. RandomGamma
1. RandomBrightness
1. ElasticTransform
1. GridDistortion
1. OpticalDistortion
1. ShiftScaleRotate</p>
<p>Maybe we just couldn't cook resnet34 right =(</p>
SIIM-ACR Pneumothorax Segmentation | 4th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ACR Pneumothorax Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>these weights were selected on validation</p>
<p>thanks, I got an improvement from 0.854 to 0.856 on validation</p>
<p>Hi, kagglers.</p>
<p>My solution is based on UNet with a deep supervision branch for empty mask classification.</p>
<p><img src="https://raw.githubusercontent.com/amirassov/kaggle-pneumothorax/master/figures/model.png" alt="model"></p>
<ul>
<li>Model: UNet.</li>
<li>Backbone: ResNet34 backbone with frozen batch-normalization.</li>
<li>Preprocessing: training on random crops with <code>(512, 512)</code> size, inference on <code>(768, 768)</code> size.</li>
<li>Augmentations: ShiftScaleRotate, RandomBrightnessContrast, ElasticTransform, HorizontalFlip from albumentations.</li>
<li>Optimizer: Adam, batch_size=8</li>
<li>Scheduler: CosineAnnealingLR</li>
<li>Additional feature: the proportion of non-empty samples linearly decreased from 0.8 to 0.22 (as in train dataset) depending on the epoch. It helped to converge faster.</li>
<li>Loss: <code>2.7 * BCE(pred_mask, gt_mask) + 0.9 * DICE(pred_mask, gt_mask) + 0.1 * BCE(pred_empty, gt_empty)</code>. Here <code>pred_mask</code> is the prediction of the UNet and <code>pred_empty</code> is the prediction of the branch for empty-mask classification (a sketch of this loss is given after this list).</li>
<li>Postprocessing:
<code>if pred_empty &gt; 0.4 or area(pred_mask) &lt; 800: pred_mask = empty.</code>
Parameters are selected on the validation set.</li>
<li>Ensemble: averaging the 4 best checkpoints over 8 folds, horizontal flip TTA.</li>
<li>Hardware: 4 x RTX 2080</li>
</ul>
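<p>A minimal sketch of the loss above (the <code>dice_loss</code> term and the shape of <code>pred_empty</code> are assumptions for illustration, not the exact implementation):</p>
<pre><code>import torch.nn.functional as F

def total_loss(pred_mask, pred_empty, gt_mask, dice_loss):
    """2.7 * BCE(mask) + 0.9 * dice(mask) + 0.1 * BCE(empty-image branch)."""
    # pred_empty is assumed to be a (batch,) logit; gt_empty is 1 for images without pneumothorax
    gt_empty = (gt_mask.sum(dim=(1, 2, 3)) == 0).float()
    return (2.7 * F.binary_cross_entropy_with_logits(pred_mask, gt_mask)
            + 0.9 * dice_loss(pred_mask, gt_mask)
            + 0.1 * F.binary_cross_entropy_with_logits(pred_empty, gt_empty))
</code></pre>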
<p>Code: <a href="https://github.com/amirassov/kaggle-pneumothorax">https://github.com/amirassov/kaggle-pneumothorax</a></p>
SIIM-ACR Pneumothorax Segmentation | 5th place solution && code updated | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ACR Pneumothorax Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>try 512 * 512 input size </p>
<p><a href="/superbo">@superbo</a> <a href="https://github.com/earhian/SIIM-ACR-Pneumothorax-Segmentation-5th">https://github.com/earhian/SIIM-ACR-Pneumothorax-Segmentation-5th</a></p>
<p>Aspp is a block of deeplabV3, <a href="https://towardsdatascience.com/review-deeplabv3-atrous-convolution-semantic-segmentation-6d818bfd1d74">https://towardsdatascience.com/review-deeplabv3-atrous-convolution-semantic-segmentation-6d818bfd1d74</a></p>
<p>Yes, both classification and segmentation. </p>
<p>Aspp is a block of deeplabV3</p>
<p>Yes</p>
<p>The two methods are actually equivalent. I tried mean teacher and it worked, so I selected mean-teacher. By the way, it can be trained end-to-end.</p>
<p>5 fold se50 and 2fold se101</p>
<p>1024 is image size</p>
<p>both segmentation and classification</p>
<p>a lot of v100 16g/32g</p>
<p>北航吴彦祖牛逼</p>
<p>HAHAHA, thanks! </p>
<p>Congratulations to all !
My solution is based on semi-supervision, and I added a 2-class classifier for pneumothorax in the network.</p>
<p>Network : Unet with Aspp
Backbone : se50 & se101
Image size: (1024, 1024)
Optimizer : Adam
Loss : 1024 * BCE(results, masks) + BCE(cls, cls_target)
Semi-supervision: mean-teacher[1-2] with NIH Dataset (0.874 ---> 0.880)</p>
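<p>A minimal sketch of the mean-teacher update used for the semi-supervision (the EMA decay and the consistency term are illustrative; see the references and code link below for the full method):</p>
<pre><code>import torch
import torch.nn.functional as F

@torch.no_grad()
def update_teacher(student, teacher, alpha=0.999):
    """Mean teacher: the teacher's weights are an EMA of the student's weights."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.data.mul_(alpha).add_(s_param.data, alpha=1 - alpha)

def consistency_loss(student_logits, teacher_logits):
    """On unlabeled images, push the student towards the teacher's soft predictions."""
    return F.mse_loss(torch.sigmoid(student_logits), torch.sigmoid(teacher_logits))
</code></pre>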
<p>Scores:
stage1 0.8821
stage2 0.8643</p>
<p>[1] <a href="https://github.com/CuriousAI/mean-teacher">https://github.com/CuriousAI/mean-teacher</a>
[2] <a href="https://arxiv.org/pdf/1703.01780.pdf">https://arxiv.org/pdf/1703.01780.pdf</a></p>
<p>code : <a href="https://github.com/earhian/SIIM-ACR-Pneumothorax-Segmentation-5th">https://github.com/earhian/SIIM-ACR-Pneumothorax-Segmentation-5th</a></p> |
SIIM-ACR Pneumothorax Segmentation | 6th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ACR Pneumothorax Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, we would like to thank SIIM and ACR for this interesting competition.</p>
<p>Our final solution is based on EncodingNet (ResNets, 512 and 1024 sizes) and UNet (EfficientNetB4, se-resnext50, SENet154 with 512, 640 and 1024 sizes).
Our models were trained and tuned independently before the team merge. This gave us a lot of diversity in solutions.</p>
<p>The best augmentations were related to crops and rotations. We didn’t use contrast and brightness transformations.
Loss: BCE+Dice (tried Focal, didn’t work).
Lower image sizes reduced the score a lot, so a full-sized model should be better.
Tricks:
- classification on top of EncodingNet with heavy TTA (11 methods)
- small segments of predicted masks were deleted</p>
<p>Also I noticed that Horizontal Flip augmentation reduced my local CV, but didn’t check it on LB.</p>
<p>Link to github:
<a href="https://github.com/yura03101995/siim_pneumo">https://github.com/yura03101995/siim_pneumo</a></p>
<p>UPDATE:
For validation we used 4 folds. Folds were generated using <code>StratifiedKFold(n_splits=8)</code> along <code>isPneumothorax</code>. For each <code>solo model</code> we did
4-fold-blend model. Augmentations we used</p>
<p><code>
AUG = Compose(
[
HorizontalFlip(p=0.5),
OneOf(
[
ElasticTransform(
alpha=300,
sigma=300 * 0.05,
alpha_affine=300 * 0.03
),
GridDistortion(),
OpticalDistortion(distort_limit=2, shift_limit=0.5),
],
p=0.3
),
RandomSizedCrop(min_max_height=(900, 1024), height=1024, width=1024, p=0.5),
ShiftScaleRotate(rotate_limit=20, p=0.5)
],
p=1
)
</code></p>
<p><strong>Model1</strong>:
<strong>EncNet</strong> (from pytorch-encoding) with <strong>Resnet50</strong> pretrained on Pascal dataset.
First, we trained a 4-fold-blend model at 512 x 512 resolution with AUG.
Second, we trained a 4-fold-blend model at 1024 x 1024 resolution with AUG.
Finally, we averaged the two 4-fold-blend models.</p>
<p><strong>Loss</strong>: Dice
<strong>Augmentation</strong>: AUG
<strong>TTA</strong>: Flip, Clahe and <code>np.arange(-20, 21, 5)</code> angles rotations (total 10 TTA)</p>
<p><strong>Model2-3</strong>:
<strong>Two Unet</strong>-like models with <strong>SEResnext50</strong> and <strong>SEResnet152</strong> backbones were trained on
1024 x 1024 resolution. Each model also was averaged along 4 folds.</p>
<p><strong>Loss</strong>: Dice <br>
<strong>Augmentation</strong>: AUG
<strong>TTA</strong>: Flip</p>
<p><strong>Model4</strong>:
<strong>Unet</strong>-like model with <strong>EfficientNetB4</strong> backbone trained on <strong>640 x 640</strong> resolution.
It was also blended along 4 folds.</p>
<p><strong>Loss</strong>: weighted Dice + BCE
<strong>Augmentation</strong>: AUG
<strong>TTA</strong>: no TTA</p>
<p><strong>Final model</strong>:
Average of <strong>Model1</strong>, <strong>Model2</strong>, <strong>Model3</strong> and <strong>Model4</strong></p>
<p><strong>Classification threshold</strong>: ~0.32
<strong>Segmentation threshold</strong>: ~0.375</p> |
SIIM-ACR Pneumothorax Segmentation | 7th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ACR Pneumothorax Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks!</p>
<p>Thanks Peter! =]</p>
<p>Thanks mate!</p>
<ol>
<li><p>We noticed that for this task higher resolutions would increase performance in a decaying fashion. The sweet spot seemed to be 768 pixels. 1024 would use too much VRAM and take much longer to train while giving minimal improvements.</p></li>
<li><p>When I tried the 2nd level I had a holdout set (~10%) that I didn't use in any of the models. Then I'd simply train the 2nd level on this holdout set.</p></li>
</ol>
<p>Cheers mate! Thanks!</p>
<p>Thanks =]</p>
<p>Thanks! I never tested it on stage 2 but if I recall correctly EMANetDilatedResNet34 single fold was about ~0.8660 on stage 1 public LB.</p>
<p>I updated the topic with the link to the code: <a href="https://github.com/arc144/siim-pneumothorax">https://github.com/arc144/siim-pneumothorax</a></p>
<p>I'd like to thank Kaggle and the organizers for hosting this competition and congratulate all the competitors! Also, a special thanks to my partner <a href="/seesee">@seesee</a>. </p>
<h2>TL;DR</h2>
<p>Our solution is quite simple, it is an ensemble (simple average) of four different models (several folds each): <br>
- FPNetResNet50 (5 folds) <br>
- FPNetResNet101 (5 folds) <br>
- FPNetResNet101 (7 folds with different seeds) <br>
- PANetDilatedResNet34 (4 folds) <br>
- PANetResNet50 (4 folds) <br>
- EMANetResNet101 (2 folds) </p>
<p>Models trained at 768x768 (or close to that) using <a href="https://arxiv.org/abs/1711.05101">AdamW</a> optimizer. For the <a href="https://arxiv.org/abs/1612.03144">FPN</a> models Flip TTA was used whilst Scale (1024) TTA was used for the rest. We used two thresholds, one for segmentation and another (of higher value) for classification. </p>
<h2>See's Models</h2>
<p>See trained models based on the <a href="https://arxiv.org/abs/1612.03144">Feature Pyramid Network</a> decoder and ResNet as encoders. The ResNet models were based on the modified version proposed in the <a href="https://arxiv.org/abs/1812.01187">Bag-of-tricks paper</a>. See used <a href="https://arxiv.org/abs/1711.05101">AdamW</a> as the optimizer and a linear decay with warm-up for the learning rate schedule. For the loss he used a weighted combination of BCE and Dice. Horizontal flip TTA was also used.</p>
<p>In total he trained 17 models:
- FPNetResNet50 (5 folds) <br>
- FPNetResNet101 (5 folds) <br>
- FPNetResNet101 (7 folds with different seeds) </p>
<p>See's code is available in his <a href="https://github.com/see--/pneumothorax-segmentation">GitHub repo</a>.</p>
<h2>Eduardo's Models</h2>
<p>I trained models based on two different types of decoders: <a href="http://arxiv.org/abs/1907.13426">EMANet</a> and <a href="http://arxiv.org/abs/1805.10180">PANet</a>. Both of my own re-implementation with small modifications:</p>
<ul>
<li><p>For the PANet I just reduced the number of channels before going to the FPA module. I didn't notice any loss of performance and the network was much lighter.</p></li>
<li><p>For EMANets I replaced the steep 8x bilinear upsampling with gradual 2x upsampling using the GAU module of PANet. It considerably improved the performance.</p></li>
</ul>
<p>For the encoders I also used the ResNet family with the modifications proposed by the <a href="https://arxiv.org/abs/1812.01187">Bag-of-tricks paper</a>. My models were trained using <a href="https://arxiv.org/abs/1711.05101">AdamW</a> with <a href="https://arxiv.org/abs/1803.05407">Cosine Annealing</a> learning rate schedule. For the loss a simple BCE worked out fine. In total I trained 10 models at 768x768 resolution:</p>
<ul>
<li>PANetDilatedResNet34 (4 folds) </li>
<li>PANetResNet50 (4 folds) </li>
<li>EMANetResNet101 (2 folds) </li>
</ul>
<p>For prediction Scale TTA (768, 1024) was used.</p>
<h2>Ensembling and tresholding</h2>
<p>For the ensembling, a simple average of the models' soft predictions (after sigmoid, before thresholding) was used. Finally, we used two values for thresholding: one of smaller value (0.2) is used for segmentation and another (0.72) is used for classification of non-pneumothorax images.</p>
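<p>One plausible reading of this two-threshold rule, as a minimal sketch (exactly how the 0.72 classification threshold is applied is an assumption here):</p>
<pre><code>import numpy as np

def ensemble_and_threshold(prob_maps, seg_thr=0.2, cls_thr=0.72):
    """Average the soft predictions; empty the mask if no pixel is confident enough."""
    mean_prob = np.mean(prob_maps, axis=0)
    if mean_prob.max() &lt; cls_thr:                     # image-level rejection
        return np.zeros_like(mean_prob, dtype=np.uint8)
    return (mean_prob &gt; seg_thr).astype(np.uint8)
</code></pre>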
<h2>Things that did not work</h2>
<ul>
<li>HRNet</li>
<li>FishNet as encoder (not bad but no improvements either)</li>
<li>A second-level classifier based on gradient boosting using segnets' outputs and metadata as features</li>
</ul>
<h2>Hardware</h2>
<ul>
<li>See had access to V100s</li>
<li>Eduardo used a 1080ti</li>
</ul>
<p>The code is available <a href="https://github.com/arc144/siim-pneumothorax">here</a></p>
<p>Thanks a lot mate =]</p> |
SIIM-ACR Pneumothorax Segmentation | 9th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ACR Pneumothorax Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hello everyone.
Firstly thanks to the host and Kaggle for this interesting competition, and congrats to all the winners. My solution is quite chaotic, so thanks also to everyone who can read it to the end:)</p>
<h2>First attempts(not very successful):</h2>
<p>At the beginning I decided to use a two-stage approach in this competition - train separate classification(pneumothorax/non-pneumothorax) and segmentation models.
Therefore, I trained several common classification models: resnet34, resnet50, seresnext50, seresnext101, dpn98; they all gave approximately the same accuracy, ~0.9, and I could not improve this result in any way. After that I trained several U-Net models (only on images containing pneumothorax) with different backbones and got a score in the gold zone. But this approach was not very stable and the results on the LB did not correlate with my local CV.
And after several unsuccessful attempts to improve this solution, I was disappointed and stopped participating in this competition for about a month.</p>
<h2>Final solution:</h2>
<p>After returning to the competition, I decided to try a slightly different approach, which eventually became my final solution. It also has two steps:</p>
<h3>Classification:</h3>
<p>For this purpose, at that time, I decided to use a U-Net model and determined whether there is pneumothorax in a picture by a threshold on the number of predicted mask pixels. Based on my validation the best threshold was 2000 (a sketch of this rule is given after this block).
Backbone: seresnext50
Data: for this step I used all images and balanced batches (pneumothorax/non-pneumothorax); it greatly accelerated convergence
Data splits: 5 folds and 10 folds stratified by pneumothorax area.
Input size: 768x768
Loss: BCE
Augmentations: hflips, rotations(up to 10 degree), random brightness, contrast and gamma, blur
Lr scheduling: reduce lr on plateau with patience=3 epochs.</p>
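<p>A minimal sketch of this pixel-count rule (the per-pixel binarization threshold of 0.5 is an assumption; only the 2000-pixel count comes from the write-up):</p>
<pre><code>import numpy as np

def unet_as_classifier(prob_mask, pixel_thr=0.5, count_thr=2000):
    """Call an image 'pneumothorax' if enough pixels exceed the probability threshold."""
    return int((prob_mask &gt; pixel_thr).sum() &gt; count_thr)
</code></pre>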
<p>This approach gave a pretty good result, but there remained a rather large number of False Negative examples. Therefore, I decided to use the classification models that I trained in the beginning and mark additional pictures as containing pneumothorax if all models classified them as such.</p>
<h3>Segmentation:</h3>
<p>The core element at this stage was also a U-Net model) But this time, predictions were made only on pictures that were identified as containing pneumothorax at the classification stage.
Backbone: seresnext50
Data: only images containing pneumothoraxes
Data splits: 5 folds and 10 folds stratified by pneumothorax area
Input size: 928x928, 768x768
Loss: BCE + Dice
Augmentations: same as in the classification stage
Lr scheduling: reduce lr on plateau with patience=5 epochs.</p>
<p>Update:
Code <a href="https://github.com/scizzzo/kaggle-siim-pneumothorax">https://github.com/scizzzo/kaggle-siim-pneumothorax</a></p> |
SIIM-ACR Pneumothorax Segmentation | My first bronze medal(128th place solution) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ACR Pneumothorax Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>yes it is dude,thanks a ton <3</p>
<p>Thank you a lot <a href="/dcstang">@dcstang</a>, and yeah, I had to spend plenty of time on this competition to win my first bronze medal. Each and every model training took at least 7 hours per day, I tried at least 60-70 different models with little modifications, and my team mates also tried so many things during the competition. After all, we had to spend a lot of time on this competition to achieve a bronze medal, thank you :)</p>
<p>you are welcome <a href="/kranthi9">@kranthi9</a> </p>
<p>in shah Allah,we will go deeper altogether brother <3 <a href="/frtgnn">@frtgnn</a> </p>
<p>thanks <a href="/shubhampathakdream">@shubhampathakdream</a> and i wish you luck </p>
<p>thank you a lot mate, can you please suggest which classifiers I could test??? <a href="/raghaw">@raghaw</a> </p>
<p>i will try my level best mate,thank you,i wish you do better in the future competitions mate :)</p>
<p>thank you <a href="/brunhs">@brunhs</a> :)</p>
<p>I am absolutely over the moon now. Today I got my first ever bronze medal from my first ever Kaggle competition. First up I would like to congratulate my team mates <a href="/dskswu">@dskswu</a> <a href="/muhakabartay">@muhakabartay</a> <a href="/adityaguru149">@adityaguru149</a> and <a href="/mks2192">@mks2192</a> for this little achievement, especially William Green, because he helped me a lot. Congratulations all.
As the leader of this team I am extremely pleased. From June 24, 2019 (the first day of this competition) till 2nd/3rd September (just before the stage 2 final submission deadline) I was a total failure in this competition, we were nowhere near the medal zone, but one thing I can assure you: "we worked really hard".</p>
<p>Just before the final submission deadline my model fortunately got good accuracy and found itself in the medal zone.</p>
<p>Lesson: "you need to see what happens if you do not give up"</p>
<p>Congratulations everyone,thank you for your support and please keep us in your prayers!</p>
<p>thanks</p>
<p>here is my solution : <a href="https://www.kaggle.com/mobassir/unet-with-se-resnext50-32x4d-encoder-for-stage-2">https://www.kaggle.com/mobassir/unet-with-se-resnext50-32x4d-encoder-for-stage-2</a></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2034058%2F11fd204a6cd4a88fab17734baa0cf366%2Fmedal.PNG?generation=1567751081359915&alt=media" alt=""></p>
<p>thank you for all of your supports <3</p>
<p>glad it helps,you guys are welcome</p>
<p>yes it's just the beginning,,i gotta long way to go,thanks a ton mate</p>
<p>thank you mate,i didn't realize that error!!! thanks :)</p>
<p>thank you <a href="/rhhridoy">@rhhridoy</a> </p> |
SIIM-ACR Pneumothorax Segmentation | The 3rd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ACR Pneumothorax Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>The positive sample ratio was relatively low, and the masks of many of the positive samples were also small, so to make the model converge quickly I oversampled the positive samples. As the evaluation metric punishes False Positives, I kept the ratio equal, or I could have used even fewer negative samples.</p>
<p>Hi, <a href="/mobassir">@mobassir</a>, I cannot reach my servers which store all the related resources at the moment, as I must stay in a rural region for a while. I remember I deleted the datasets as they are large.
I have provided all the source code to generate the solution above; you may reproduce it by running it.</p>
<p>I did not compare EMA with the best checkpoint, I just used it as I found it made the model more stable. As to the lung masks, there are many sources we can use, as listed on the external data page:
<a href="https://www.kaggle.com/c/siim-acr-pneumothorax-segmentation/discussion/97770#602874">https://www.kaggle.com/c/siim-acr-pneumothorax-segmentation/discussion/97770#602874</a>
<a href="https://www.kaggle.com/c/siim-acr-pneumothorax-segmentation/discussion/97770#604177">https://www.kaggle.com/c/siim-acr-pneumothorax-segmentation/discussion/97770#604177</a>
I tried the first one. It's not a problem if we don't have the labels; labeling 200 images in the train set is enough to train a good lung segmentation model at size 256x256, and we can check the quality by looking at the masks computed from the predicted lungs.</p>
<p>Congratulations to new grandmaster! :)</p>
<p>Pseudo labels usually improve the segmentation task if we select the samples correctly. </p>
<p>EMA:
<a href="https://github.com/ajbrock/BigGAN-PyTorch">https://github.com/ajbrock/BigGAN-PyTorch</a>
there is an option to use ema.
synch-BN:
<a href="https://github.com/vacancy/Synchronized-BatchNorm-PyTorch">https://github.com/vacancy/Synchronized-BatchNorm-PyTorch</a>
<a href="https://github.com/NVIDIA/apex">https://github.com/NVIDIA/apex</a> </p>
<p>Used in Decoder only</p>
<p>First of all, I am sorry that what I used was not symmetric lovasz; it's plain lovasz in my final submission indeed. I combined other losses such as SoftDice with lovasz but I found they did not improve the model, and if I used the focal loss, the eval-set loss increased after some epochs, so I removed them to keep it simple and efficient. I found loss functions, backbones and unet structures were not so important in this competition after I reached 0.875 on the public LB, so I spent more time on the data and tried to avoid overfitting to the public LB.</p>
<p>Congrats to all the winners, and thanks to the Society for Imaging Informatics in Medicine (SIIM), the American College of Radiology (ACR), @RadiologyACR, the Society of Thoracic Radiology (STR), @thoracicrad and Kaggle
for hosting such a competition, which may aid in the early recognition of pneumothoraces and save lives.</p>
<p><strong>Challenges:</strong>
1. Relatively large image size.
2. The quality of the external data sets is not so good, but making full use of them may help.</p>
<p><strong>Solution:</strong></p>
<p><img src="https://raw.githubusercontent.com/bestfitting/kaggle/master/siim_acr/slides/pipeline.png" alt="pipeline"></p>
<p><strong>Data:</strong>
As the image sizes are a little big, to save memory and computation, I first trained a UNET model to predict lungs from the 1024x1024 images; all my models were based on the cropped lungs, and 576x576 crops were good enough for my models.
<img src="https://raw.githubusercontent.com/bestfitting/kaggle/master/siim_acr/slides/lung_seg.png" alt="lung segmentation unet"></p>
<p>In deep learning tasks, more data is often important, so I tried to make full use of the CheXpert and NIH datasets. We could not use them directly because, after reading the related papers, I knew their labels are not accurate enough, so I guessed that I could use pseudo labels. I used a resnet34 UNET model trained on the competition data (0.858 on the public LB) to predict the CheXpert positive samples, and selected the positive samples that were also predicted as positive by my model when training the pseudo-label models. As time was limited after the postponed DOGs-GAN competition, I did not predict the negative samples of CheXpert, I just treated them as negative. As to the NIH dataset, I did not use the labels they provided, I just believed my model's predictions.
<img src="https://raw.githubusercontent.com/bestfitting/kaggle/master/siim_acr/slides/data_strategy.png" alt="data strategy">
I kept the ratio of positive and negative samples equal; when training pseudo-label models, I kept the ratio of pseudo samples at 0.5 of the normal samples.</p>
<p><strong>Models</strong></p>
<p>Experiments were done mainly on UNETs based on resnet34 and SE-resnext50 backbones. I selected them because resnet34 is light enough for experiments and SE-resnext50 is deep enough for this competition; I did not have enough time and resources to train larger and deeper models.
<img src="https://raw.githubusercontent.com/bestfitting/kaggle/master/siim_acr/slides/seresnext.png" alt="seresnext">
My final models were 3 SE-Resnext50 models: <br>
m1:704x704 images with no pseudo
m2:576x576 images with CheXpert Pseudo
m3:576x576 images with CheXpert and NIH Pseudo </p>
<p>Although my final submission was an ensemble of these three models, 3 of the 10 folds of m2 could already reach 0.8809 on the public LB and 0.8642 on the private LB. The ensemble did not bring much improvement in this competition.</p>
<p>Attention: CBAM
Loss: Lovasz loss. I tried Active Contour Loss as well, but it did not bring a local CV increase.
No classification model, no classification loss: I thought pixel-level labels were enough, and if we introduced a classification model it would be hard to select thresholds on the private dataset.
No threshold search, just used 0.5
Optimizer: Adam with 0.0001 learning rate, no learning rate change during training
Epochs: 15
EMA of model parameters after epoch 6.
Batch Size: 3 per GPU when trained on 576x576 images, so the batch size was 9 when using 3 1080Ti; no batch-size accumulation and no Synch-BN was used. 2 per GPU on 704x704 images.</p>
<p><a href="https://github.com/bestfitting/kaggle/tree/master/siim_acr">https://github.com/bestfitting/kaggle/tree/master/siim_acr</a></p>
<p>We can use our model to predict on the CheXpert images; the predicted masks (threshold=0.5) can be treated as labels, and then the images and the pseudo labels can be used as training samples to train the unet model.</p>
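<p>A minimal sketch of generating such pseudo labels (the loader format and the positivity check are assumptions for illustration, not the exact code):</p>
<pre><code>import numpy as np
import torch

@torch.no_grad()
def make_pseudo_labels(model, loader, threshold=0.5, min_pixels=1):
    """Predict external images and keep confident masks as pseudo labels."""
    model.eval()
    pseudo = []
    for images, image_ids in loader:                  # loader over CheXpert/NIH images
        probs = torch.sigmoid(model(images))
        masks = (probs &gt; threshold).cpu().numpy().astype(np.uint8)
        for mask, image_id in zip(masks, image_ids):
            if mask.sum() &gt;= min_pixels:              # treat as a positive pseudo sample
                pseudo.append((image_id, mask))
    return pseudo
</code></pre>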
<p>I tried all kinds of attention mechanisms last year and found CBAM performed better than scSE in many tasks, so I used it.</p>
<p>The number of positive pseudo samples from CheXpert is 5595 and the number of negative pseudo samples is 48018.
The number of positive pseudo samples from NIH is 2373 (after removing the samples in the stage 1 train and test sets) and the number of negative samples is 97697.</p>
SIIM-ACR Pneumothorax Segmentation | Yet another solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ACR Pneumothorax Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>1024x1024, 512x512</p>
<p>This is a short overview of our solution:</p>
<h3>Ideas:</h3>
<p>The competition metric and dataset balance make this competition a mix of a hidden classification competition and a segmentation competition. This led us to decompose the problem into the following parts:</p>
<p>a) Solve a classification task. We have done this with a mix of binary classifiers and segmentation networks. (more about this later)</p>
<p>b) Solve a segmentation task; networks that lead to good classification do not necessarily lead to good dice scores.</p>
<p>c) With a given estimated classification probability and a given estimated segmentation dice, decide if the mask should be included in the submission or not.</p>
<p>Just to explain the idea: let's assume that you have a 0.9 probability that a particular image has pneumothorax, but your estimate of the dice score that you can get from this image is 0.05. This clearly means that it is statistically more profitable to exclude this image and set a "-1" in the submission.</p>
<p>d) Observation - a segmentation network that predicts a large max probability for at least some pixels has a high probability of processing this image with a good dice score. Another observation:
if the different networks largely agree about the image in question and produce a clear mask, then this image most likely contains pneumothorax and this pneumothorax most likely is easily
processable by our networks.</p>
<p>e) The optimal threshold for the segmentation depends on the max predicted pixel probability.</p>
<p>Based on these ideas, we created several ansembles of different networks: Unet and FPN as architectures; ResNet34, InceptionV2, EffNet, DenseNet121 as backbones.
For classification we used: Xception, InceptionResNetV2, EffNet.</p>
<h3>Ansembling</h3>
<p>Then we used hyperopt to find the ansemble composition that leads to the highest score. This composition is basically a hard-coded decision tree with a set
of comparisons of the predicted probability of the classification ansemble, the predicted max pixel probability of the segmentation ansemble, and the level of agreement
between individual networks in the segmentation ansemble.</p>
<p>We also used a GA-like algorithm to find a set of rules for choosing the optimal threshold for image segmentation on a per-image basis.</p>
<h3>Learning process details:</h3>
<ol>
<li>Loss: BCE + DICE. Some networks just plain BCE</li>
<li>Batch: 1-2 per GPU card</li>
<li>No image normalization at all.</li>
<li>ReduceLROnPlateau, no warmup, no restarts - just plain Adam</li>
<li>15-20 epochs for almost all networks except those that were trained by Lyan.</li>
</ol>
<h3>What did not work:</h3>
<p>a) End-to-end segmentation + classification training (I tried it but it worked worse than ansembling separate networks)
b) Using external data
c) Having a grandmaster in the team.
d) Moving networks from 1080ti/2080ti to more potent cards. (Finding a good learning profile for them was a task on its own, and if you just scale the batch and lr, your network performance will be different and usually worse.)</p>
<h3>Our biggest faults:</h3>
<p>I was too exhausted to perform a proper retraining on stage 2, so only Lyan's ansemble was retrained. This probably cost us gold.</p>
<p>Despite the big number of people in the team, we were not able to stick to a single pipeline and perform a proper scientific search.</p>
<p>Pipelines that I used for training:</p>
<p>Segmentation Pipeline: <a href="https://github.com/musket-ml/segmentation_training_pipeline">https://github.com/musket-ml/segmentation_training_pipeline</a>
Classification Pipeline: <a href="https://github.com/musket-ml/classification_training_pipeline">https://github.com/musket-ml/classification_training_pipeline</a></p>
<h3>Hardware:</h3>
<p>I performed a couple of experiments on P100/V100 cards and then stuck to using my 2x2080ti and 2x1080ti machines for training (due to differences in learning profile); Lyan used a mix of his own 2080 ti's with P100 cards</p>
<p>Regards,
Pavel</p>
<p>Sorry, this code is pretty dirty right now, and I have not scored any gold medals yet. I will probably release it later after making this technique more production-ready.</p>
<p>>is "ansemble" different from "ensemble"? </p>
<p>No difference. It is just my weird english. Sorry for confusion.</p>
<p>>Also, what is your search space with hyperopt?</p>
<p>a) weights of ensembles in blending: a1*w1 + a2*(1-w1), etc.
b) vote counts of segmentation network maximums required to score a hit
c) vote threshold
e) pure classifier + segmentation-network composition threshold
f) if one of the components is extremely sure of its prediction, it still makes sense to count it, sometimes because the probability of a correct prediction is not linearly distributed.</p>
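<p>For illustration, a minimal sketch of how such a search could be set up with hyperopt; the parameter names, ranges and the <code>score_submission</code> helper are assumptions, not the exact space we used:</p>
<pre><code>from hyperopt import fmin, tpe, hp

space = {
    "w1": hp.uniform("w1", 0.0, 1.0),                   # blend weight: w1*seg_a + (1-w1)*seg_b
    "vote_count": hp.quniform("vote_count", 1, 4, 1),   # how many networks must "fire"
    "vote_thr": hp.uniform("vote_thr", 0.3, 0.9),       # max-pixel probability to count a vote
    "cls_thr": hp.uniform("cls_thr", 0.3, 0.9),         # classification ansemble cut-off
}

def objective(params):
    # score_submission (placeholder) applies the decision tree with these
    # parameters to out-of-fold predictions and returns the competition metric
    return -score_submission(params)

best = fmin(objective, space, algo=tpe.suggest, max_evals=500)
</code></pre>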
SIIM-ACR Pneumothorax Segmentation | [8th place] Solution Overview + Code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ACR Pneumothorax Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>See this paper about snapshot ensembling: <a href="https://arxiv.org/pdf/1704.00109.pdf">https://arxiv.org/pdf/1704.00109.pdf</a></p>
<p>1 "model" for me is technically 3 models (i.e. snapshots), predictions are a weighted average.
I trained 2 models each on folds 1-4, and 1 model each on folds 5-8 for a total of 12 models. </p>
<p>We first mainly used a single fold to iterate through experiments. Then k-fold CV was used to train models for ensemble at the end. We reserved 10% of the holdout data to use as a test set and dev set for ensembling. We wanted to make sure the models performed well both locally and on the leaderboard. </p>
<p>We tuned different size removal thresholds based on increments of 1024 pixels, and found those 2 to be the best between local CV and stage 1 LB.</p>
<p>We used 0.7 as our threshold (this was after it was multiplied by the segmentation output from the model trained on positive images only, so it is a bit lower). For our single stage models, the best classification threshold was around 0.8 with segmentation threshold at 0.4. </p>
<p>I think the trick was proper weighting of the BCE loss function. We actually used your loss function, modified slightly (thanks!):
<code>loss = (pos_frac*pos*loss/pos_weight + neg_frac*neg*loss/neg_weight).sum()</code></p>
<p>In your original loss function, <code>pos_frac = 0.25</code> and <code>neg_frac = 0.75</code>. I modified it so that <code>pos_frac = 0.1</code> and <code>neg_frac = 4.9</code>. I also used <code>neg_frac = 7.9</code> for some models. </p>
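<p>For reference, a self-contained sketch of that fraction-weighted BCE, assuming <code>pos_weight</code>/<code>neg_weight</code> are the per-batch pixel counts (which is how the referenced loss is usually written):</p>
<pre><code>import torch.nn.functional as F

def weighted_bce(logits, targets, pos_frac=0.1, neg_frac=4.9):
    # per-pixel BCE, then rebalance so positives contribute pos_frac
    # and negatives neg_frac of the total loss
    loss = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    pos = (targets > 0.5).float()
    neg = 1.0 - pos
    pos_weight = pos.sum().clamp(min=1.0)   # number of positive pixels in the batch
    neg_weight = neg.sum().clamp(min=1.0)   # number of negative pixels in the batch
    return (pos_frac * pos * loss / pos_weight +
            neg_frac * neg * loss / neg_weight).sum()
</code></pre>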
<p>Congratulations to all of the winners and participants. </p>
<p><a href="/felipekitamura">@felipekitamura</a> and I are happy to share our solutions. </p>
<p>Full documentation: <a href="https://docs.google.com/document/d/108xK_J4WVTtuuwMfxBINR7FLXYRNxldnAQV5cLfO-1g/edit?usp=sharing">https://docs.google.com/document/d/108xK_J4WVTtuuwMfxBINR7FLXYRNxldnAQV5cLfO-1g/edit?usp=sharing</a></p>
<p>Code:
<a href="https://github.com/i-pan/kaggle-siim-ptx">https://github.com/i-pan/kaggle-siim-ptx</a></p>
<p>Data split: 10% holdout for ensemble, 10-fold CV on remaining 90%, stratified by pneumothorax size</p>
<p>Architecture: DeepLabV3+</p>
<p>Backbone: ResNet50/101 and ResNeXt50/101 with group normalization</p>
<p>Loss: weighted BCE (trained on all images) or soft Dice (trained on positives only)</p>
<p>Optimizer: Vanilla SGD, momentum 0.9</p>
<p>Training: batch size 4, 1024 x 1024 and batch size 1, 1280 x 1280 for pure segmentation (did not retrain on stage 2)</p>
<p>Schedule: cosine annealing, 100 epochs, 5 snapshots, initial LR 0.01 to 0.0001</p>
<p>Ensemble:
- 12 models total (x3 snapshots/model)
- 4 trained on positives only with soft Dice
- 8 trained on all images with weighted BCE
- I used 4 of the models trained on all images as "classifiers"
- Max pixel value was taken as classification score, averaged across 4 models
- Multiplied pixel-level scores from 4 models trained on positives only by this classification score, then averaged
- Final ensemble: multiplied score as above averaged with pixel-level scores based on other 4/8 models trained on all images
- Hflip TTA</p>
<p>Post-processing:
- Remove images with total mask size <2048, 4096 pixels </p>
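<p>A rough sketch of how the ensembling and small-mask removal fit together; <code>probs_clf</code>, <code>probs_pos</code> and <code>probs_seg</code> stand for the averaged pixel probabilities of the three model groups described above, and the thresholds are illustrative:</p>
<pre><code>import numpy as np

def predict_image(probs_clf, probs_pos, probs_seg, seg_thr=0.4, min_size=2048):
    # probs_clf: (4, H, W) probs of the 4 all-image models used as "classifiers"
    # probs_pos: (4, H, W) probs of the models trained on positives only
    # probs_seg: (4, H, W) probs of the remaining all-image models
    cls_score = probs_clf.max(axis=(1, 2)).mean()      # max pixel value as classification score
    gated = probs_pos.mean(axis=0) * cls_score         # gate positive-only models
    final = (gated + probs_seg.mean(axis=0)) / 2.0     # blend with all-image models
    mask = final > seg_thr
    if mask.sum() < min_size:                          # small-mask removal
        mask[:] = False
    return mask
</code></pre>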
<p>What did not work/insights:
- Ensembling model with Felipe: he got to 0.8750 with Unet, EfficientNetB4 kernel using 512 x 512 trained on soft Dice loss, all images. We weren't able to properly ensemble our models and ended up dropping it in favor of my slightly better ensemble.
- I personally tried lower resolutions and Unet, LinkNet, PSPNet, EncNet, HRNet, etc. All did worse than DeepLab, 1024 x 1024
- Whole-image classifiers: did not work as well as segmentation
- SGD worked better than Adam, Adabound optimizers
- Hflip TTA + small mask removal helped
- Weighted BCE loss was most stable for me, Lovasz, soft Dice were unstable or did not converge at all</p>
<p>Local: 0.8737
Stage 1 LB: 0.8780 (remove 4096 pixels)
Stage 2 Private: 0.8627 (remove 2048 pixels)</p> |
SIIM-ACR Pneumothorax Segmentation | [solution] 10-th place | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ACR Pneumothorax Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>The core of the pipeline is still quite similar to my <a href="https://www.kaggle.com/c/siim-acr-pneumothorax-segmentation/discussion/99440">previous post</a>.</p>
<p>One major change is the split of the problem into segmentation and classification. Also after Konstantin joined we have tried more complicated models and a lot of them worked better then ResNet34.</p>
<p>Now, when I see, that most of the team tackled the problem jointly, I start to believe, that having the ensemble of both could significantly increase the score.</p>
<p><strong>Common:</strong>
Data split: CV5
Optimizer: Adam
Scheduler: Reduce lr on plateau
Augmentations: relatively aggressive: ShiftScaleRotate, Grid- and Elastic-transformation, GaussianNoise</p>
<p><strong>Classification:</strong>
Models: se_resnext101 (2 snapshots), senet154
Resolution: 768x768
Loss: BCE
Additional features: TTAx2(hlip), pseudo labeling, gradient accumulation (bs = 100-200)</p>
<p>The key here is the model selection. Instead of focusing on accuracy, I focused on the F0.5 metric with a fixed threshold of 1.0. The motivation is quite simple: the average dice of the segmentation model is around 0.58. It means that a correctly guessed negative instance contributes 1 to the score, while a correctly guessed positive instance contributes only 0.58.</p>
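<p>A minimal sketch of that selection criterion (scikit-learn's <code>fbeta_score</code>; the threshold of 1.0 is from the text, the rest is generic):</p>
<pre><code>import numpy as np
from sklearn.metrics import fbeta_score

def selection_score(y_true, y_score, threshold=1.0):
    # F0.5 weighs precision higher than recall: a false positive throws away a
    # "free" 1.0 on a negative image, while a true positive only buys ~0.58 dice
    y_pred = (np.asarray(y_score) > threshold).astype(int)
    return fbeta_score(y_true, y_pred, beta=0.5)
</code></pre>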
<p><strong>Segmentation:</strong>
Models: Unets: dpn98, se_resnet101, se_densenet121 (2 snapshots of each)</p>
<p>The train process consist of 4 stages:
1. Loss: BCE + dice; size: 512x512
2. Loss: BCE + dice; size: 1024x1024
3. Loss: BCE; with soft pseudo labels; size: 1024x1024
4. Loss: symmetric lovasz; size: 1024x1024</p>
<p>Additional features: TTAx6(hlip + rescale), gradient accumulation (bs = 50-100)</p>
<p><strong>What did not work:</strong>
- Unets with other encoders worked worse (resnet34, resnet50, dpn107, dpn131, se_resnext50_32x4d, se_resnext101_32x4d, senet154). senet154 worked much better than other models on size 512x512, but completely failed during scaling to 1024x1024.
- Lookahead optimizer didn't improve the optimization process (<a href="https://arxiv.org/abs/1907.08610">https://arxiv.org/abs/1907.08610</a>)
- Other losses for classification: FocalLoss, SoftF1Loss. The accuracy with FocalLoss was better then with BCE, but f0.5 metric was worse. </p>
<p>The code: <a href="https://github.com/SgnJp/siim_acr_pneumothorax">https://github.com/SgnJp/siim_acr_pneumothorax</a></p>
<ol>
<li>It is like <a href="https://en.wikipedia.org/wiki/F1_score">F1 score</a>, where beta is equal to 0.5</li>
<li>Simple mean of predictions</li>
</ol> |
SIIM-FISABIO-RSNA COVID-19 Detection | 10th place with code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-FISABIO-RSNA COVID-19 Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>The 0.647 was a blend with the predictions using tabular data (annotations) in RICORD and BIMCV as features. It worked pretty well in public LB but somehow failed in private LB.</p>
<p>I will publish the code in a few days</p>
<p>Training code: <a href="https://github.com/GuanshuoXu/SIIM-FISABIO-RSNA-COVID-19-Detection-10th-place-solution" target="_blank">https://github.com/GuanshuoXu/SIIM-FISABIO-RSNA-COVID-19-Detection-10th-place-solution</a><br>
Inference kernel: <a href="https://www.kaggle.com/wowfattie/notebook68483076ff?scriptVersionId=70714056" target="_blank">https://www.kaggle.com/wowfattie/notebook68483076ff?scriptVersionId=70714056</a><br>
video: <a href="https://drive.google.com/file/d/15l82vX5TCD-ltcmxAcOJ0A-c5vYOcY3Z/view?usp=sharing" target="_blank">https://drive.google.com/file/d/15l82vX5TCD-ltcmxAcOJ0A-c5vYOcY3Z/view?usp=sharing</a></p>
<p>My solution is comprised of a detection model to predict all the targets and some expert models for performance enhancement.</p>
<p>Since there is a detection target, it's more convenient to work with a detection model. To predict all six targets, the whole image was used as the bbox for the four study-level labels; the 'none' target was not assigned any bbox and was predicted by multiplying the scores of the top 3 opacity bbox predictions of each image. The idea was mainly inspired by the VinBigData top solutions. My 5-fold EfficientDet-D5 trained with this method achieved public LB 0.622 and private LB 0.618. </p>
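<p>For clarity, a minimal sketch of the training-target construction implied above (the dictionary format is purely illustrative; how the top-3 opacity scores are combined into the 'none' score at inference time is described in the text and not repeated here):</p>
<pre><code>def build_detection_targets(height, width, study_label, opacity_boxes):
    # study_label: one of "negative", "typical", "indeterminate", "atypical"
    # opacity_boxes: list of [x1, y1, x2, y2] image-level opacity annotations
    targets = [{"label": study_label, "box": [0, 0, width, height]}]  # whole image
    for box in opacity_boxes:
        targets.append({"label": "opacity", "box": box})
    # "none" gets no box at all and is handled at inference time
    return targets
</code></pre>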
<p>To improve performance, I added some yolov5 models for opacity detection, efficient-B7&B8 for none prediction, and efficient-B7&B8 for study-level label predictions. I used RICCORD, BIMCV+ and BIMCV- dataset for pseudo-labeling. Blending them all gave me public LB 0.639 and private LB 0.624.</p> |
SIIM-FISABIO-RSNA COVID-19 Detection | 11th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-FISABIO-RSNA COVID-19 Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>You're welcome.</p>
<p>You're welcome</p>
<p>Thank you, looking forward to next time you can get your satisfactory results.</p>
<p>You're welcome. Keep kaggling, you will learn more, I also learned in this community.</p>
<p>Steps</p>
<ol>
<li><p>Training with siim covid dataset.</p></li>
<li><p>Generate pseudo labels with the step-1 model on the test dataset, the RICORD dataset and the BIMCV dataset. (Note: you need to remove duplicate images and invalid images.)</p></li>
</ol>
<p>First of all, I would like to thank you for organizing this very interesting competition and congratulations to all the winners of this competition. Also thanks to everyone in our team <a href="https://www.kaggle.com/quochungto" target="_blank">@quochungto</a> <a href="https://www.kaggle.com/morizin" target="_blank">@morizin</a> <a href="https://www.kaggle.com/joven1997" target="_blank">@joven1997</a> <a href="https://www.kaggle.com/socom20" target="_blank">@socom20</a> for allowing me to win the first gold medal in this competition. </p>
<p><strong><em>Study Level</em></strong></p>
<ul>
<li>Train classification(4 classes: negative, typical, indeterminate, atypical) + segmentation(boxes to mask) with siim covid dataset .</li>
<li>Train classification(4 classes: negative, typical, indeterminate, atypical) + segmentation(boxes to mask) + pseudo label with siim covid dataset + test dataset + ricord dataset + bimcv dataset.(When dealing with bimcv external data, I would like to thank <a href="https://www.kaggle.com/quochungto" target="_blank">@quochungto</a> and <a href="https://www.kaggle.com/joven1997" target="_blank">@joven1997</a> manually deleting the noise data.)</li>
<li>Augmentation: Resize, HorizontalFlip, RandomBrightness, RandomContrast, OpticalDistortion, GridDistortion, HueSaturationValue, ShiftScaleRotate, CutoutV2.</li>
<li>Loss: <br>
cls loss = BiTemperedLogisticLoss <a href="https://github.com/mlpanda/bi-tempered-loss-pytorch/blob/master/bi_tempered_loss.py" target="_blank">link</a>.<br>
aux loss = 0.9*bce + 0.1*lovasz (a sketch of this combination follows the list below).</li>
<li>Optimizer: Adam with init_learning_rate 0.0001.</li>
<li>Scheduler: GradualWarmupSchedulerV2 <a href="https://www.kaggle.com/haqishen/1st-place-soluiton-code-small-ver" target="_blank">link</a>.</li>
<li>Test time augmentation: src+ horizontal flip.</li>
<li>Final submission includes 4 models with diversity of encoders, decoders, input size:<br>
Efficientnet_b5 512, Efficientnet_b7 512, Efficientnetv2_m 512, Swim-Transformer 384.</li>
<li>Ensemble method: more details <a href="https://www.kaggle.com/cdeotte/forward-selection-oof-ensemble-0-942-private" target="_blank">link</a> (Thanks for <a href="https://www.kaggle.com/morizin" target="_blank">@morizin</a> lot of ideas)</li>
</ul>
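<p>A hedged sketch of that auxiliary segmentation loss; the <code>lovasz_hinge</code> import refers to the widely used public PyTorch implementation and is an assumption here:</p>
<pre><code>import torch.nn.functional as F
from lovasz_losses import lovasz_hinge   # assumed: public PyTorch Lovász implementation

def aux_seg_loss(mask_logits, mask_targets, w_bce=0.9, w_lovasz=0.1):
    bce = F.binary_cross_entropy_with_logits(mask_logits, mask_targets)
    lov = lovasz_hinge(mask_logits, mask_targets)
    return w_bce * bce + w_lovasz * lov
</code></pre>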
<p>study level part source code used in this competition has been uploaded to the following github:<br>
<a href="https://github.com/ChenYingpeng/pl-siim-covid19-detection" target="_blank">https://github.com/ChenYingpeng/pl-siim-covid19-detection</a></p>
<ul>
<li><p>What didn't work<br>
Segmentation channel images as in the RANZCR competition<br>
Using more TTA (such as scale=1.25, vertical flip, etc.)<br>
Training 2 classes (none and opacity) at study level.<br>
Training 2 classes (none and opacity) at image level (using the entire image as the box for none).</p>
<table>
<thead>
<tr>
<th>arch</th>
<th>cv (5-folds mAP*(4/6))</th>
<th>lb</th>
</tr>
</thead>
<tbody>
<tr>
<td>effnet-b5</td>
<td>0.387736</td>
<td>0.535</td>
</tr>
<tr>
<td>effnet-b5-pseudo</td>
<td>0.3925984</td>
<td>---</td>
</tr>
<tr>
<td>effnet-b7</td>
<td>0.3821176</td>
<td>0.530</td>
</tr>
<tr>
<td>effnetv2-m</td>
<td>0.3869362</td>
<td>0.532</td>
</tr>
<tr>
<td>swim-transformer</td>
<td>0.3815662</td>
<td>---</td>
</tr>
</tbody>
</table></li>
</ul>
<p><strong><em>Image Level</em></strong></p>
<ul>
<li>YOLOv5 <br>
yolotrs-384, yolotrl-384,yolov5x-384, yolov5x-512, yolov5x-640.<br>
I only trained yolotrs-384 and yolotrs-384+pseudo (with the test dataset) for the image-level part.<br>
yolov5s + transformer <a href="https://www.gitmemory.com/issue/ultralytics/yolov5/2329/808852867" target="_blank">link</a><br>
detection mAP@0.5 for the opacity class.</li>
</ul>
<table>
<thead>
<tr>
<th></th>
<th>yolotrs-384</th>
<th>yolotrs-384-pseudo</th>
</tr>
</thead>
<tbody>
<tr>
<td>mAP@0.5 TTA</td>
<td>0.567</td>
<td>0.576</td>
</tr>
<tr>
<td>LB</td>
<td>0.092</td>
<td>0.095</td>
</tr>
</tbody>
</table>
<p>For more details, please see <a href="https://www.kaggle.com/c/siim-covid19-detection/discussion/264287" target="_blank">https://www.kaggle.com/c/siim-covid19-detection/discussion/264287</a></p>
<p><strong><em>Final Submission</em></strong><br>
Final Submission Notebook: <a href="https://www.kaggle.com/yingpengchen/siim-covid19-detection-inference-fast?scriptVersionId=70717292" target="_blank">https://www.kaggle.com/yingpengchen/siim-covid19-detection-inference-fast?scriptVersionId=70717292</a></p> |
SIIM-FISABIO-RSNA COVID-19 Detection | 12th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-FISABIO-RSNA COVID-19 Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Fisrst of all, many thanks to Kaggle and the hosts for hosting such an interesting competition, and congratulations to all the winners. </p>
<p>I'm happy to have won a solo gold medal in this wonderful competition. I am grateful to Kaggler for teaming up with me so far. I have learned so much from them that I was able to win this gold medal. I also appreciate the Kaggle community. There is always a lot of knowledge being shared in Discussion and Code, and the top teams share their great solutions after the competition. Without it, I wouldn't have been able to grow as much as I am now. Thank you.</p>
<h1>Overview</h1>
<p><img src="https://user-images.githubusercontent.com/37664066/129072710-1463016b-4ca6-410c-a38b-381caaf1838b.jpg" alt="classification_detection_model (1)"></p>
<p>Details are as follows.</p>
<h1>Study level and ‘none’</h1>
<p>In the first month, I had a very hard time getting any good CV score. After nearly 100 experiments, I found that data augmentation was not suitable, and when I changed the augmentation to the one shown by <a href="https://www.kaggle.com/haqishen" target="_blank">@haqishen</a> in the RANZCR solution, the CV increased dramatically.<br>
<a href="https://www.kaggle.com/haqishen/ranzcr-1st-place-soluiton-cls-model-small-ver" target="_blank">https://www.kaggle.com/haqishen/ranzcr-1st-place-soluiton-cls-model-small-ver</a></p>
<p>CV was always better with 5-class classification including ‘none’ than with 4-class + ‘none’ separately. It was better to use sigmoid + bce than softmax + ce.</p>
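<p>A minimal sketch of that 5-class multi-label setup (sigmoid + BCE over negative/typical/indeterminate/atypical/none; the shapes are illustrative):</p>
<pre><code>import torch
import torch.nn as nn

num_classes = 5                            # negative, typical, indeterminate, atypical, none
logits = torch.randn(8, num_classes)       # model output for a batch of 8 images
targets = torch.zeros(8, num_classes)      # multi-hot labels, "none" set per image
targets[0, 1] = 1.0                        # e.g. a "typical" study

criterion = nn.BCEWithLogitsLoss()         # sigmoid + BCE, one output per class
loss = criterion(logits, targets)
probs = torch.sigmoid(logits)              # independent per-class probabilities
</code></pre>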
<h3>Model architecture</h3>
<p><img src="https://user-images.githubusercontent.com/37664066/129072712-fd132e44-aa4c-4f96-a4c1-ffabdec82619.jpg"> <img src="https://user-images.githubusercontent.com/37664066/129072715-21fe7031-6466-42b8-97b8-30cc0e6f08d2.jpg"></p>
<h3>Other tips;</h3>
<ul>
<li>Five fold cross validation split by Patient</li>
<li>model: Efficientnet v2 m, l and swin transformer base(384)</li>
<li>optimizer: AdamW</li>
<li>scheduler: Cosine Annealing (lr 1e-4 -> 1e-5)</li>
<li>epoch: 20 ~ 30</li>
<li>Additional segmentation task, bce + symmetric lovasz (1 : 1)</li>
<li>Aux loss : Cls loss = 1 : 1 (Aux loss is about twice as large as Cls loss)</li>
<li>Pseudo labeling (bimcv+, record)</li>
<li>TTA: HFlip</li>
</ul>
<p>I use the aux loss that <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> shared in the discussion.<br>
<a href="https://www.kaggle.com/c/siim-covid19-detection/discussion/240233" target="_blank">https://www.kaggle.com/c/siim-covid19-detection/discussion/240233</a></p>
<p>Best single model of study level is EffnetV2L ATfL6, CV 0.4248 / LB 0.463 / PB 0.430 (LB/PB with ‘none 1 0 0 1 1’ in Image level).</p>
<p>Swin transformer scored a little lower, but contributed to the ensemble.</p>
<h1>Image level</h1>
<p>In addition to Yolo, I used several mmdet models as shown in the overview picture.<br>
Best model is Cascade RCNN Res2Net101, CV 0.0897 / LB 0.094 / PB 0.096.</p>
<p>WBF (th=0.5) ensemble boosted score to LB 0.098 / PB 0.101.</p>
<p>Detection confidence was calibrated as follow;</p>
<pre><code>det_conf = det_conf * (1 - none)**0.5
</code></pre>
<h2>Inference code</h2>
<p><a href="https://www.kaggle.com/inoueu1/siim-sub145-lp-c2-effv2m-l-p-swin-mmdet?scriptVersionId=70843218" target="_blank">https://www.kaggle.com/inoueu1/siim-sub145-lp-c2-effv2m-l-p-swin-mmdet?scriptVersionId=70843218</a></p>
<p>Thank you!!<br>
I used only the LB to see the effect of the confidence calibration. I used it for the final sub because it was said to boost scores in the previous VinBigData competition, and because the LB was slightly better each time. However, the effect is marginal, at most about 0.0005, I think.</p>
SIIM-FISABIO-RSNA COVID-19 Detection | 1st place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-FISABIO-RSNA COVID-19 Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://github.com/dungnb1333/SIIM-COVID19-Detection/blob/main/images/flowchart.png" target="_blank">https://github.com/dungnb1333/SIIM-COVID19-Detection/blob/main/images/flowchart.png</a></p>
<p>just my favorite song</p>
<p>Hi <a href="https://www.kaggle.com/awsaf49" target="_blank">@awsaf49</a> , you can hand-label on the training dataset. You can do anything with train data and don't have to share it publicly.<br>
The top-1 solution of the RSNA STR Pulmonary Embolism Detection challenge used a lung detector based on hand-labeling the trainset. In the birdcall challenge, the winner also relabeled the trainset because of noise in the dataset.</p>
<p><a href="https://lucid.app" target="_blank">https://lucid.app</a></p>
<p>I choose the weights 0.3eb5+0.2eb6+0.2eb7+0.3sr152 based on the highest public LB score</p>
<p>Thanks for the question, I use labelImg tool <a href="https://github.com/tzutalin/labelImg" target="_blank">https://github.com/tzutalin/labelImg</a></p>
<p>Thank you (・ω<)☆</p>
<p>yes, 0.635 is my best private LB</p>
<ol>
<li>Yes, please look at my source code</li>
<li>EfficientDet and yolov5 source code are easy to use but difficult to modify, so I can't pretrain backbone of effdet and yolov5 with chexpert or chest14</li>
<li><a href="https://github.com/ultralytics/yolov5/releases" target="_blank">https://github.com/ultralytics/yolov5/releases</a></li>
<li>please look at my kernel, 'none' in image_level = neg prediction of classification model</li>
</ol>
<p>yes, without multiplying by 4/6 and 1/6 respectively</p>
<p>try <br>
pip install git+<a href="https://github.com/bes-dev/mean_average_precision.git@930df3618c924b694292cc125114bad7c7f3097e" target="_blank">https://github.com/bes-dev/mean_average_precision.git@930df3618c924b694292cc125114bad7c7f3097e</a></p>
<p><a href="https://www.kaggle.com/namgalielei" target="_blank">@namgalielei</a> <br>
Don't use chexpert or chest14, load imagenet pretrain<br>
train SeR152-Unet with rsna pneumonia<br>
epoch, lr, train_iou, train_loss, val_iou, val_loss<br>
0, 0.00010, 0.23082, 0.54119, 0.26705, 0.51223<br>
1, 0.00010, 0.26977, 0.49555, 0.29824, 0.48472<br>
2, 0.00009, 0.28105, 0.48332, 0.30401, 0.47968<br>
…<br>
Or train SeR152-Unet with siim covid<br>
epoch, lr, train_loss, train_iou, ema_val_iou, val_map, ema_val_map<br>
0, 0.00010, 0.47998, 0.35340, 0.43048, 0.36499, 0.35439<br>
1, 0.00010, 0.40955, 0.43500, 0.46247, 0.38370, 0.38243<br>
…<br>
I wonder how you get dice coef = 0 ?🤔😂💯</p>
<p>All parameters have been optimized for each step, don't arbitrarily skip any step in my source code.<br>
You should use hengck's idea and code <a href="https://www.kaggle.com/c/siim-covid19-detection/discussion/240233" target="_blank">https://www.kaggle.com/c/siim-covid19-detection/discussion/240233</a></p>
<p>First of all, I would like to thank Kaggle, SIIM, FISABIO, RSNA for this great competition, as well has everyone here who has shared great discussions and notebooks. Congrats to all the winners!</p>
<p><strong>Solution summary:</strong><br>
<a href="https://drive.google.com/file/d/1TaFsZKXfJVwVhognM3lNIxU7FpVCUltt/view?usp=sharing" target="_blank">https://drive.google.com/file/d/1TaFsZKXfJVwVhognM3lNIxU7FpVCUltt/view?usp=sharing</a></p>
<p><strong>Cross-validation</strong>: split trainset to 5 folds by patientid</p>
<p><strong>Multi-task classification + segmentation</strong></p>
<ul>
<li>Pretrain encoder with chexpert + chest14 dataset</li>
<li>Train classification(normal/pneumonia) + segmentation(opacity boxes to mask) with rsna pneumonia dataset </li>
<li>Train classification(4 classes: negative, typical, indeterminate, atypical) + segmentation(boxes to mask) with siim covid dataset, load weight from rsna pneumonia checkpoint</li>
<li>Predict siim-covid testset, train model with siim covid trainset + pseudo testset, load weight from previous checkpoint</li>
<li>Augmentation: RandomResizedCrop, ShiftScaleRotate, HorizontalFlip, VerticalFlip, Blur, CLAHE, IAASharpen, IAAEmboss, RandomBrightnessContrast, Cutout</li>
<li>Exponential moving average with decay 0.997 [increase LB by +0.001~0.002]</li>
<li>Loss: aux loss = 0.6*weighted bce + 0.4*dice, weight of weighted bce: 0.2*negative + 0.2*typical + 0.3*indeterminate + 0.3*atypical to focus on the 2 weak classes (indeterminate and atypical); a sketch of this loss follows the list below.<br>
[increases both CV and LB ~0.002]</li>
<li>Optimizer: Adam with init_learning_rate 0.0001</li>
<li>Scheduler: CosineAnnealingLR</li>
<li>Test time augmentation: 8TTA (original image, center-crop 80%)->resize->(None, horizontal flip, vertical flip, horizontal+vertical flip), 8TTA increase both CV and public LB by +0.003~0.004. In final submission, I use lung detector model instead of center-crop 80%</li>
<li>Final submission includes 4 models with diversity of encoders, decoders, input size: <br>
SeResnet152-Unet 512, EfficientnetB5-DeeplabV3+ 512, EfficientnetB6-Linknet 448, EfficientnetB7-Unet++ 512<br>
Weighted ensembling: 0.3[eb5/deeplabv3] + 0.2[eb6/linknet] + 0.2[eb7/unet++] + 0.3[sr152/unet]</li>
</ul>
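<p>A hedged sketch of the class-weighted BCE + dice loss mentioned in the list above. Whether the class weights are applied to the classification logits or to per-class mask channels is not fully specified, so treat this as one plausible reading, with a standard soft-dice on the auxiliary masks:</p>
<pre><code>import torch
import torch.nn.functional as F

CLASS_W = torch.tensor([0.2, 0.2, 0.3, 0.3])   # negative, typical, indeterminate, atypical

def soft_dice_loss(mask_logits, mask_targets, eps=1.0):
    probs = torch.sigmoid(mask_logits)
    inter = (probs * mask_targets).sum(dim=(1, 2, 3))
    union = probs.sum(dim=(1, 2, 3)) + mask_targets.sum(dim=(1, 2, 3))
    return (1.0 - (2.0 * inter + eps) / (union + eps)).mean()

def aux_loss(cls_logits, cls_targets, mask_logits, mask_targets):
    bce = F.binary_cross_entropy_with_logits(cls_logits, cls_targets, reduction="none")
    weighted_bce = (bce * CLASS_W.to(bce.device)).sum(dim=1).mean()
    return 0.6 * weighted_bce + 0.4 * soft_dice_loss(mask_logits, mask_targets)
</code></pre>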
<p><strong>Lung Detector-YoloV5</strong><br>
I annotated the train data(6334 images) using <a href="https://github.com/tzutalin/labelImg" target="_blank">https://github.com/tzutalin/labelImg</a> tool and built a lung localizer with the bboxes. I noticed that increasing input image size improves the modeling performance and lung detector helps the model to reduce background noise.<br>
Single fold lung detector increase LB by +0.003 compared with simple center-crop 80%</p>
<p><strong>Detection</strong><br>
Ensemble of 4 models (Yolov5X6 input size 768 + EfficientDet D7 input size 768 + FasterRNN FPN resnet101 input size 1024 + FasterRNN FPN resnet200 input size 768) using weighted boxes fusion (IoU 0.6)</p>
<ul>
<li>Pretrain backbone of FasterRCNN FPN with chexpert + chest14</li>
<li>Train models with rsna pneumonia boxes</li>
<li>Train models with siim covid trainset, load weight from rsna checkpoint</li>
<li>Predict siim covid testset + external dataset (padchest, pneumothorax + vin)<br>
Keep images that meet the conditions: negative prediction < 0.3 and maximum of (typical, indeterminate, atypical) prediction > 0.7. Then choose the 2 boxes with the highest confidence as pseudo labels for each image (a sketch of this filter follows the list below).</li>
<li>Train model with rsna pneumonia label + external pseudo label (siim covid testset + padchest, pneumothorax + vin)</li>
<li>Train model with siim covid trainset, load weight from checkpoint of pseudo labeling stage</li>
<li>Augmentation: Scale, RandomResizedCrop, Rotate(maximum 10 degrees), HorizontalFlip, VerticalFlip, Blur, CLAHE, IAASharpen, IAAEmboss, RandomBrightnessContrast, Cutout, Mosaic, Mixup</li>
<li>Loss: FocalLoss</li>
<li>Test time augmentation: 2TTA for EfficientDet (original + hflip), 3TTA for Yolov5(original, scale 0.83 + hflip, scale 0.67), 3TTA for FasterRCNN (original, hflip, vflip)</li>
</ul>
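<p>A minimal sketch of the pseudo-label filter described above (thresholds from the text; the prediction and box formats are illustrative):</p>
<pre><code>def select_pseudo_labels(cls_probs, boxes):
    # cls_probs: dict with keys "negative", "typical", "indeterminate", "atypical"
    # boxes: list of (confidence, [x1, y1, x2, y2]) opacity predictions for one image
    positive = max(cls_probs["typical"],
                   cls_probs["indeterminate"],
                   cls_probs["atypical"])
    if cls_probs["negative"] < 0.3 and positive > 0.7:
        # keep the two most confident boxes as pseudo ground truth
        return sorted(boxes, key=lambda b: b[0], reverse=True)[:2]
    return None   # image not used for detection pseudo-labeling
</code></pre>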
<p><strong>Performance of models</strong><br>
classification mAP@0.5 for the 4 classes: negative, typical, indeterminate, atypical</p>
<table>
<thead>
<tr>
<th></th>
<th>SeR152-Unet</th>
<th>EB5-Deeplab</th>
<th>EB6-Linknet</th>
<th>EB7-Unet++</th>
<th>Ensemble</th>
</tr>
</thead>
<tbody>
<tr>
<td>w/o / 8TTA</td>
<td>0.575/0.584</td>
<td>0.583/0.592</td>
<td>0.580/0.587</td>
<td>0.589/0.595</td>
<td>0.595/0.598</td>
</tr>
</tbody>
</table>
<p>detection opacity class </p>
<table>
<thead>
<tr>
<th></th>
<th>YoloV5X6 768</th>
<th>EffdetD7 768</th>
<th>F-RCNN R200 768</th>
<th>F-RCNN R101 1024</th>
</tr>
</thead>
<tbody>
<tr>
<td>mAP@0.5 TTA</td>
<td>0.580</td>
<td>0.594</td>
<td>0.592</td>
<td>0.596</td>
</tr>
</tbody>
</table>
<p>Public LB/Private LB: 0.658/0.635</p>
<p><strong>What did not work</strong></p>
<ul>
<li>Multi-task classification + detection </li>
<li>Stack multi classification models using cnn or lgbm</li>
<li>Mixup + cutmix for classification models </li>
<li>Bimcv + ricord dataset: most of the images in bimcv and ricord duplicate with siim covid trainset and testset. To avoid data-leak when training, I didn't use them.</li>
<li>Increase the input size of classification model to 1024, the score of both CV and public LB is almost unchanged from 512</li>
</ul>
<p>The complete code used in this competition has been uploaded to the following github: <br>
<a href="https://github.com/dungnb1333/SIIM-COVID19-Detection" target="_blank">https://github.com/dungnb1333/SIIM-COVID19-Detection</a></p>
<p>Inference notebook: <a href="https://www.kaggle.com/nguyenbadung/siim-covid19-2021?scriptVersionId=69474844" target="_blank">https://www.kaggle.com/nguyenbadung/siim-covid19-2021?scriptVersionId=69474844</a></p>
<p>DEMO Notebook to visualize the output of models:<br>
<a href="https://github.com/dungnb1333/SIIM-COVID19-Detection/blob/main/src/demo_notebook/demo.ipynb" target="_blank">https://github.com/dungnb1333/SIIM-COVID19-Detection/blob/main/src/demo_notebook/demo.ipynb</a></p>
<p>I think they are focusing on the SETI competition instead of this one, otherwise I would not have won. For example, Guanshuo Xu needed just 2 weeks for 10th place instead of 3 months of continuous experiments like me. I wish I could be as professional as they are.</p>
<p>Thank you!</p>
<p>Thank you 😄!</p>
<p>Hi <a href="https://www.kaggle.com/vdefont" target="_blank">@vdefont</a> </p>
<ol>
<li>to optimize the inference time. In my experience ensembling of multiple models is much better than single model and multi test time augmentation </li>
<li><a href="https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage" target="_blank">https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage</a></li>
<li>You use the prediction value on the test set as a soft-label ground-truth</li>
<li>LabelImg <a href="https://github.com/tzutalin/labelImg" target="_blank">https://github.com/tzutalin/labelImg</a></li>
</ol>
<p>My idea is that chexpert+chest14 help the models better detect negative cases. The LB increased by about 0.003-0.004 because the negative score accounts for 2/6 of the total score.</p>
<p>Thank you!</p>
<p>It only changes the number of my leaderboard position ^^</p>
<p>Thank you 😄!</p>
<p>Thank you!</p> |
SIIM-FISABIO-RSNA COVID-19 Detection | 26th Place Solution : Brief Writeup | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-FISABIO-RSNA COVID-19 Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi <a href="https://www.kaggle.com/mrinath" target="_blank">@mrinath</a> you can find the code here <a href="https://github.com/TheoViel/kaggle_siim_covid/blob/main/src/data/extraction.py" target="_blank">https://github.com/TheoViel/kaggle_siim_covid/blob/main/src/data/extraction.py</a></p>
<p>Please let me know if you need anything else</p>
<p>Hi all, <br>
It was a very interesting and challenging competition, congratulations to all the winners and a big thank you to the organizers for the competition !</p>
<h2>Introduction</h2>
<p>We started this competition quite late, it was my first Object detection competition hence a great opportunity to learn. Teaming up with <a href="https://www.kaggle.com/pheadrus" target="_blank">@pheadrus</a> , <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a> ,and <a href="https://www.kaggle.com/onodera" target="_blank">@onodera</a> was a great experience for me. </p>
<h2>Overview</h2>
<p>Our final solution is fairly simple, even though we tried a lot of different things, without success, that other top teams managed to make work. In this write-up we'll focus a bit more on what differentiates our solution from others, as most of the rest is really similar.</p>
<h2>Solution</h2>
<h3>4-class Classification</h3>
<p>Our architecture is similar to heng’s idea, as it includes an auxiliary segmentation head on the two last layers. We trained a bunch of models (and image sizes) for diversity, in the end the following ones were used in the final ensembles, with horizontal flip tta :</p>
<ul>
<li><a href="https://www.kaggle.com/pheadrus" target="_blank">@pheadrus</a> : v2m (512), b5 (512), b5 (640), v2m (768), v2m + pl (512)</li>
<li><a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a> : v2s (512), v2m (512)</li>
</ul>
<p>While Pheadrus used the standard preprocessing used by most in the competition,
Theo’s model used a different preprocessing technique (automated padding removal & clipping to the 1st and 99th percentiles before normalizing), as well as an auxiliary binary classification head for the none class.</p>
<p><a href="https://ibb.co/m56LM4n"><img src="https://i.ibb.co/CKhkTQj/proc.png" alt="proc"></a><br>
Theo’s preprocessing : windowing & padding removal</p>
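<p>A minimal sketch of that kind of windowing step (percentile clipping and rescaling; the exact padding-removal logic lives in the repo linked earlier in this thread and is more involved):</p>
<pre><code>import numpy as np

def window_image(img):
    # img: raw grayscale pixel array extracted from the DICOM
    lo, hi = np.percentile(img, [1, 99])       # clip to the 1st / 99th percentiles
    img = np.clip(img, lo, hi)
    img = (img - lo) / max(hi - lo, 1e-6)      # rescale to [0, 1]
    return (img * 255).astype(np.uint8)
</code></pre>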
<p><a href="https://ibb.co/FVQjjNj"><img src="https://i.ibb.co/bJ488q8/model.png" alt="model"></a><br>
Classification models overview. Phaedrus’ models don’t use the opacity head.</p>
<h3>2-class classification</h3>
<p>In addition to Theo’s model, I trained a b4, b5 and b6. They helped CV a bit.</p>
<h3>Detection</h3>
<p><a href="https://www.kaggle.com/onodonera" target="_blank">@onodonera</a> built a bunch of yolov5 (384) whereas <a href="https://www.kaggle.com/phaedrus" target="_blank">@phaedrus</a> used yolov5 (768). One of our two subs used the former and the other one the latter, along with some other modifications. We probably should’ve spent more time training a bigger variety of detection models. </p>
<h3>Final sub</h3>
<p>Our 0.619 sub uses <a href="https://www.kaggle.com/pheadrus" target="_blank">@pheadrus</a> models for detection, <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a> model for both classification tasks and <a href="https://www.kaggle.com/pheadrus" target="_blank">@pheadrus</a> models for the study target. <br>
Our other sub - which we thought would perform better - also included my models for the binary classification and uses <a href="https://www.kaggle.com/onodera" target="_blank">@onodera</a>'s detection model.</p>
<p>We used power ensembling for classification models (p=2), and calibrated predictions of the detection model by multiplying them by the binary model prediction^0.1. This gave a <0.001 boost though.</p>
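<p>A sketch of those two tricks, assuming the "binary model prediction" means the predicted probability that the image contains opacity; since only the ranking matters for AP, whether you take a p-th root after power ensembling is a matter of taste:</p>
<pre><code>import numpy as np

def power_ensemble(prob_list, p=2):
    # average the p-th powers of each model's probabilities
    return np.mean([np.power(pr, p) for pr in prob_list], axis=0)

def calibrate_boxes(box_confs, p_opacity):
    # damp detection confidences on images the binary model thinks contain no opacity
    return box_confs * (p_opacity ** 0.1)
</code></pre>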
<h3>What didn’t work for us</h3>
<ul>
<li>Pretraining on the CovidX data for classification</li>
<li>Pseudo-labelling the public test set for classification</li>
<li>Using a lung detection model for preprocessing helped some of the models but not the overall blend</li>
<li>Separate classifiers for the atypical and indeterminate classes</li>
<li>We built and pseudo-labeled a dataset of 25k (covid positive and negative) images from the BimCV, Ricord and CovidX datasets, but couldn’t managed to get any improvement from that either</li>
</ul>
<p><em>Thanks for reading !</em></p>
<p>Thanks chris , congrats on a strong finish once again , looking forward to read your solution 😉</p> |
SIIM-FISABIO-RSNA COVID-19 Detection | 27th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-FISABIO-RSNA COVID-19 Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you <a href="https://www.kaggle.com/xiaojin712" target="_blank">@xiaojin712</a> :) It was a pleasure teaming up with you! I learned a lot from you. I hope next time we get shaken up and not down 😄</p>
<p>I would like to thank kaggle and the hosts SIIM-FISABIO-RSNA for hosting such a special competition in this pandemic.<br>
Also, big thanks to my teammates <a href="https://www.kaggle.com/nicohrubec" target="_blank">@nicohrubec</a> <a href="https://www.kaggle.com/xiaojin712" target="_blank">@xiaojin712</a> and <a href="https://www.kaggle.com/nickuzmenkov" target="_blank">@nickuzmenkov</a> that have been working hard everyday for the past month trying to keep up with the final race and maintain our position in the top of the leaderboard. </p>
<h3>Final scores</h3>
<table>
<thead>
<tr>
<th><strong>Task</strong></th>
<th><strong>CV</strong></th>
<th><strong>public LB</strong></th>
<th><strong>private LB</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Study level</strong></td>
<td>0.398</td>
<td>0.464</td>
<td>0.428</td>
</tr>
<tr>
<td><strong>Image level</strong></td>
<td>0.264</td>
<td>0.282</td>
<td>0.279</td>
</tr>
<tr>
<td><strong>Total</strong></td>
<td>0.662</td>
<td>0.644</td>
<td>0.617</td>
</tr>
</tbody>
</table>
<h2>1- Pretraining</h2>
<p><img src="https://i.ibb.co/7SvCkkC/4.png" alt=""><br>
We started the competition pretraining an efficientnet B7 on NIH dataset, it gave a +0.002 boost to both CV and LB.<br>
The OOFs of this model were saved and used to apply self distillation to all our models.</p>
<ul>
<li>Pretrain on <a href="https://www.kaggle.com/nih-chest-xrays/data" target="_blank">NIH Chest X-Ray Dataset</a>.</li>
<li><a href="https://www.kaggle.com/nickuzmenkov/29-place-solution-siim-covid-tf-study-training" target="_blank">EfficientNet B7 training notebook</a>.</li>
</ul>
<h2>2- Study level classification</h2>
<p><img src="https://i.ibb.co/DVSwVSc/SIIM-COVID-writeup.png" alt=""></p>
<ul>
<li><p><strong>Self distillation:</strong> <br>
We started using self distillation by blending the <code>labels*0.7 + B7 OOFs*0.3</code>. The idea is similar to label smoothing: introduce some noise into the labels instead of feeding a one-hot encoded representation to the model. We kept increasing the B7 OOFs' coefficient and both CV and LB kept increasing until we reached the final <code>labels*0.15 + B7 OOFs*0.85</code>, with a total boost of <strong>+0.007</strong> on LB (a sketch follows this list).</p></li>
<li><p><strong>Pseudolabeling:</strong><br>
Nothing special about how we do this. We just take the publicly available part of the test set and label it with our predictions.<br>
We then add the test data as additional data to all the train folds with the predictions as soft labels. This was one of the first improvements we made to our pytorch pipeline. At the time it improved CV by <strong>+0.007</strong> and LB by <strong>+0.012</strong>.</p></li>
<li><p><strong>Segmentation as auxiliary Head:</strong><br>
The aux head code we essentially copied from Heng. The aux head is attached after the 4th convolutional block. The loss is calculated as <code>0.7 LOVASZ + 0.3 BCE</code>.<br>
Interestingly, we found that AUX did indeed improve the performance, but only for models without pseudolabeling. We did not investigate the reason for this any further. One potential reason could be that the predicted masks for the test set are too noisy to be useful.<br>
EffNet V2-M + aux head didn't work as well as it did for other teams; it didn't outperform our best single model trained with pseudolabels. Including it in the blend gave a boost to the ensemble.</p></li>
<li><p><strong>Finetuning:</strong><br>
In the beginning we were working with V2L and image size 512. At some point we switched to image size 384 with Efficientnet V2M.<br>
Additionally, we tuned the learning rate. All these changes together resulted in an increase of CV and LB of about 0.01.</p></li>
</ul>
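<p>A minimal sketch of the soft-label blending (self distillation) step referenced above; <code>oof_b7</code> stands for the saved out-of-fold predictions of the pretrained B7:</p>
<pre><code>import numpy as np

def make_soft_labels(one_hot_labels, oof_b7, alpha=0.85):
    # alpha is the weight given to the teacher's out-of-fold predictions;
    # we ended up at labels*0.15 + OOFs*0.85
    return (1.0 - alpha) * one_hot_labels + alpha * oof_b7

# the blended targets are then used with BCE instead of the hard labels
soft_targets = make_soft_labels(np.eye(5)[[1, 0, 3]], np.random.rand(3, 5))
</code></pre>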
<h2>3- Opacity/None classification</h2>
<p><img src="https://i.ibb.co/5TBGf3P/3.png" alt=""><br>
We trained the 2 classes (Opacity/None) with the same setup as the study-level models. We got a +0.004 boost with respect to the public 2-class classifier notebook.</p>
<h2>4- Image level detection</h2>
<p><img src="https://i.ibb.co/NnFrPPS/2.png" alt=""></p>
<ul>
<li>Yolov5x was trained with image size 512x512.</li>
<li>Yolov5x6 and YoloV5L6 were trained with image size 1024x1024.</li>
<li>Pseudolabeling improved the score a little bit, not as much as with study level models.</li>
<li>None class was blended with the negative for pneumonia class from the study level ensemble.</li>
<li>Ensemble (WBF) of 3 YoloV5 models + None class gave a final score of 0.282 on LB.</li>
</ul>
<h2>What worked for us:</h2>
<ul>
<li>Heavy augmentations (even very destructive ones).</li>
<li>Self distillation.</li>
<li>Pseudolabelling.</li>
</ul>
<h2>What didn't work for us:</h2>
<ul>
<li>Larger image sizes.</li>
<li>Knowledge distillation: <br>
<code>Loss = BCE(labels) + MSE(unpooled features).</code></li>
<li>Knowledge distillation + Aux loss: <br>
<code>Loss = BCE(labels) + MSE(unpooled features) + Lovasz(masks).</code></li>
<li>TTA (For study level).</li>
<li>EfficientNetV2 in TensorFlow.</li>
<li>ResNet models (seResNet50, seResNet-101, ResNet200D…)</li>
<li>Crossentropy with label smoothing.</li>
</ul>
<h2>Final thoughts</h2>
<p>The study level models ensemble gave us a +0.004 boost on public LB <strong>0.460 -> 0.464</strong>, which helped us reach the final score of 0.644. It also boosted our CV from <strong>0.393 -> 0.398</strong>. However, it didn't work on private LB. The ensemble scored lower than our best single model on private LB, which is the reason behind our shakedown.</p> |
SIIM-FISABIO-RSNA COVID-19 Detection | 2nd place summary | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-FISABIO-RSNA COVID-19 Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi, we didn't check the private score for those parts. Most of our submissions were for the public test dataset only.</p>
<p>Congrats to you and team too <a href="https://www.kaggle.com/awsaf49" target="_blank">@awsaf49</a> <br>
No, we don't qualify for student prize.</p>
<p>Yes, we use 5 fold CV and we calculate mAP once for the whole oof.</p>
<p>yes. I use my modified wbf from past competition. </p>
<p>yes. none includes some none-negative images (none-negative images without boxes). we use has_box/not_has_box as 5th class</p>
<ul>
<li>the quality of pretrained weights affects the SIIM 4-class task. I got 0.36 AUC on NIH 14-class; another model with CV 0.29 had significantly lower performance on SIIM.</li>
<li>Nice idea. I think it should work too.</li>
</ul>
<p>Thanks to SIIM, FISABIO, RSNA and Kaggle for hosting this interesting competition. I am grateful to my teammates <a href="https://www.kaggle.com/steamedsheep" target="_blank">@steamedsheep</a> <a href="https://www.kaggle.com/haqishen" target="_blank">@haqishen</a> <a href="https://www.kaggle.com/dvtoan7997" target="_blank">@dvtoan7997</a> and <a href="https://www.kaggle.com/underwearfitting" target="_blank">@underwearfitting</a> all did a very good job and finally we finished in 2nd place. </p>
<p><em>The following is our approach to this competition.</em></p>
<p><strong>Study level:</strong> we mainly work on 2 pipelines and then ensemble.</p>
<p><strong><em>Pipeline 1:</em></strong> NFnet, Cait: eca_nfnet_l1, eca_nfnet_l2, dm_nfnet_f2, dm_nfnet_f3, cait_xs24_384.</p>
<ul>
<li>Pretraining on NIH dataset: Similar to other team, pre-training improves our CV and LB</li>
<li>5 class classification: we added none as a 5th class.</li>
<li>We didn’t use aux heads in this pipeline since it does not improve our CV score.</li>
<li>To add diversity we use different preprocessing methods (histogram equalization, ben’s preprocessing, and adding a lung segmentation channel) and different model architectures: </li>
<li>No external data</li>
<li>All model are trained at 384x384</li>
</ul>
<p><strong><em>Pipeline 2:</em></strong> EfficientnetV2m </p>
<ul>
<li>Use segmentation aux head</li>
<li>To add diversity we use 5 different augmentation, 4 models trained with external data and BCE loss, 5 models training with CE loss.</li>
<li>Model are trained at 512x512.</li>
</ul>
<p><strong>Image level</strong><br>
<strong><em>None:</em></strong> We ensemble none probability from 3 sources.</p>
<ul>
<li>2 class classification model to detect none</li>
<li>None probability from pipeline 1.</li>
<li>Negative probability from pipeline 2.</li>
</ul>
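<p>A sketch of how the three 'none' sources above can be combined; a simple average is shown, and the equal weights are an assumption (the actual blend weights were tuned on CV):</p>
<pre><code>import numpy as np

def none_probability(p_binary_none, p_pipeline1_none, p_pipeline2_negative):
    # 1) dedicated 2-class none classifier, 2) "none" output of pipeline 1,
    # 3) "negative" output of pipeline 2 used as a proxy for "none"
    sources = np.stack([p_binary_none, p_pipeline1_none, p_pipeline2_negative])
    return sources.mean(axis=0)
</code></pre>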
<p><strong><em>Opacity:</em></strong></p>
<ul>
<li>Yolov5: we use 5 different backbone (resnet52, resnet101, yolov5m, yolov5x, eca_nfnet_l0). All models are train at 384x384</li>
<li>YoloX: We use 2 backbone (yolox-m and yolox-d). Model are trained at 384x384</li>
<li>EffDet D5: trained at 512x512</li>
</ul>
<h1>Result</h1>
<p><strong>Study Level</strong></p>
<table>
<thead>
<tr>
<th>Pipeline</th>
<th>CV</th>
<th>Public LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>Pipeline 1</td>
<td>0.596</td>
<td>0.406</td>
</tr>
<tr>
<td>Pipeline 2</td>
<td>0.598</td>
<td>0.407</td>
</tr>
<tr>
<td>Ensemble</td>
<td>0.604</td>
<td>0.410</td>
</tr>
</tbody>
</table>
<p><strong>None</strong></p>
<table>
<thead>
<tr>
<th>Pipeline</th>
<th>CV</th>
<th>Public LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>Ensemble</td>
<td>0.822</td>
<td>0.134</td>
</tr>
</tbody>
</table>
<p><strong>Opacity</strong></p>
<table>
<thead>
<tr>
<th>Pipeline</th>
<th>CV</th>
<th>Public LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>yolov5</td>
<td>0.56</td>
<td>0.098</td>
</tr>
<tr>
<td>yolox</td>
<td>0.55</td>
<td>-</td>
</tr>
<tr>
<td>effdetD5</td>
<td>0.53</td>
<td>0.093</td>
</tr>
<tr>
<td>Ensemble</td>
<td>0.59</td>
<td>0.100</td>
</tr>
</tbody>
</table>
<p><strong>Final submission</strong></p>
<table>
<thead>
<tr>
<th>Pipeline</th>
<th>None CV</th>
<th>opacity CV</th>
<th>study level CV</th>
<th>CV</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>ensemble</td>
<td>0.82</td>
<td>0.59</td>
<td>0.604</td>
<td>0.636</td>
<td>0.645</td>
<td>0.634</td>
</tr>
</tbody>
</table>
<h1>Source code:</h1>
<ul>
<li>Training: <a href="https://github.com/nvnnghia/siim2021" target="_blank">https://github.com/nvnnghia/siim2021</a></li>
<li>Inference: <a href="https://www.kaggle.com/nvnnghia/siim2021-final-sub2" target="_blank">https://www.kaggle.com/nvnnghia/siim2021-final-sub2</a></li>
</ul>
<p>Thanks. <br>
we didn't apply pretraining to the detection models. We used COCO pretrained weights. </p>
<p>Thanks, congrats to you too on winning a solo 1st place for the second time.</p>
<p>Thanks <a href="https://www.kaggle.com/yujiariyasu" target="_blank">@yujiariyasu</a> . congrats to you too. See you in another competition.</p> |
SIIM-FISABIO-RSNA COVID-19 Detection | 34th solution EffnetV2 + yolov5 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-FISABIO-RSNA COVID-19 Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><blockquote>
<p>Hello, I am trying to run your notebook v2 covid-submit in kaggle but stuck at some missing file error which goes like this : FileNotFoundError: [Errno 2] No such file or directory: '/kaggle/input/covidmodels/Archive/classify-ep12/f0.pth'</p>
</blockquote>
<p>This is because the model checkpoint is hidden</p>
<blockquote>
<p>How can I get these files ? I found your discussion page and executed covid19-classify notebook (I thought this notebook will output the missing model files which I can download and upload), but here also I am hitting dead end because kaggle is closing the notebooks execution after 9 hrs automatically.</p>
</blockquote>
<p>This is because the Kaggle P100 is too slow to run EfficientNetV2-m:<br>
it will take 5 (folds) * 4 hours, i.e. more than 20 hours. </p>
<p>So when I trained it, I used a V100 GPU in Colab Pro (rented for $10 per month);<br>
it is about 5 times faster, so all 5 folds take only about 4 hours in total. </p>
<p>If you want a quick reproducibility check,<br>
you can use <code>VERSION 8</code> in <br>
<a href="https://www.kaggle.com/drzhuzhe/covid19-classify" target="_blank">https://www.kaggle.com/drzhuzhe/covid19-classify</a> <br>
This is a lighter version with an EfficientNetB3 backbone;<br>
it will take 2 hours on the Kaggle P100 GPU per single fold.</p>
<p>I used it for debugging.<br>
It will produce a result slightly worse than EfficientNetV2:</p>
<p>b3: (3.76 + 3.92 + 3.85 + 3.7 + 3.6)/5 avg Map 3.766 LB: 4.44<br>
v2m: (0.392 + 0.392 + 0.385 + 0.381 + 0.374) avg Map 0.384 LB score 0.452</p>
<blockquote>
<p>What I learned Most in this competition is how to fight against overfitting</p>
</blockquote>
<h2>Model:</h2>
<p>Efficientnetv2m + yolom tta LB 6.27 PB 6.13 <br>
Efficientnetv2m + yolox tta LB 6.29 PB 6.15</p>
<h2>Ensemble in the final submission</h2>
<p>I used efficientnetv2m + efficientnetb3 + yolom + yolox for diversity, but still got LB 6.29 PB 6.15</p>
<h2>Code:</h2>
<p>Classify model: <a href="https://www.kaggle.com/drzhuzhe/covid19-classify/" target="_blank">https://www.kaggle.com/drzhuzhe/covid19-classify/</a><br>
Detection model: <a href="https://www.kaggle.com/drzhuzhe/covid19-det" target="_blank">https://www.kaggle.com/drzhuzhe/covid19-det</a><br>
Emsemble(Version 17): <a href="https://www.kaggle.com/drzhuzhe/v2-covid-submit" target="_blank">https://www.kaggle.com/drzhuzhe/v2-covid-submit</a></p>
<h1>Summary:</h1>
<h2>Problem definition and analysis</h2>
<ol>
<li><p>In the last week of June I took a tour of this competition's discussion forum; I think the key in this competition is to make use of the multi-task data to get better performance.</p></li>
<li><p>So the goal I set for myself was to customize blocks so that the information in the object detection boxes helps classification, and vice versa.</p></li>
</ol>
<h2>RoadBlock</h2>
<ol>
<li><p>I reproduced Hengck's code very quickly.<br>
But frustratingly, whenever I changed this code even a bit, I got a very bad result.<br>
I tried attaching the AUX head to different blocks, multi-output heads, gradient accumulation.<br>
But due to my mistake of lacking data augmentation, I could not move a step for almost the whole month of July.</p></li>
<li><p>This was mostly because in the beginning I thought I must have made a mistake in using TIMM,<br>
so I carefully read the TIMM implementation of EfficientNetV2 and compared it to the original<br>
implementation, and went over the concepts and ideas of NAS. This was really beneficial.</p></li>
<li><p>At the end of July I finally overcame this, simply by carefully tuning the data augmentation a bit.</p></li>
</ol>
<h2>Solution and Further</h2>
<ol>
<li><p>My solution is detailed in the code commits.</p></li>
<li><p>The augmentations in classification and the hyperparameters in YOLOX are far from optimally tuned.</p></li>
<li><p>Label smoothing and mixup are still waiting to be added; maybe next competition I will try them fully.</p></li>
<li><p>Next time I shall try more unsupervised methods and different block architectures.</p></li>
</ol>
<h1>Thanks</h1>
<ol>
<li><p>When I met the roadblock, I searched over many resources and made 2 posts in the discussion forum. I really appreciate every reply and suggestion.</p></li>
<li><p>And I want to give thanks to everyone who gives advice and contributes to open source; without this I could hardly learn anything.</p></li>
</ol>
<p>Haha, I have been fired from a Chinese tech giant; I'm an independent developer now</p>
<p>Please send me your contact information through Kaggle's contact feature. Are you looking for a job?</p>
<p>effnet == efficientnet</p> |
SIIM-FISABIO-RSNA COVID-19 Detection | 3rd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-FISABIO-RSNA COVID-19 Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>adam & CosineAnnealingWarmRestarts.<br>
The schedulers were all pretty much the same.</p>
<p>thank you!</p>
<p>I used cropping to preprocess the images before making study level predictions.<br>
The prediction results of the detection model change the crop, so the wisdom of the detection model can be conveyed to the classification model.</p>
<p>Congrats on your second solo win! You deserve even better than GM.<br>
I don't plan to publish it, but I can answer any questions you have here.</p>
<p>like this</p>
<pre><code>def forward(self, x):
    # features from the backbone; also returned as the input "mask" for the aux criterion
    mask = self.forward_features(x)
    # classification head (just nn.Linear, see below)
    x = self.head(mask)
    return x, mask
</code></pre>
<p>And you need to resize the true mask.</p>
<p>Thank you!</p>
<p>The features just before I put it in the head is the input mask for the aux criterion. And head is just nn.Linear().</p>
<p>Thanks! I'm going to proceed little by little.</p>
<p>Thanks! I hope so too.</p>
<p>So do you! Congratulations!</p>
<p>Thank you!</p>
<p>Thanks! In fact, at the end of the competition, we didn't submit study level only. But I think it's probably 0.413. Maybe it's 0.414. We don't know much about LB since we hardly paid attention to it after working with Ian in the first place.</p>
<p>First of all, I would like to thank you for organizing this very interesting competition.<br>
Also, I can't thank IanPan enough. After we teamed up, our score improved very quickly, as if by magic. I have learned a lot from him through this competition.</p>
<p>The following is my part.</p>
<hr>
<h3>study level:</h3>
<p>・Swin Transformer <br>
In my environment, Swin Transformer was the best. cait was not bad, but aux loss did not work well with cait, so I chose Swin Transformer.<br>
I left the resolution at 384. I tried 768, but it was slightly worse. I also did some cropping, so 384 seemed to be sufficient.</p>
<h5>1st stage:</h5>
<p>・Pre training was done with chexpert data.<br>
This is Ian's idea. The data is quite different from ImageNet. we can get a good initial set of weights by pre training.<br>
In addition, further pre-training with the RSNA 2018 competition data before training with chexpert resulted in a slight improvement in CV over using chexpert only.<br>
I used crop, which I will write about later, and did not use aux loss.</p>
<h5>2nd stage:</h5>
<p>・5classes<br>
I included 'none' in the class.<br>
・Cropping with detection and segmentation<br>
By using the detection predictions for cropping, I was able to pass the wisdom of the detection model on to the classification model, and I also used a lung segmenter trained on the VinBigData dataset. I crop the image only to the area where a detection box (conf > 0.3) or the segmentation mask exists (a small sketch of this cropping follows this section).<br>
If the boxes were used as-is, findings such as pleural effusions could become difficult to distinguish, so each box was padded by 100 pixels.<br>
・mixup<br>
Mixup brought about a small improvement. In the end, for my part, I used two different models, one with Mixup and one without.<br>
At the stage of not using aux loss, resize mix was the best, but in the end, mixup was better.<br>
・aux loss<br>
I used binary_cross_entropy_with_logits.<br>
In one of the two models, I set the negative image loss to zero.<br>
・No pseudo label<br>
I tried everything, but the results were slightly worse.<br>
・The average of the 5fold model for the four classes of mAP was 0.397.</p>
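<p>A minimal sketch of the cropping step referenced above, assuming detection boxes, confidences and a binary lung mask are already available as numpy arrays; the conf > 0.3 threshold and the 100-pixel padding come from the text, everything else (names, shapes) is illustrative:</p>
<pre><code>import numpy as np

def crop_to_findings(image, boxes, confs, lung_mask, pad=100, conf_thr=0.3):
    """Crop `image` to the union of confident detection boxes and the lung mask."""
    h, w = image.shape[:2]
    xs, ys = [], []
    for (x1, y1, x2, y2), c in zip(boxes, confs):
        if c > conf_thr:
            xs += [x1, x2]
            ys += [y1, y2]
    if lung_mask is not None and lung_mask.any():
        mys, mxs = np.where(lung_mask > 0)
        xs += [mxs.min(), mxs.max()]
        ys += [mys.min(), mys.max()]
    if not xs:  # nothing found: keep the full image
        return image
    x1, y1 = max(int(min(xs)) - pad, 0), max(int(min(ys)) - pad, 0)
    x2, y2 = min(int(max(xs)) + pad, w), min(int(max(ys)) + pad, h)
    return image[y1:y2, x1:x2]
</code></pre>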
<h3>image level</h3>
<p>・YOLOV5<br>
I used l6&l.</p>
<h5>opacity:</h5>
<p>・Pre training was done with RSNA 2018 data.<br>
The same reason as for study level.<br>
・mixup: 0.5<br>
There was a slight improvement.<br>
・Use none predictions<br>
For each image, I found the following improved the score the most<br>
detection_conf = detection_conf * (1 - image_none_prediction)**0.4</p>
<h5>none:</h5>
<p>I used predictions of classification models and detection models like this.<br>
none_pred = cl_none_pred * 0.7 + (1 - image_conf_max) * 0.3</p>
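<p>A minimal sketch of how the two formulas above could be wired together at inference time; the constants (0.4, 0.7, 0.3) come from the text, while the function and variable names are hypothetical:</p>
<pre><code>def rescale_opacity_conf(det_conf, none_pred):
    # down-weight box confidences on images that look like "none"
    return det_conf * (1 - none_pred) ** 0.4

def blend_none(cl_none_pred, box_confs):
    # combine the classifier's none probability with the detector's evidence
    image_conf_max = max(box_confs) if box_confs else 0.0
    return cl_none_pred * 0.7 + (1 - image_conf_max) * 0.3
</code></pre>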
<hr>
<p>The following is a message from Ian.</p>
<hr>
<p>I would like to thank Yuji for agreeing to team with me. I reached out later in the competition when he was already doing really well without me. I learned a lot from him, and this competition reminded me how fun and enlightening it is to team up on Kaggle. </p>
<h3>Classification</h3>
<p>For classification, I trained a 5-class classifier: 4 study labels + none at image-level (vs. opacity). We only trained on 1 image/study. Since the vast majority of studies only had a single unique image, this was not really an issue. </p>
<p>First, I pretrained the classifier on the 3 classes from the RSNA 2018 Pneumonia Detection Challenge. Then, I fine-tuned from these weights on this competition's data. This significantly increased performance. </p>
<p>I trained a hybrid classification-segmentation model. One architecture used EfficientNet-B6 and DeepLabV3+ while the other used Swin Transformer and FPN. These were trained using the awesome Segmentation Models PyTorch library (<a href="https://github.com/qubvel/segmentation_models.pytorch)" target="_blank">https://github.com/qubvel/segmentation_models.pytorch)</a>. </p>
<p>The models were trained using AdamW optimizer, 512x512 images, and data augmentation with contrast/brightness adjustment, random noise, and scale/shift/rotate using albumentations. </p>
<p>I used BCE loss for both classification (even though you could use cross-entropy, I found BCE worked better for the metric) and segmentation, weighted 1:1. The segmentation part of the model only had 1 class for opacity. The segmentation label was generated using the bounding box labels. I turned the bounding box into an ellipse and used that as the training label. </p>
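<p>A minimal sketch of turning bounding boxes into an elliptical segmentation target as described above, using OpenCV; mask handling and values are illustrative:</p>
<pre><code>import cv2
import numpy as np

def boxes_to_ellipse_mask(boxes, height, width):
    """Binary segmentation label: a filled ellipse inscribed in each bounding box."""
    mask = np.zeros((height, width), dtype=np.uint8)
    for x1, y1, x2, y2 in boxes:
        center = (int((x1 + x2) / 2), int((y1 + y2) / 2))
        axes = (max(int((x2 - x1) / 2), 1), max(int((y2 - y1) / 2), 1))
        # angle=0, full 0-360 arc, color=1, thickness=-1 means filled
        cv2.ellipse(mask, center, axes, 0, 0, 360, 1, -1)
    return mask
</code></pre>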
<p>I also used mixup, which showed significant improvements. 5-fold ensemble of the EfficientNet-B6/DeepLabV3+ and Swin/FPN models (10 models total) gave 0.408 LB for study-label classification. </p>
<h3>Detection</h3>
<p>I trained detection models using the awesome EfficientDet library (<a href="https://github.com/rwightman/efficientdet-pytorch" target="_blank">https://github.com/rwightman/efficientdet-pytorch</a>) and MMDetection (<a href="https://github.com/open-mmlab/mmdetection)" target="_blank">https://github.com/open-mmlab/mmdetection)</a>. </p>
<p>I also pretrained these models on the RSNA 2018 dataset (images with opacity only) using the bounding box labels. This significantly increased performance for detection as well. I trained 5-fold EfficientDet-D7X on 512x512 images (positives only). I also trained 5-fold EfficientDet-D6 on 640x640 images (positives only). </p>
<p>To add diversity to the ensemble, I trained models in MMDetection as well. I used Swin Transformer with RepPoints, borrowing the code from <a href="https://github.com/SwinTransformer/Swin-Transformer-Object-Detection" target="_blank">https://github.com/SwinTransformer/Swin-Transformer-Object-Detection</a> to use Swin Transformer as a backbone for object detection. These models were trained using multi-scale training (512x512 to 768x768) and during inference 3x multi-scale TTA was used. </p>
<p>I contributed 15 detection models (EffDet-D7X, EffDet-D6, Swin-RepPoints) for the ensemble. </p>
<h3>What Didn't Work For Me</h3>
<ul>
<li>Training classifier with detection as auxiliary loss instead of augmentation</li>
<li>Mixup during detection</li>
<li>Training a multiclass detector (using study labels, instead of just training on single-class opacity)</li>
<li>In my experiments, pseudolabeling did not improve CV</li>
<li>Training a multiclass segmentation model and using that to help refine study-label predictions</li>
<li>Incorporating bounding box predictions into input for classification models</li>
</ul>
<hr>
<p>The following is the part of the ensemble.</p>
<h3>study level</h3>
<p>In the end, we used a simple averaging.<br>
I tried Nelder-Mead, ridge regression using predictions from detection models, etc., but none of them worked because they overfit.<br>
By averaging the predictions of my two models with those of Ian's two models, we made significant improvements in both CV/LB. Since we both developed our models using each other's pipelines, there was a lot of diversity in the models, and by mixing our predictions at the end, the final predictions were more robust.<br>
Our final study level LB was about 0.413.</p>
<h3>image level</h3>
<p>I mixed my yolov5 with Ian's effdet and mmdet Swin Transformer models in wbf. I saw a big improvement. I used the same weights for all the models as I did not see any improvement with different weights. Also, iou threshold 0.6 was the best.</p>
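<p>A minimal sketch of this WBF merge using the ZFTurbo ensemble-boxes package, with equal weights and iou_thr=0.6 as stated above; the toy inputs are illustrative, and boxes must be normalized to [0, 1] for this API:</p>
<pre><code>from ensemble_boxes import weighted_boxes_fusion

# one entry per model; each box is [x1, y1, x2, y2] normalized to [0, 1]
boxes_list  = [[[0.10, 0.20, 0.45, 0.60]], [[0.12, 0.22, 0.44, 0.58]]]
scores_list = [[0.83], [0.76]]
labels_list = [[0], [0]]  # single "opacity" class

boxes, scores, labels = weighted_boxes_fusion(
    boxes_list, scores_list, labels_list,
    weights=[1] * len(boxes_list),  # same weight for every model
    iou_thr=0.6,
)
</code></pre>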
<h3>code</h3>
<p>train: <a href="https://github.com/yujiariyasu/siim_covid19_detection" target="_blank">https://github.com/yujiariyasu/siim_covid19_detection</a><br>
inference: <a href="https://www.kaggle.com/yujiariyasu/fork-of-siim-covid-19-full-pipeline-v2" target="_blank">https://www.kaggle.com/yujiariyasu/fork-of-siim-covid-19-full-pipeline-v2</a></p>
<p>I would like to thank all the participants for their hard work in this competition!</p>
<p>Thanks to you too! I'll try my best to be like you one day:)</p>
<p>Thank you!</p>
<p>Thanks! Ian was amazing!</p>
<p>CV opacity ap was 0.5623. single best was 0.52~0.53. I'm not sure how much lb. </p> |
SIIM-FISABIO-RSNA COVID-19 Detection | 47th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-FISABIO-RSNA COVID-19 Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi everyone!</p>
<p>First of all, congratulations to all the winners, and great job to all participants for their hard work! And thanks to SIIM, FISABIO, RSNA, and Kaggle for hosting this competition.</p>
<h2>TL;DR</h2>
<p>I only had Kaggle GPU/TPU, so I couldn't experiment with many models and various training recipes. Instead, I tried to implement training code that works on TPU as much as possible! Usually, I trained the object-detection models on GPU and the study-level image classifiers on TPU.</p>
<p>Anyway, I hope this solution helps you in some ways :)</p>
<h2>Study-Level</h2>
<p>I used <code>effnet-b7</code> and <code>effnet-b6</code> models w/o the auxiliary branches (for segmentation head). The models were trained on grouped 5 folds and on different resolutions, 640, 800 respectively.</p>
<p>And simply averaging for ensembling the models. LB score for mAP is 0.453. </p>
<p>Additionally, I utilized <code>multi-paths dropout</code> (5 paths in total) to regularize the model and found proper augmentations for my training recipes. Compared to the public notebooks, which introduce <code>effnet-b7</code> as a baseline, I guess those helped to boost the CV/LB score (a minimal sketch of the multi-path head follows).</p>
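<p>A minimal PyTorch sketch of a multi-path (multi-sample) dropout head with 5 paths, which is roughly what is meant above; the feature size, dropout rate and class count are illustrative:</p>
<pre><code>import torch
import torch.nn as nn

class MultiPathDropoutHead(nn.Module):
    def __init__(self, in_features=2560, num_classes=4, n_paths=5, p=0.3):
        super().__init__()
        self.dropouts = nn.ModuleList([nn.Dropout(p) for _ in range(n_paths)])
        self.fc = nn.Linear(in_features, num_classes)

    def forward(self, feats):
        # average the logits over several independently-dropped copies of the features
        logits = [self.fc(d(feats)) for d in self.dropouts]
        return torch.stack(logits, dim=0).mean(dim=0)
</code></pre>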
<p>Lastly, I didn't apply TTA for the study-level because of the limitation of inference time.</p>
<h2>None Classifier</h2>
<p>I just used study-level models' negative confidence scores, and it boosts LB +0.003. Also, I tried to ensemble study-level models and opacity 2class classifier, but it dropped CV/LB scores.</p>
<h2>Image-Level</h2>
<p>Overall, I experimented with 3 types of models: <code>Yolov5</code>, <code>CascadeRCNN</code>, and <code>VFNet</code>. In my recipes, <code>VFNet</code> achieves <code>bbox mAP</code> ~0.55, but the LB/PB score was lower than I expected.</p>
<p>And <code>CascadeRCNN</code> didn't go well with Yolov5 models. <code>Yolov5</code> and <code>Yolov5 + CascadeRCNN</code> have comparable CV/LB scores, but only the <code>Yolov5</code> series have better LB/PB scores than the combined. (maybe the correlation of both models is high, I didn't check yet).</p>
<p>Finally, I ensembles 3 series of Yolov5, <code>yolov5x6</code>, <code>yolov5l6</code>, <code>yolov5m6</code> respectively, and applied WBF with the same weights.</p>
<h2>Works for me</h2>
<ul>
<li>label smoothing (0.05 is best on my experiments)</li>
<li>640 ~ 800 resolutions for study-level (It's better than 512 on my experiments)</li>
<li>512 resolution for image-level</li>
<li>WBF (iou_threshold : 0.6, skip_box_threshold : 0.01)</li>
<li>TTA for image-level</li>
<li>inference higher resolution.<ul>
<li>train 512 and inference 640 resolution for image-level (LB +0.003)</li></ul></li>
<li>light augmentations<ul>
<li>HorizontalFlip</li>
<li>CutOut (huge patch, small number of patches)</li>
<li>Brightness</li>
<li>Scale/Shift/Rotate</li></ul></li>
</ul>
<h2>Not-works for me</h2>
<ul>
<li>train on high-resolution for study-level & image-level<ul>
<li>got higher CV score, but comparable LB, PB scores for image-level </li></ul></li>
<li>effnetv2 series</li>
<li>the auxiliary losses</li>
<li>external data (w/ pseudo labeling)</li>
<li>heavy augmentations for study-level models</li>
<li>post-processing<ul>
<li>calibrating the confidence score to filter <code>none</code> class</li>
<li>modified WBF which introduced in <a href="https://www.kaggle.com/shonenkov/wbf-over-tta-single-model-efficientdet" target="_blank">here</a></li></ul></li>
</ul>
<h2>Reflections</h2>
<ul>
<li>One thing I regretted is the lack of diversity in my models. Both my study- and image-level models have a high prediction correlation because they are from the same series (e.g. effnet, yolov5).</li>
<li>trust CV</li>
</ul>
<h2>Source Code</h2>
<p>You can check out my inference pipeline. <a href="https://www.kaggle.com/kozistr/infer-efnb6-7-yolov5m-l-x6?scriptVersionId=69448986" target="_blank">inference code</a></p>
<p>Thank you very much!</p> |
SIIM-FISABIO-RSNA COVID-19 Detection | 4th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-FISABIO-RSNA COVID-19 Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>That's weird. If we use <strong>valid_labels</strong> (leaky) then we get an overestimated score, hence <code>cv>lb</code>, but if we use <strong>train_labels</strong> then cv & lb are comparable for us.<br>
Btw, if we use <strong>train_labels</strong> again we may face <strong>overfitting</strong>; that could be the reason …</p>
<p>Yes, if you use <strong>train_labels</strong> then you can easily avoid leaks…</p>
<p>First of all, I would like to thank Kaggle, SIIM, FISABIO & RSNA for this amazing competition and also to this amazing community for constantly sharing. Congratulations to all the winners, medalists, and amazing people of this community.</p>
<p>Thanks to all the members of our team <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> <a href="https://www.kaggle.com/zaber666" target="_blank">@zaber666</a> <a href="https://www.kaggle.com/nexh98" target="_blank">@nexh98</a> <a href="https://www.kaggle.com/artemenon" target="_blank">@artemenon</a>. This would not be possible without you. Special thanks to <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> for teaming up with us. We've learned a lot from you. No wonder you're one of the best <strong>kagglers</strong>.</p>
<h2>Data Processing:</h2>
<ul>
<li><strong>Data-Cleaning</strong>: We removed duplicates manually for both classification and detection. Also, for detection, instead of taking one image per patient we only removed unannotated images. For this reason, our <strong>CV</strong> was low compared to most people's, but we had good <strong>lb</strong> correlation.</li>
<li><strong>Cross-Validation</strong>: StratifiedGroupKFold (a minimal usage sketch follows this list).</li>
</ul>
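<p>A minimal sketch of the StratifiedGroupKFold split mentioned above, using scikit-learn; the toy dataframe and the choice of stratification/grouping columns are assumptions for illustration:</p>
<pre><code>import pandas as pd
from sklearn.model_selection import StratifiedGroupKFold

# toy stand-in for the training dataframe (one row per image)
df = pd.DataFrame({
    "image_id":    [f"img_{i}" for i in range(20)],
    "study_label": [i % 2 for i in range(20)],         # hypothetical class labels
    "patient_id":  [f"p{i // 2}" for i in range(20)],  # grouping key
})

sgkf = StratifiedGroupKFold(n_splits=5, shuffle=True, random_state=42)
df["fold"] = -1
for fold, (_, val_idx) in enumerate(
        sgkf.split(df, y=df["study_label"], groups=df["patient_id"])):
    df.loc[val_idx, "fold"] = fold
</code></pre>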
<h2>Study-Level:</h2>
<ul>
<li><strong>Pretraining</strong>: We pretrained our models on chexpert, so all our models use it except Chris's aux-loss model.</li>
<li><strong>Models</strong>: EfficientnetB6 & EfficientnetB7. (Other models didn't do well for us, not even <code>effnetv2</code>, <code>resnet200d</code>, <code>nfnet</code>, <code>vit</code>.)</li>
<li><strong>Loss</strong>: CategoricalCrossEntropy.</li>
<li><strong>Label-Smoothing</strong>: 0.01</li>
<li><strong>Augmentation</strong>: HorizontalFlip, VerticalFlip, RandomRotation, CoarseDropout, RandomShift, RandomZoom, Random(Brightness, Contrast), Cutmix-Mixup.</li>
<li><strong>Scheduler</strong>: WarmpupExponentialDecay.</li>
<li><strong>Image-Size</strong>: 512, 640, 768.</li>
<li><strong>Pseudo-Labeling</strong>: BIMCV + RICORD + RSNA.</li>
<li><strong>Knowledge-Distillation(KD)</strong>: We used around 5 of our best models to generate <strong>soft-labels</strong> for it and one of our submissions uses 3 KD models.</li>
<li><strong>Aux-Loss</strong>: One of our submissions uses <strong>aux-loss</strong> model by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> which uses FPN and EfficientnetB4 as backbone. You can check out the post <a href="https://www.kaggle.com/c/siim-covid19-detection/discussion/263676" target="_blank">here</a> for more details. We also used <strong>aux-loss</strong> from <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> 's discussion and used it for generating <strong>soft-labels</strong> for KD model. </li>
<li><strong>Post-Processing:</strong> We used <strong>geometric-mean</strong> to re-rank our confidence score(inspired from <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>'s solution of VinBigData <a href="https://www.kaggle.com/c/vinbigdata-chest-xray-abnormalities-detection/discussion/229637" target="_blank">here</a>). Our <strong>4cls</strong> models were very dominating that this had very little impact on the score. So finally we decided not to use it even though it did improve the <strong>CV</strong> a little bit.<br>
<img src="https://i.ibb.co/WtPVw7x/eq1.png" alt="eq1"></li>
</ul>
<h2>2cls Model:</h2>
<p>We used only one <strong>2cls</strong> model. Including more didn't have that much impact.</p>
<ul>
<li><strong>Model</strong>: EfficientNetb7</li>
<li><strong>Image-Size</strong>: 640</li>
<li><strong>External Data</strong>: We used <strong>RICORD</strong> dataset and their labels posted by <a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a> from <a href="https://www.kaggle.com/c/siim-covid19-detection/discussion/240187" target="_blank">here</a>. We simply took <strong>max-voting</strong> to get the labels from different annotators.</li>
</ul>
<h2>Image-Level:</h2>
<ul>
<li><strong>Pretraining</strong>: We pretrained all backbones of detection models.</li>
<li><strong>External Data</strong>: RSNA - <code>opacity</code> only (6k+).</li>
<li><strong>Models</strong>: yolov5x-transformer(thanks to <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> for posting it), yolov5x6, yolov3-spp.</li>
<li><strong>Augmentation</strong>: HorizontalFlip, VerticalFlip, Random(Brightness, Contrast), Mosaic-Mixup.</li>
<li><strong>Scheduler</strong>: WarmpupCosineDecay.</li>
<li><strong>Image-Size</strong>: 512. (Increasing image size worsen our result cuz our backbones were pretrained on <strong>512</strong> image size. We didn't have the time to pretrain on large image-size)</li>
<li><strong>Ensemble</strong>: We used <strong>WBF</strong> to merge boxes from different models (thanks to <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> for making it work).</li>
<li><strong>Post-Processing:</strong> We used <strong>geometric-mean</strong> to re-rank our confidence score(inspired from <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>'s solution of VinBigData <a href="https://www.kaggle.com/c/vinbigdata-chest-xray-abnormalities-detection/discussion/229637" target="_blank">here</a>). We used both <strong>4cls</strong> and <strong>2cls</strong> models' prediction here.<br>
<img src="https://i.ibb.co/jR3RXBN/eq2.png" alt="eq2"></li>
<li><strong>BBox-Filter</strong>: We filtered out abnormal boxes based on their features. This has little effect on the both <strong>CV</strong> and <strong>LB</strong> still we kept it.</li>
</ul>
<h2>Final Submissions:</h2>
<p>We tried to keep our two final submissions as different as possible, so the two submissions use different classification models. When we looked back, we noticed our best <strong>CV</strong> submission didn't use any pseudo-labels, so we tried to incorporate them into our best <strong>LB</strong> submission. But as we had very little time, we went for full-data training with <strong>BIMCV</strong> + <strong>RICORD</strong> + <strong>RSNA</strong> pseudo-labels. Still, we didn't want to take too much risk as we had very few submissions left, so we ensembled the full-data models with our <strong>LB 0.652</strong> submission. For both submissions, the 2cls model & detection models are kept the same.</p>
<ul>
<li>Best <strong>CV</strong>:<ul>
<li>b6 | 512 | KD</li>
<li>b6 | 640 | KD</li>
<li>b7 | 512 | KD</li>
<li>FPN | b4 | 512 | aux_loss</li></ul></li>
<li>Best <strong>LB</strong>:<ul>
<li>(b6, b7) x (512, 640, 768) - 6 models | BIMCV+RICORD+RSNA pseudo | Full Data</li>
<li>b6 | 512 | KD</li>
<li>b6 | 512 | BIMCV+RICORD pseudo</li>
<li>b6 | 512</li></ul></li>
</ul>
<h2>Result:</h2>
<ul>
<li><strong>CV</strong>:</li>
</ul>
<table>
<thead>
<tr>
<th>none</th>
<th>opacity</th>
<th>study</th>
<th>final</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.823</td>
<td>0.562</td>
<td>0.607</td>
<td>0.635</td>
</tr>
</tbody>
</table>
<ul>
<li><strong>LB</strong>:</li>
</ul>
<table>
<thead>
<tr>
<th>public/private</th>
<th>none</th>
<th>opacity</th>
<th>study</th>
<th>final</th>
</tr>
</thead>
<tbody>
<tr>
<td>public</td>
<td>0.810</td>
<td>0.594</td>
<td>0.628</td>
<td>0.654</td>
</tr>
<tr>
<td>private</td>
<td>0.822</td>
<td>0.594</td>
<td>0.592</td>
<td>0.631</td>
</tr>
</tbody>
</table>
<p><strong>Inference Code:</strong> <a href="https://www.kaggle.com/awsaf49/siim-covid19-detection-final-infer" target="_blank">https://www.kaggle.com/awsaf49/siim-covid19-detection-final-infer</a></p>
<h2>Thank you for reading such a long post :)</h2>
<p>Not exactly; finding duplicates wasn't that tough. Since duplicates come from the same patient, we only had to compare images within the same patient.</p>
<p>Initially, we tried to use it for <strong>4class</strong> and it didn't work, but when we used it for <strong>2class</strong> it did improve the <strong>cv</strong> and <strong>lb</strong></p>
<p>We had a bunch of models in the end so we used them for <strong>Knowledge Distillation</strong> . Here's our tentative improvement,</p>
<ul>
<li>baseline : <code>0.38+</code></li>
<li>aux-loss : <code>0.39+</code></li>
<li>kd: <code>0.40+</code></li>
</ul>
<p>I don't think it was only due to <code>chexpert</code>; our <strong>data-split</strong> always produced a lower cv but better <strong>lb</strong>. Even when I tried <strong>public-kernels</strong> on our <strong>split</strong>, they scored lower <strong>cv</strong> but better <strong>lb</strong>.</p>
<p>Well, we used it as <strong>hard labels</strong>, but <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> thought of using it as <strong>soft-labels</strong> and somehow we forgot to give it a shot. I guess <strong>soft-labels</strong> would've been better.</p>
SIIM-FISABIO-RSNA COVID-19 Detection | 5th place solution - brief summary | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-FISABIO-RSNA COVID-19 Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yes we indeed used PyTorch and timm models. I tried Tensorflow at the start but couldn't find a reliable way to add aux loss so I switched to PyTorch just before merging with my teammates.</p>
<p>Congrats to you and your team as well. I am waiting on your solution.</p>
<p>Which part of v2 do you need ? We'll eventually be releasing all our code.</p>
<p>TTA is test time augmentation and Hflip is horizontal-flip.</p>
<p>So, basically during infernce we predicted results for the origninal image as well as the horizontally flipped image and took the mean of the predictions</p>
<p>Thanks a lot 😃</p>
<p>First of all I would like to thank my teammates <a href="https://www.kaggle.com/nischaydnk" target="_blank">@nischaydnk</a> and <a href="https://www.kaggle.com/shivamcyborg" target="_blank">@shivamcyborg</a> without whom this achievement would not have been possible. We merged on On 12th July and from then onwards we explored and implemented a lot of new ideas.</p>
<p>I would also like to thank Kaggle, SIIM, FISABIO, RSNA for this great competition, as well has everyone here who has shared great discussions and notebooks. Congrats to all the winners!</p>
<p>Trusting our maximised cv score worked once again for us. Most of our time was spent on classification models, which was the reason we secured a decent position on the leaderboard.</p>
<p>We did not use any external training data. All the models were trained on the competition data after removing the duplicates as suggested <a href="https://www.kaggle.com/c/siim-covid19-detection/discussion/246597" target="_blank">here</a>.</p>
<p><strong>Cross-validation:</strong> Stratified Group KFOLD</p>
<p><strong>Part 1: Study</strong></p>
<ol>
<li>We used an ensemble of 11 models.</li>
<li>Base architectures = <em>Efficientnet v2m, Efficientnet v2l, Efficientnet B5, Efficientnet B7</em></li>
<li>Models were trained on different image sizes ranging from [512, 720] and different augmentations.</li>
<li>Segmentation as aux loss suggested by <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> </li>
<li>v2 models used a custom <a href="https://github.com/jfhealthcare/Chexpert" target="_blank">pcam</a> pool head + attention.</li>
<li>b5, b7 used multi head + concat pool + attention suggested by <a href="https://www.kaggle.com/ttahara" target="_blank">@ttahara</a> .</li>
<li>We also used noisy student training method suggested by <a href="https://www.kaggle.com/moewie94" target="_blank">@moewie94</a> in <a href="https://www.kaggle.com/c/ranzcr-clip-catheter-line-classification/discussion/226616" target="_blank">here</a>.</li>
<li>Hflip TTA was used during inference.</li>
</ol>
<p><strong>Part 2: Binary (<code>none</code> class predictions)</strong></p>
<ol>
<li>We used an ensemble of 2x five fold models (efficientnetv2m , efficientnetb6)</li>
<li>Models were trained on 512 image size.</li>
<li>Segmentation as aux loss suggested by <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> </li>
<li>v2m models used a custom <a href="https://github.com/jfhealthcare/Chexpert" target="_blank">pcam</a> pool head + attention.</li>
<li>b6 was trained using the pipeline provided by <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>.</li>
<li>v2m model as actually trained on 4 class study data. We used the <code>negative</code> predictions as <code>none</code> and merged these with the <code>binary</code> predictions from the above models.</li>
<li>Hflip TTA was used during inference.</li>
</ol>
<p><strong>Part 3: Detection (<code>opacity</code> predictions)</strong></p>
<ol>
<li>We used an ensemble of 5x five fold models.</li>
<li>Models used = <code>efficientdetD3, efficientdetD5, yolov5x, yolov5l6, retinanet_x101_64x4d_fpn</code></li>
<li>image sizes used = <code>896, 512, 620, 620, (1330, 800)</code></li>
<li>We first trained d5, d3, yolov5x, and yolov5l6 on opacity predictions only. We used WBF (iou=0.62) to generate pseudo labels for the public test set. We then used these labels to train the <code>d3, yolov5x, yolov5l6</code> models again. </li>
<li>In the pseudo-label training, d3 was trained with EMA and the yolov5 models were trained with a few <code>none</code> images as well. </li>
<li>D5 was trained with EMA from the beginning, and pseudo-labelling did not bring any improvements to either D5 or RetinaNet, so we did not include them in the final ensemble.</li>
<li>At inference predictions from the 5 models were merged using WBF(iou=0.625).</li>
</ol>
<p><strong>Part 4: Postprocessing</strong></p>
<ol>
<li>Power blending of the maximum opacity confidence from part 3 with the none predictions from part 2 (a small sketch follows this list)</li>
<li>We optimized both <code>none</code> predictions as well as <code>opacity</code> confidences.</li>
<li>This was inspired from <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> 's <a href="https://www.kaggle.com/c/vinbigdata-chest-xray-abnormalities-detection/discussion/229637" target="_blank">post</a>.</li>
</ol>
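<p>A minimal sketch of the power-blending idea referenced in item 1 above; the exponents shown here are purely illustrative placeholders (the actual values were tuned against CV), and the function/variable names are hypothetical:</p>
<pre><code>def blend_none(none_pred, max_opacity_conf, alpha=0.7, beta=0.3):
    # power/geometric blend of the binary "none" prediction with the detector evidence
    return (none_pred ** alpha) * ((1.0 - max_opacity_conf) ** beta)

def rescale_opacity(conf, none_pred, gamma=0.3):
    # push down box confidences on images the classifier considers "none"
    return conf * (1.0 - none_pred) ** gamma
</code></pre>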
<p><em>We have released our inference kernel: <a href="https://www.kaggle.com/nischaydnk/604e8587410a-v2m-bin-weighted" target="_blank">https://www.kaggle.com/nischaydnk/604e8587410a-v2m-bin-weighted</a> and in the coming days we will release our full code as well. We didn't have time to prepare a written solution during the competition; loads of work had to be done, and we also need to clean our code a lot.</em></p>
<p><strong>Update : Code will be available <a href="https://github.com/benihime91/SIIM-COVID19-DETECTION-KAGGLE" target="_blank">here</a></strong></p>
<p>EMA is Exponential Moving Average </p>
<ul>
<li><a href="https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage" target="_blank">https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage</a></li>
<li><a href="https://github.com/rwightman/pytorch-image-models/blob/master/timm/utils/model_ema.py" target="_blank">https://github.com/rwightman/pytorch-image-models/blob/master/timm/utils/model_ema.py</a></li>
</ul>
<p>And regarding the <code>private score on Study Level and Image Level</code>, we didn't really check them separately.</p>
<p>Thanks a lot 😊</p> |
SIIM-FISABIO-RSNA COVID-19 Detection | 6th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-FISABIO-RSNA COVID-19 Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I would like to thank Kaggle, SIIM, FISABIO, RSNA for hosting this exciting competition aimed at saving many people's lives! I also thank the other team members <a href="https://www.kaggle.com/sleepywill" target="_blank">@sleepywill</a> and <a href="https://www.kaggle.com/beckymo" target="_blank">@beckymo</a> and all Kagglers for their great help. <br>
I was very surprised to notice we got 6th place on the private LB while we only ranked 33rd on the public LB. If somebody has an idea why our model generalizes so well, please kindly share your thoughts. 😂</p>
<p><em>The following is our approach to this competition.</em></p>
<p><strong>Solution Summary</strong><br>
<a href="https://github.com/b02202050/2021-SIIM-COVID19-Detection#solution-summary" target="_blank">https://github.com/b02202050/2021-SIIM-COVID19-Detection#solution-summary</a></p>
<p><strong>General Settings</strong></p>
<ul>
<li>Classification:<ul>
<li>Architecture: EfficientNet-B7 (This is the only backbone architecture we use.)</li>
<li>Input size: 512</li></ul></li>
<li>Detection:<ul>
<li>Architecture: FasterRCNN-EfficientNet-B7-FPN</li>
<li>Loss: classical RPN and RoI loss</li>
<li>Input size: 800</li></ul></li>
</ul>
<p><strong>Classification pre-training</strong></p>
<ul>
<li>Datasets: kaggle_chest_xray_covid19_pneumonia + kaggle_covidx_cxr2 + kaggle_chest_xray_pneumonia + kaggle_curated_chest_xray_image_dataset_for_covid19 + kaggle_covid19_xray_two_proposed_databases + kaggle_ricord_covid19_xray_positive_tests + CXR14 + CheXpert</li>
<li>Architecture: Shared-backbone multi-head classification</li>
<li>Loss: CE Loss</li>
<li>Early stopping: We chose the model with the best average validation AUC across all pneumonia- and COVID-related tasks.</li>
<li>Augmentations: Flip (H/V) + rotation + color jitter + shift + aspect ratio jitter + scale jitter</li>
</ul>
<p><strong>Study-level training</strong></p>
<ul>
<li>ensembles: 5-fold CV</li>
<li>TTA: horizontal flip</li>
<li>Training tricks:<ul>
<li>Inverse focal loss: replace <code>(1-p_t) ** gamma</code> with <code>p_t ** gamma</code> in the original focal loss to suppress outlier samples (a small sketch follows this list)</li>
<li>Use sigmoid instead of softmax (increased ~0.003 mAP on my validation set)</li>
<li>Sharpness-aware minimization</li>
<li>Augmentations: Flip (H/V) + rotation + color jitter + shift + RandomResizedCrop + random_perspective + elastic_deformation + RandAugment</li></ul></li>
</ul>
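<p>A minimal PyTorch sketch of the "inverse focal loss" trick above, written on top of sigmoid/BCE since the write-up uses sigmoid outputs; gamma and the reduction are illustrative choices:</p>
<pre><code>import torch
import torch.nn.functional as F

def inverse_focal_bce(logits, targets, gamma=2.0):
    """BCE weighted by p_t ** gamma instead of (1 - p_t) ** gamma."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)  # probability assigned to the true label
    return (p_t ** gamma * bce).mean()
</code></pre>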
<p><strong>Detection pre-training</strong></p>
<ul>
<li>Datasets: RSNA pneumonia</li>
<li>Augmentations: Flip (H/V) + rotation + color jitter + shift</li>
</ul>
<p><strong>Image-level training</strong></p>
<ul>
<li>ensembles: 5-fold CV</li>
<li>TTA: horizontal flip</li>
<li>Training tricks:<ul>
<li>Stochastic weight averaging</li>
<li>Sharpness-aware minimization</li>
<li>Attentional-guided context FPN</li>
<li>Attentional feature fusion</li>
<li>Fixed feature attention: use feature pyramid from the classification model for attention</li>
<li>Augmentations: Flip (H/V) + rotation + color jitter + shift + RandomResizedCrop + random_perspective + elastic_deformation</li></ul></li>
</ul>
<p><strong>Results</strong></p>
<table>
<thead>
<tr>
<th>image-level ensemble mAP@0.5</th>
<th>CV</th>
</tr>
</thead>
<tbody>
<tr>
<td>w/o fixed feature attention</td>
<td>0.563</td>
</tr>
<tr>
<td>with fixed feature attention</td>
<td>0.567</td>
</tr>
</tbody>
</table>
<hr>
<table>
<thead>
<tr>
<th></th>
<th>CV</th>
</tr>
</thead>
<tbody>
<tr>
<td>Study-level ensemble mAP</td>
<td>0.585</td>
</tr>
</tbody>
</table>
<hr>
<table>
<thead>
<tr>
<th>Study-level ensemble + Image-level ensemble</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>w/o TTA w/o fixed feature attention</td>
<td>0.633</td>
<td>0.621</td>
</tr>
<tr>
<td>w/o TTA</td>
<td>0.633</td>
<td>0.625</td>
</tr>
<tr>
<td>study-level TTA</td>
<td>0.634</td>
<td>0.626</td>
</tr>
<tr>
<td>image-level TTA</td>
<td>0.635</td>
<td>0.627</td>
</tr>
<tr>
<td>both TTA</td>
<td>0.636</td>
<td>0.628</td>
</tr>
</tbody>
</table>
<hr>
<p>For more algorithmic and implementation details, please refer to:</p>
<ul>
<li>our code in this github repository: <a href="https://github.com/b02202050/2021-SIIM-COVID19-Detection" target="_blank">https://github.com/b02202050/2021-SIIM-COVID19-Detection</a></li>
<li>our Kaggle submission kernel: <a href="https://www.kaggle.com/terenceythsu/siim-covid19-2021-6th-place?scriptVersionId=70677701" target="_blank">https://www.kaggle.com/terenceythsu/siim-covid19-2021-6th-place?scriptVersionId=70677701</a></li>
</ul>
<p>Hi, I may release the checkpoints after getting permission from the host. If you want to quickly train a model that can be run with our inference kernel, you can skip the external-dataset pre-training. Also modify the config files to skip loading pre-trained weights, as described in my repository. Thanks.</p>
<p>Thanks for your appreciation.</p>
<ol>
<li>Yes. Each batch of images are sampled from all datasets and trained for each iteration.</li>
<li>I map some similar pathologies from CheXpert to CXR14 in <a href="https://github.com/b02202050/2021-SIIM-COVID19-Detection/blob/main/dataset/CheXpert-v1.0/CheXpert_task_map_to_NIH.txt" target="_blank">this file</a>.</li>
<li><ol>
<li>Except that some CheXpert pathologies may share heads with CXR14, all the other datasets have their own classifier heads.</li>
<li>Yes, I use CXR14, CheXpert pneumonia and all the other Kaggle datasets for validation. Here I made <a href="https://github.com/b02202050/2021-SIIM-COVID19-Detection/blob/57e7875ffabd4db8251181d9014bb71837336973/src/classification/train_multitask_classification.py#L307" target="_blank">a mistake</a>: I only use "AUROC of class 1" even if there are multiple classes.</li></ol></li>
</ol> |
SIIM-FISABIO-RSNA COVID-19 Detection | 7th place solution with Code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-FISABIO-RSNA COVID-19 Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you!</p>
<ol>
<li><p>5 folds UniverseNet101: 57.7 / 57 / 54.8 / 54 / 53 mAP (images without bboxes are kept). LB: 0.096 / 0.096 / 0.094 / 0.096 / 0.096.</p></li>
<li><p>No external dataset.</p></li>
</ol>
<p>Thanks! </p>
<ol>
<li><p>We haven't tried all the combinations; it was rather sporadic. From all of our experiments we chose 14 that had decent validation mAP.</p></li>
<li><p>Not including. I don't have the exact private score for that since we didn't submit all 14 models separately. We have an ensemble of 8 models that gave us 0.414 on public and 0.392 on private. </p></li>
</ol>
<p>First, thanks for organizing this interesting competition and congrats to all winners!</p>
<p><strong>Validation:</strong></p>
<p>We used iterative-stratification with 5 folds (<a href="https://github.com/trent-b/iterative-stratification" target="_blank">https://github.com/trent-b/iterative-stratification</a>) stratified by study level classes and number of boxes.</p>
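<p>A minimal sketch of this split using the iterative-stratification package; the exact multilabel target (the four study classes plus a binned box count) is an assumption based on the description above, and the toy arrays are illustrative:</p>
<pre><code>import numpy as np
from iterstrat.ml_stratifiers import MultilabelStratifiedKFold

# one row per study: [negative, typical, indeterminate, atypical, binned_num_boxes]
y = np.array([[1, 0, 0, 0, 0],
              [0, 1, 0, 0, 2],
              [0, 0, 1, 0, 1],
              [0, 1, 0, 0, 3]] * 5)
X = np.arange(len(y)).reshape(-1, 1)

mskf = MultilabelStratifiedKFold(n_splits=5, shuffle=True, random_state=42)
folds = np.zeros(len(y), dtype=int)
for fold, (_, val_idx) in enumerate(mskf.split(X, y)):
    folds[val_idx] = fold
</code></pre>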
<p><strong>Study level:</strong></p>
<p>We used efficientnet-b7, v2s, v2m, and v2l with aux branches after 3 different blocks. The models were trained on 3 folds and on different image resolutions (512, 640, 768) to produce 14 classifiers. </p>
<p>We used simple averaging for ensembling the models. LB mAP for study level was ~41.5-41.6</p>
<p><strong>Image level:</strong></p>
<p>We used mmdetection library to train detectoRS50, universeNet50, and universeNet101. detectoRS50 and universeNet50 were trained on one fold, and universeNet101 was trained on each fold + pseudo labels for public data using the universeNet50 model.</p>
<p>WBF did not work for us, so we decided to use NMW for ensembling from <a href="https://github.com/ZFTurbo/Weighted-Boxes-Fusion" target="_blank">https://github.com/ZFTurbo/Weighted-Boxes-Fusion</a>.</p>
<p>TTA: HorizontalFlip for detectoRS and multi-scale TTA for all universeNet models on [(640, 640), (800, 800)].</p>
<p>Binary classifiers were trained in the same manner as study level models, 3 fold ensemble was used.</p>
<p><strong>Augmentations:</strong></p>
<p>Our augmentations include HorizontalFlip, RandomCrop (for study level), ShiftScaleRotate, CLAHE, RandomGamma, Cutout from albumentations library (<a href="https://albumentations.ai/" target="_blank">https://albumentations.ai/</a> ).</p>
<p>UPD: Code available at <a href="https://github.com/AidynUbingazhibov/SIIM-FISABIO-RSNA-COVID-19-Detection" target="_blank">https://github.com/AidynUbingazhibov/SIIM-FISABIO-RSNA-COVID-19-Detection</a></p>
<p>Inference kernel: <a href="https://www.kaggle.com/aidynub/validation-pipeline-4cls-10mdlswp-bincl-3mdls-d?scriptVersionId=70822471" target="_blank">https://www.kaggle.com/aidynub/validation-pipeline-4cls-10mdlswp-bincl-3mdls-d?scriptVersionId=70822471</a></p> |
SIIM-FISABIO-RSNA COVID-19 Detection | 8th Place Solution [Full Code Attached] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-FISABIO-RSNA COVID-19 Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks!<br>
I used the one in timm library. It provides you the module and pretrained models with few lines of code. You can check the example usage on the kernel I mentioned in the article or my solution github repository <code>model/timm.py</code>.</p>
<blockquote>
<p>One more question is on the "Lung mask obtained by this heuristics and opacity bbox mask))". Is there a link for the tool available to use?</p>
</blockquote>
<p>This is done with a series of cv2 and skimage functions. Here is the snippet link <a href="https://github.com/lorenzo-park/kaggle-solution-siim-fisabio-rsna-covid19-detection/blob/master/dataset.py#L71" target="_blank">dataset.py file (line71-line84)</a> in my repo.<br>
I think the quality of the lung mask is not as good as a DL-based method's, but the advantage is clear: no additional training is needed for a lung-mask segmentation DL model.</p>
<p>I'd love to share my thoughts on any questions. Feel free to do even more :)</p>
<blockquote>
<p>After you obtained the checkpoint by early stopping, did you perform a fune-tuning with smaller lr? Thanks!</p>
</blockquote>
<p>For this part, we don't need to re-train the model. The 5 checkpoints (one per fold) obtained by early stopping are used for the submission, averaging the results of the 5 models. If you meant training from the NIH pretrained weights, we tried applying 0.1*(original LR) to the encoder, but using the same LR gave better CV. </p>
<p>Also, the best private LB came from the 0.635 public LB submission, which uses WBF over different object detection models. So WBF over different kinds of models could be the reason for the relatively small decrease in private LB.</p>
<p>Thank you!<br>
I used pytorch lightning and it has the early stopping callback. <br>
If you are asking how early stopping works, we set a patience value, and if the validation loss does not decrease for more epochs than the patience, we stop the training and use the checkpoint with the minimum validation loss.</p>
<p>I used an LR scheduler, so the LR changes during training for each fold, but the LR schedule is the same across all folds.</p>
<p>Thanks! You can check <a href="https://www.kaggle.com/c/siim-covid19-detection/discussion/240233" target="_blank">this thread</a> out. Basically, it is multi-task training. The model gains the ability to extract more relevant features for the classification task by being trained together with a segmentation decoder using a segmentation auxiliary loss. </p>
<p>*submitted kernel: <a href="https://www.kaggle.com/jihunlorenzopark/siim-covid-sub-mmdet?scriptVersionId=70842068" target="_blank">https://www.kaggle.com/jihunlorenzopark/siim-covid-sub-mmdet?scriptVersionId=70842068</a><br>
*full training scripts on github: <a href="https://github.com/lorenzo-park/siim-covid-kaggle-model-sub" target="_blank">https://github.com/lorenzo-park/siim-covid-kaggle-model-sub</a></p>
<hr>
<p>Thank you for hosting the competition to Kaggle, SIIM, FISABIO, RSNA. I am so happy to get the first gold for two years of Kaggling. I and my teammates keep working hard until the last night and finally the last day's submission ranked at 8th on private leaderboard!</p>
<h1>Solution</h1>
<p>Our model consists of three parts as most of the people have used: <br>
Study-level prediction, two class prediction and opacity bounding box detection. </p>
<h3>1. Study-Level 4-class Classification (negative, typical, indeterminate, atypical)</h3>
<ul>
<li><p><strong>CV Strategy</strong> Stratified Grouped 5 Fold, using PatientID for grouping</p></li>
<li><p><strong>Optimizer</strong> AdamW with CosineAnnealingWarmRestarts</p></li>
<li><p><strong>Model</strong></p>
<ul>
<li>Unet++ (Efficientnetv2)</li>
<li>Unet++ (Efficientnetv2) pretrained on NIH Chest X-rays with 5 epochs</li>
<li>Swin Transformer pretrained on NIH Chest X-rays with 5 epochs</li>
<li><a href="https://drive.google.com/file/d/1SyDLq5LJBs5bm9sj9heUJ61Ss-nUPY-2/view?usp=sharing" target="_blank">Architecture overview</a></li></ul></li>
<li><p><strong>Loss</strong> </p>
<ul>
<li>Classification loss (CrossEntropy) + Segmentation auxiliary loss (1/3 * BCELossWithLogits + 2/3 * LovaSz loss) for Unet++ <br>
(2 classes: the opacity bounding-box binary mask and the lung binary mask; the lung mask is obtained <a href="https://drive.google.com/file/d/1jsq_zAERAwVPYEBGXhETxwBE3L3smom1/view?usp=sharing" target="_blank">by these heuristics</a> combined with the opacity bbox mask)</li>
<li><strong>Note that samples with empty ground-truth masks do not contribute to the segmentation loss calculation: zero is multiplied into both the outputs and the targets</strong>, as mentioned in <a href="https://www.kaggle.com/c/ranzcr-clip-catheter-line-classification/discussion/229706" target="_blank">this RANZCR 4th solution</a> (a small sketch of this masking follows the study-level settings below)</li>
<li>Classification loss (CrossEntropy) for Swin Transformer</li></ul></li>
<li><p><strong>Augmentation</strong></p></li>
</ul>
<pre><code>import albumentations as A
from albumentations.pytorch import ToTensorV2

A.Compose([
    A.Resize(img_size, img_size),
    A.HorizontalFlip(p=0.5),
    A.RandomBrightness(limit=0.1, p=0.75),
    A.ShiftScaleRotate(shift_limit=0.2, scale_limit=0.2, rotate_limit=30, border_mode=0, p=0.75),
    A.Cutout(max_h_size=int(img_size * 0.3), max_w_size=int(img_size * 0.3), num_holes=1, p=0.75),
    ToTensorV2(p=1.0),
])
</code></pre>
<ul>
<li><strong>Ensemble</strong> Unet++_640x640 + Unet++_512x512 + Unet++_NIH_640x640 + Swin Transformer_384x384</li>
</ul>
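<p>A minimal sketch of the empty-mask handling noted in the Loss bullets above (zeroing both predictions and targets for samples without a ground-truth mask); only the BCE part is shown for brevity, and the shapes are illustrative:</p>
<pre><code>import torch.nn.functional as F

def masked_seg_bce(seg_logits, seg_targets):
    """Segmentation BCE where samples with empty GT masks are zeroed out.

    Multiplying both logits and targets by 0 leaves only a constant term for
    those samples, so they contribute no gradient to the segmentation decoder.
    """
    has_mask = (seg_targets.flatten(1).sum(dim=1) > 0).float().view(-1, 1, 1, 1)
    return F.binary_cross_entropy_with_logits(seg_logits * has_mask,
                                              seg_targets * has_mask)
</code></pre>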
<h3>2. Image-Level Opacity Detection for “opacity” by <a href="https://www.kaggle.com/normalkim0" target="_blank">@normalkim0</a></h3>
<ul>
<li><p>Ensemble of 3 YOLO models and 1 CascadedRCNN model </p>
<ul>
<li>yolov5x6 input size 640 (best CV score)</li>
<li>yolov5x6 input size 1280</li>
<li>yolov5x input size 512</li>
<li>cascadedRCNN input size 640 (refer to <a href="https://www.kaggle.com/sreevishnudamodaran/siim-effnetv2-l-cascadercnn-mmdetection-infer" target="_blank">https://www.kaggle.com/sreevishnudamodaran/siim-effnetv2-l-cascadercnn-mmdetection-infer</a> <a href="https://www.kaggle.com/sreevishnudamodaran" target="_blank">@sreevishnudamodaran</a>)</li>
<li>used also Stratified Grouped 5 Fold for each, using PatientID for grouping</li>
<li>In the case of YOLO, we use ‘none’ images as background images. Having 0~10% background images helps to reduce false positives.</li>
<li>In case of cascadedRCNN, only use ‘opacity’ images.</li></ul></li>
<li><p>use YOLOv5 Hyperparameter Evolution</p>
<ul>
<li>Hyperparameter evolution is a method of hyperparameter optimization that uses a Genetic Algorithm (GA). (<a href="https://github.com/ultralytics/yolov5/issues/607" target="_blank">https://github.com/ultralytics/yolov5/issues/607</a>) </li>
<li>it improved the LB score by +0.003</li>
<li><p>Ensemble method : weighted boxes fusion</p>
<ul>
<li>IoU threshold : 0.5</li>
<li>doubled the weight on the best model (yolov5x6 input size 640)</li>
<li>The key to improving scores was diversity: we used various input image sizes and added CascadedRCNN at the end. There was another YOLO model using a 640 input size with a higher score than CascadedRCNN; however, the ensemble result was the reverse. (Of course, it would be better to use all the models, but that caused a timeout error.)</li>
</ul>
<h3>3. Image-Level Binary Classification for "none"</h3>
<ul>
<li><p>Fine-tuned by replacing the last layer of Study-Level 4-class Classification</p></li>
<li><p>Sharing the exact same train/valid fold of 4-class training stage to prevent potential leakage</p></li>
<li><p><strong>Ensemble</strong> Unet++_384x384 + Swin Transformer_384x384</p></li>
</ul>
<h1>Reproducibility</h1>
<ul>
<li>Results can be reproduced with the above GitHub repository</li>
<li>Note that some models require 48GB of VRAM; we used an RTX A6000 and an RTX 3090. The image-level opacity model will be released soon.</li>
</ul>
<h1>Others</h1>
<ul>
<li>Believed CV more than LB</li>
<li>Used <strong>early stopping</strong> for all training runs except pretraining runs. I think it contributed a lot to our public LB (0.635) being close to the private LB (0.625).</li>
<li>For multiple images in a single study, we chose the one with an opacity bbox (if no image has a bounding box, we chose a random one).</li>
</ul>
<h1>Reference</h1>
<ul>
<li><p><strong>Tools</strong></p>
<ul>
<li>segmentation_models.pytorch <a href="https://github.com/qubvel/segmentation_models.pytorch" target="_blank">https://github.com/qubvel/segmentation_models.pytorch</a> </li>
<li>pytorch-image-models <a href="https://github.com/rwightman/pytorch-image-models" target="_blank">https://github.com/rwightman/pytorch-image-models</a> </li>
<li>YOLOv5 <a href="https://github.com/ultralytics/yolov5" target="_blank">https://github.com/ultralytics/yolov5</a> </li>
<li>Mmdetection <a href="https://github.com/open-mmlab/mmdetection" target="_blank">https://github.com/open-mmlab/mmdetection</a> </li></ul></li>
<li><p><strong>Idea threads</strong></p>
<ul>
<li>Segmentation loss: <a href="https://www.kaggle.com/c/siim-covid19-detection/discussion/240233" target="_blank">https://www.kaggle.com/c/siim-covid19-detection/discussion/240233</a></li>
<li>Segmentation loss calculation: <a href="https://www.kaggle.com/c/ranzcr-clip-catheter-line-classification/discussion/229706" target="_blank">https://www.kaggle.com/c/ranzcr-clip-catheter-line-classification/discussion/229706</a> </li></ul></li>
<li><p><strong>Kernels</strong></p>
<ul>
<li>Load dcm, resize and save to png: <a href="https://www.kaggle.com/c/siim-covid19-detection/discussion/239918" target="_blank">https://www.kaggle.com/c/siim-covid19-detection/discussion/239918</a></li>
<li>Timm usage: <a href="https://www.kaggle.com/cpmpml/stft-transformer-infer" target="_blank">https://www.kaggle.com/cpmpml/stft-transformer-infer</a> </li>
<li>Segmentation_models.pytorch usage: <a href="https://www.kaggle.com/haqishen/1st-place-soluiton-code-small-ver" target="_blank">https://www.kaggle.com/haqishen/1st-place-soluiton-code-small-ver</a> </li>
<li>CascadeRCNN: <a href="https://www.kaggle.com/sreevishnudamodaran/siim-mmdetection-cascadercnn-weight-bias" target="_blank">https://www.kaggle.com/sreevishnudamodaran/siim-mmdetection-cascadercnn-weight-bias</a> </li>
<li>Weighted boxes fusion: <a href="https://www.kaggle.com/shonenkov/weightedboxesfusion" target="_blank">https://www.kaggle.com/shonenkov/weightedboxesfusion</a> </li></ul></li>
<li><p><strong>External datasets</strong><br>
NIH Chest X-rays: <a href="https://www.kaggle.com/nih-chest-xrays/data" target="_blank">https://www.kaggle.com/nih-chest-xrays/data</a> </p></li>
</ul>
<p>Thank you for reading and congratulations to all winners.<br>
There could be missing details in the solution. I will keep updating the solution as the missing details come back to mind.</p>
<hr>
<p>*20210811: Add data duplication in Others part. <a href="https://www.kaggle.com/c/siim-covid19-detection/discussion/263945" target="_blank">5th solution</a> makes me remind this detail. <br>
*20210811: Update used GPUs in Reproducilibity part*<br>
*20210813: Add Image opacity detection part solution written by <a href="https://www.kaggle.com/normalkim0" target="_blank">@normalkim0</a><br>
*20210817: Attach submitted kernel link<br>
*20210818: Add reproducible github repository link</p> |
SIIM-FISABIO-RSNA COVID-19 Detection | 9th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-FISABIO-RSNA COVID-19 Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Has a different FPN layout, you can check in timm effdet package.</p>
<p><code>efficientdet_q2</code></p>
<p>Thanks to Kaggle and the hosts for this interesting competition. In the following, we want to give a summary of the solution of Team Watercooled: <a href="https://www.kaggle.com/christofhenkel" target="_blank">@christofhenkel</a>, <a href="https://www.kaggle.com/ilu000" target="_blank">@ilu000</a>, <a href="https://www.kaggle.com/philippsinger" target="_blank">@philippsinger</a>. As always, thanks to all team members contributing equally to the solution.</p>
<h3>Summary</h3>
<p>Our solution is based on a blend of separate study (classification) and image (detection) models. Most study models are regularized by additional segmentation loss. Detection models include EfficientDet and Yolo models and are blended using WBF. We only rely on provided competition data and do not utilize any external data.</p>
<h3>Preprocessing and CV</h3>
<p>For common preprocessing, we transform the original DICOM files to PNG images and rescale the size to either 25% or 50% of the original size to speed up data loading and training runtimes. We handled duplicate images as suggested by the hosts (<a href="https://www.kaggle.com/c/siim-covid19-detection/discussion/246597" target="_blank">https://www.kaggle.com/c/siim-covid19-detection/discussion/246597</a>) by only taking images with boxes if several images belong to the same study. </p>
<p>For validation, we employ 5-fold cross validation where the same study is not overlapping between folds. Overall, we can see quite decent CV and LB correlation, within a certain random range which is to be expected given the small data and nature of the problem and validation metric.</p>
<h3>Study models</h3>
<p>Our final blend contains nine models, each one fitted across 5 folds, leading to an overall average of 45 fits. For each model, we use cross entropy loss on all four classes, and also process the outputs with softmax before averaging. For us, softmax was clearly superior as each image can always only have a single target. All study models are regularized by additional segmentation loss using a Unet decoder. In detail, these are the different models blended:</p>
<ol>
<li>Backbone: tf_efficientnet_b0, Image size: (512,512)</li>
<li>Backbone: tf_efficientnet_b7_ns, Image size: (1024,1024)</li>
<li>Backbone: tf_efficientnet_b7_ns, Image size: (1024,1024)</li>
<li>Backbone: tf_efficientnet_b7_ns, Image size: (1024,1024)</li>
<li>Backbone: tf_efficientnet_b5_ns, Image size: (1024,1024)</li>
<li>Backbone: tf_efficientnet_b5_ns, Image size: (1024,1024)</li>
<li>Backbone: tf_efficientnetv2_l, Image size: (512,512), Note: Ben preprocessing</li>
<li>Backbone: xcit_small_24_p16_224_dist, Image size: (640,640)</li>
<li>Backbone: xcit_small_24_p16_224_dist, Image size: (640,640)</li>
</ol>
<p>For augmentations we utilized a mix of ShiftScaleRotate, HorizontalFlip, RandomBrightnessContrast and Cutout. The elaborated models can slightly differ in minor hyperparameter settings.</p>
<h3>Image models</h3>
<p>For detection, we used both EfficientDet and Yolo models.</p>
<h4>Efficient Det models</h4>
<p>For most EfficientDet models, we regularize the detection part with additional study classification, but only use the detection part as output. For a further discussion on hybrid models, see below. In detail, we trained the following EfficientDet models, each for 5-folds:</p>
<ol>
<li>Backbone: tf_efficientdet_d0, Image size: (512,512)</li>
<li>Backbone: tf_efficientdet_d0, Image size: (512,512)</li>
<li>Backbone: tf_efficientdet_d3, Image size: (512,512)</li>
<li>Backbone: efficientdet_q2, Image size: (768,768)</li>
</ol>
<p>For augmentations we utilized a mix of ShiftScaleRotate, HorizontalFlip, RandomBrightnessContrast and Cutout. The elaborated models can slightly differ in minor hyperparameter settings.</p>
<h4>Yolo models</h4>
<p>We trained several yolo models, where 3 of 4 use opacity as the only class and have no bounding box if the image had none as its label. For one yolo model, however, we created a full-image bounding box with label “none” as a second label.</p>
<ol>
<li>yolov5s (512) opacity</li>
<li>yolov5x (512) opacity</li>
<li>yolov5m (640) opacity</li>
<li>yolov5s (512) opacity + none</li>
</ol>
<h4>None predictions</h4>
<p>One peculiarity of this competition was to figure out how to best predict none for the detection part. Several competitors decided to have a separate binary classification model, but this appeared to be quite limiting to us. So in the end, we decided to make none predictions by combining image and study predictions. In detail, we calculate:</p>
<p>none = 0.5 * (1-max(box_confidence)) + 0.5 * negative + 0.2 * atypical</p>
<p>The reason for including atypical in the prediction for none is the fact that “Bounding boxes were not placed on pleural effusions, or pneumothoraces,” and these fall into the category of atypical.</p>
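<p>A minimal sketch of how this none score could be assembled per image; the 0.5/0.5/0.2 weights come from the formula above, while the submission-string helper around it is illustrative:</p>
<pre><code>def none_confidence(box_confs, p_negative, p_atypical):
    max_box = max(box_confs) if box_confs else 0.0
    return 0.5 * (1.0 - max_box) + 0.5 * p_negative + 0.2 * p_atypical

def image_prediction_string(boxes, confs, p_negative, p_atypical):
    # predicted opacity boxes plus the competition's "none ... 0 0 1 1" pseudo-box
    parts = [f"opacity {c:.4f} {x1} {y1} {x2} {y2}"
             for (x1, y1, x2, y2), c in zip(boxes, confs)]
    parts.append(f"none {none_confidence(confs, p_negative, p_atypical):.4f} 0 0 1 1")
    return " ".join(parts)
</code></pre>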
<h4>Blending</h4>
<p>We blended all outputs using WBF with IOU 0.5. To speed up blending, we only take the top75 boxes for each model output for a given image.</p>
<h3>A note on hybrid models</h3>
<p>Originally, we started training only hybrid models, i.e. combining study and image predictions in one EfficientDet model where we re-use the EfficientNet encoder for both detection and classification. This worked really well, and also has a good built-in regularization effect for both detections and classifications. On the study level this actually worked similarly to the segmentation regularization. However, as we got tiny improvements after splitting up the models, we stuck to that. But a more streamlined solution could totally use these hybrid models with very similar expected results.</p>
<h3>A note on external data</h3>
<p>Reading other solution posts, it appears we have, not for the first time, wasted some further improvements by not utilizing external data. We tried a little bit to incorporate ChestX, without much success, but have not dug too deep and not approached other datasets. It appears to be specifically useful in Chest XRay competitions to use all data available, even if you do not see immediate gains on CV. This might be specifically true as it is hard to employ mixup/cutmix techniques to multiply the actual available data. Sometimes data also stems from similar/same datasets and small effects might also only appear on the leaderboard. </p>
<h3>Links</h3>
<p>Training: <a href="https://github.com/ChristofHenkel/kaggle-siim-covid-detection-9th-place" target="_blank">https://github.com/ChristofHenkel/kaggle-siim-covid-detection-9th-place</a><br>
Inference: <a href="https://www.kaggle.com/ilu000/siim-covid-detection-9th-place" target="_blank">https://www.kaggle.com/ilu000/siim-covid-detection-9th-place</a><br>
Video: <a href="https://www.youtube.com/watch?v=lRJyB-gi-nk" target="_blank">https://www.youtube.com/watch?v=lRJyB-gi-nk</a></p>
<p>No, there is not really time to go into detail here. It probably also needs a few iterations to get the right settings.</p>
<p>At least you pseudo-tagged; we didn't even do that 😄</p>
<p>Interesting, did the ones without pseudo score better in private? Was one of your selected ones without?</p>
<p>Actually the hosts posted this also somewhere :)</p>
<p>Yes we did, but it didn't give any noticeable improvements. In general, image size didn't seem to be important in this competition as the models (over)fit the data very easily.</p>
<p>Yes 40 models sounds about right.<br>
We blended all 40 models in the end with WBF.</p> |
SIIM-FISABIO-RSNA COVID-19 Detection | Private 19th Solution - Rabbit part | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-FISABIO-RSNA COVID-19 Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you!<br>
Actually the score is almost the same, but we can save more inference time by predicting 5 classes at the same time rather than 4+1.</p>
<p><a href="https://www.kaggle.com/drtausamaru/final-submission-ver001?scriptVersionId=70376836" target="_blank">https://www.kaggle.com/drtausamaru/final-submission-ver001?scriptVersionId=70376836</a><br>
final submission notebook.</p>
<p>Thanks for the comment.<br>
About +0.002 I guess.</p>
<p>First of all, congratulations to all the winners, and thanks so much for hosting such an interesting competition.<br>
Also really appreciate for my teammates, <a href="https://www.kaggle.com/ryunosukeishizaki" target="_blank">@ryunosukeishizaki</a> and <a href="https://www.kaggle.com/rinnqd" target="_blank">@rinnqd</a> !!</p>
<p>I'm an anesthesiologist in Japan.<br>
COVID-19 patients are increasing here, and I am faced with many chest X-rays.<br>
Classifying and detecting COVID-19 pneumonia by machine learning is quite interesting and meaningful, which is my motivation to participate in this task.</p>
<p>Here I’ll share my part of the solution.</p>
<p>What I did consists of mainly 3 parts: 2 types of baseline classification modeling, opacity detection modeling, and post-processing.</p>
<p>All experiments and submissions were done using 512x512 images.<br>
I selected which images should be used for training based on the discussion (<a href="https://www.kaggle.com/c/siim-covid19-detection/discussion/246597)" target="_blank">https://www.kaggle.com/c/siim-covid19-detection/discussion/246597)</a>, and picked up 1 image_id per study_id (if there are several images per study_id, only the image with a bbox is used)<br>
Cross Validation: multilabel stratified kfold</p>
<h1>Classification</h1>
<p>Models with an aux loss were clearly better than ones without it.<br>
My baseline models are 2 types of classification EfficientNets with 5 classes (Negative, Typical, Indeterminate, Atypical, and None).</p>
<p>・Type1</p>
<pre><code>class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        ###blocks###
        self.logit = nn.Linear(n_features, 5)
        self.mask = …
    def forward(self, x):
        ###blocks###
        mask = self.mask(x)
        logit = self.logit(x)
        return logit, mask
</code></pre>
<p>・Type2</p>
<pre><code>class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        ###blocks###
        self.logit1 = nn.Linear(n_features, 4)
        self.logit2 = nn.Sequential(
            nn.Softmax(),
            nn.Linear(4, 4),
            nn.ReLU(),
            nn.Linear(4, 1))
        self.mask = …
    def forward(self, x):
        ###blocks###
        logit1 = self.logit1(x)
        logit2 = self.logit2(logit1)
        return logit1, logit2, mask
</code></pre>
<p>I made a custom loss function:<br>
CCE * 0.8 (4 class loss) + BCE * 0.1 (1 class loss) + BCE * 0.1 (segmentation_loss)</p>
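<p>A minimal PyTorch sketch of this weighted loss (tensor shapes and the exact segmentation target format are assumptions):</p>
<pre><code>import torch.nn as nn

class CovidLoss(nn.Module):
    """0.8 * CCE (4-class) + 0.1 * BCE (1-class) + 0.1 * BCE (segmentation), as described above."""
    def __init__(self, w_cls4=0.8, w_cls1=0.1, w_seg=0.1):
        super().__init__()
        self.cce = nn.CrossEntropyLoss()
        self.bce = nn.BCEWithLogitsLoss()
        self.w_cls4, self.w_cls1, self.w_seg = w_cls4, w_cls1, w_seg

    def forward(self, logit4, logit1, mask_logit, target4, target1, target_mask):
        return (self.w_cls4 * self.cce(logit4, target4)
                + self.w_cls1 * self.bce(logit1, target1)
                + self.w_seg * self.bce(mask_logit, target_mask))
</code></pre>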
<p>Augmentations: horizontalflip, shiftscalerotate, randombrightness, cutout</p>
<p>After the team merge, they improved my baseline models using a pretraining method!<br>
Also <a href="https://www.kaggle.com/ryunosukeishizaki" target="_blank">@ryunosukeishizaki</a> tried various types of EfficientNets, which was necessary for the ensemble. Thank you.<br>
Final sub includes 4x efnetb3 and 3x efnetv2l.</p>
<p><strong>Final Ensemble CV: 0.521 / 0.833 (5 folds 7 models ensemble)</strong><br>
<strong>Public: ???</strong></p>
<h1>Detection</h1>
<p>Because this competition was my first time tackling object detection, I spent a lot of time here.<br>
4 models with 5 folds each were finally prepared: YOLOv5x, EfficientDetD3, EfficientDetD4, EfficientDetD5.<br>
After pretraining all models on the pneumonia detection dataset (<a href="https://www.kaggle.com/c/rsna-pneumonia-detection-challenge)" target="_blank">https://www.kaggle.com/c/rsna-pneumonia-detection-challenge)</a>, each model was trained using the train data I picked above.<br>
YOLO augmentations: degrees, translate, scale, fliplr<br>
Efdet augmentations: horizontalflip, randombrightness, cutout</p>
<p>The CV of each model is as follows (5-fold mean):<br>
YOLOv5x: 0.5428 / 1.0 (0.0904 / 0.1666)<br>
EfdetD3: 0.5526 / 1.0 (0.0921 / 0.1666)<br>
EfdetD4: 0.5436 / 1.0 (0.0906 / 0.1666)<br>
EfdetD5: 0.5375 / 1.0 (0.0895 / 0.1666)</p>
<p>For the ensemble, I used WBF to maximize the CV score.<br>
While converting the bbox info into the submission string, I removed unnecessary bboxes: too small, too large, centered near the image edges, and very wide boxes spanning the midline (a filtering sketch follows the rules below).<br>
I set these thresholds based on the train data to maximize the CV score.</p>
<p>・small box<br>
((xmax - xmin) * (ymax - ymin)) / (img_width * img_height) < 0.001</p>
<p>・large box<br>
((xmax - xmin) * (ymax - ymin)) / (img_width * img_height) > 0.362</p>
<p>・edge box<br>
((xmax + xmin)/2 < img_width * 0.05) | ((xmax + xmin)/2 > img_width * 0.95) | ((ymax + ymin)/2 < img_height * 0.05) | ((ymax + ymin)/2 > img_height * 0.95)</p>
<p>・horizontal box<br>
(xmax - xmin) > img_width * 0.6</p>
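<p>Putting the four rules together, a small helper (a sketch that simply restates the thresholds above) could look like this:</p>
<pre><code>def keep_box(xmin, ymin, xmax, ymax, img_width, img_height):
    """Return False if a box matches any of the removal rules listed above."""
    area_frac = ((xmax - xmin) * (ymax - ymin)) / (img_width * img_height)
    cx, cy = (xmax + xmin) / 2, (ymax + ymin) / 2
    too_small = area_frac < 0.001
    too_large = area_frac > 0.362
    near_edge = (cx < img_width * 0.05) or (cx > img_width * 0.95) \
        or (cy < img_height * 0.05) or (cy > img_height * 0.95)
    too_wide = (xmax - xmin) > img_width * 0.6
    return not (too_small or too_large or near_edge or too_wide)
</code></pre>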
<p><strong>Final Ensemble CV: 0.0981 / 0.1666</strong><br>
<strong>Public: 0.100</strong></p>
<h1>Post-Processing</h1>
<p>I hypothesized that the classification and detection results should be mutually related.<br>
test['Negative'] *= (1 - test['max_bbox_confidence'])<br>
test['None'] *= (1 - test['max_bbox_confidence'])<br>
This improved the CV by 0.002.</p>
<p>Also, the Negative and None values were joined into one value per image.<br>
nn_values = test['Negative'] * 0.55 + test['None'] * 0.45<br>
test['Negative'] = nn_values<br>
test['None'] = nn_values<br>
This improved the CV by 0.001.</p>
<p><strong>Final Ensemble CV: 0.525 / 0.833 (5 folds 7 models ensemble)</strong><br>
<strong>Public: 0.537</strong></p>
<h1>Final Submission</h1>
<p>CV: 0.525+0.098=0.623<br>
Public: 0.637<br>
Private: 0.621<br>
I was really confused by the debugging process of using 2 timm versions inside the same notebook for more than a week, and <a href="https://www.kaggle.com/rinnqd" target="_blank">@rinnqd</a> kindly gave me a solution. Thank you.</p>
<h1>What didn’t work for me</h1>
<p>・High resolution (1024x1024) for both classification and detection<br>
・Pseudo labeling including external and public test<br>
・Denoising & Relabeling<br>
・Mixup<br>
・BCELoss for classification<br>
・Focal loss<br>
・Lovasz hinge loss<br>
・Label smoothing<br>
・Image cropping around lung area using detection output<br>
・lung segmentation with opacity<br>
・Using detection result as input too (4 or 5 or 6 ch)<br>
・4 class detection modeling to postprocess classification result</p> |
SIIM-FISABIO-RSNA COVID-19 Detection | Public 59th / Private 78th Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-FISABIO-RSNA COVID-19 Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Acknowledgments</h1>
<p>Thanks to Kaggle and the hosts for holding this exciting competition! I'm new to object detection tasks, and I've learned many things in this competition. Also thanks to all participants especially my teammate <a href="https://www.kaggle.com/schulta" target="_blank">@schulta</a>!</p>
<h1>Detection</h1>
<p>We used two models, EfficientDet(<a href="https://github.com/rwightman/efficientdet-pytorch" target="_blank">https://github.com/rwightman/efficientdet-pytorch</a>) and Yolo(<a href="https://github.com/ultralytics/yolov5" target="_blank">https://github.com/ultralytics/yolov5</a>)<br>
Scores of each model were as follows;</p>
<table>
<thead>
<tr>
<th>model</th>
<th>image size</th>
<th>AP</th>
</tr>
</thead>
<tbody>
<tr>
<td>EfficientDetD7</td>
<td>512</td>
<td>0.2858</td>
</tr>
<tr>
<td>Yolov5x</td>
<td>512</td>
<td>-</td>
</tr>
</tbody>
</table>
<p>What worked for us was as follows;</p>
<ul>
<li>Train models to predict only opacity class. (We predicted none class by classification models)</li>
<li>Ensemble bboxes by WBF. IOU threshold was 0.65.</li>
<li>TTA</li>
<li>SAM optimizer</li>
<li>Cutmix</li>
<li>Modify the confidence of bboxes by the outputs of the 2-class models like this:</li>
</ul>
<blockquote>
<p>confidence = confidence * (1 - Probability of none) ** 0.3</p>
</blockquote>
<p>Here, 0.3 is an arbitrary value chosen to optimize the OOF score.</p>
<h1>Classification</h1>
<p>CV Scores and details of our models is as follows;</p>
<table>
<thead>
<tr>
<th>model</th>
<th>image size</th>
<th>CV</th>
</tr>
</thead>
<tbody>
<tr>
<td>EfficientNet V2L</td>
<td>512</td>
<td>0.5613</td>
</tr>
<tr>
<td>ResNet200D</td>
<td>512</td>
<td>0.5620</td>
</tr>
</tbody>
</table>
<p>We used two types of classification models: a 4-class classification model and a 2-class classification model. The former classifies Negative, Typical, Indeterminate and Atypical, which make up the study-level classes. The latter classifies the presence or absence of boxes in an image, i.e. the image-level class. We trained both models using the 2-step approach that was used in the RANZCR competition (<a href="https://www.kaggle.com/c/ranzcr-clip-catheter-line-classification/discussion/205243" target="_blank">link</a>). </p>
<ul>
<li>1st step<br>
We created annotation masks from the image-class bboxes and trained teacher models on the annotated images.<br>
The following is an example of an annotated image.<br>
<img alt="スクリーンショット 2021-08-10 19 34 12" src="https://user-images.githubusercontent.com/70050531/128852437-b8c67fa5-6d13-43a7-938b-2e1236c88e42.png"></li>
<li>2nd step<br>
We trained student models without annotated images. In this step, an auxiliary loss was used: the mean squared error between the final feature maps of the student and teacher models (a sketch of this loss follows the list).</li>
<li>3rd step<br>
We fine-tuned student model without auxiliary loss.</li>
</ul>
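<p>As a rough sketch of the 2nd-step objective (the classification term and the auxiliary weight are assumptions; only the feature-map MSE is taken from the description above):</p>
<pre><code>import torch.nn.functional as F

def student_step_loss(student_logits, student_feat, teacher_feat, targets, aux_weight=1.0):
    """Classification loss plus an MSE term pulling the student's final feature map
    towards the frozen teacher's feature map."""
    cls_loss = F.binary_cross_entropy_with_logits(student_logits, targets)
    aux_loss = F.mse_loss(student_feat, teacher_feat.detach())
    return cls_loss + aux_weight * aux_loss
</code></pre>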
<p>Other methods we used were as follows;</p>
<ul>
<li>TTA</li>
<li>SAM optimizer</li>
</ul>
<p>What didn't work was as follows;</p>
<ul>
<li>mixup</li>
<li>external data</li>
<li>Dual Attention Head(paper: <a href="https://arxiv.org/pdf/1809.02983.pdf" target="_blank">https://arxiv.org/pdf/1809.02983.pdf</a>)</li>
<li>increasing the image size than 512x512</li>
<li>increasing batch size</li>
</ul>
<p>If you have a question about this solution, feel free to ask!<br>
Thank you for reading!<br>
(I'll add overview figure soon.)</p>
<p>Thank you for your reply!!!<br>
I'm very sorry but I'm very busy after this competition, so I have no time to modify my code. (hide my personal information etc.)<br>
I used these 2 notebooks as references.<br>
<a href="https://www.kaggle.com/shonenkov/training-efficientdet" target="_blank">https://www.kaggle.com/shonenkov/training-efficientdet</a><br>
<a href="https://www.kaggle.com/its7171/2class-object-detection-training" target="_blank">https://www.kaggle.com/its7171/2class-object-detection-training</a><br>
If you have a question, feel free to ask :)</p>
<p>Very sorry for the late reply…<br>
The score of my 4-class model increased from 0.395 to 0.401 (Public score) with the SAM optimizer, which was very surprising :)<br>
Yeah, I feel exactly the same way. I saw an improvement in every competition I took part in just by using the SAM optimizer.</p>
SIIM-FISABIO-RSNA COVID-19 Detection | Some Useful external Datasets Used helped boost score | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-FISABIO-RSNA COVID-19 Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Below are some of the datasets I prepared to train models with more data. <br>
I regret that I didn't use more of the RSNA data; that would have helped move the LB up by 0.001 and into the gold range. </p>
<p>If anyone wants to use these as part of post-competition practice, feel free to check whether they are useful for you.</p>
<p><a href="https://www.kaggle.com/jaideepvalani/bimcv-dataset" target="_blank">https://www.kaggle.com/jaideepvalani/bimcv-dataset</a> </p>
<p><a href="https://www.kaggle.com/jaideepvalani/dataset" target="_blank">https://www.kaggle.com/jaideepvalani/dataset</a> -Vinbig Supposedly Atypical case<br>
<a href="https://www.kaggle.com/jaideepvalani/vinbig-indeterminate" target="_blank">https://www.kaggle.com/jaideepvalani/vinbig-indeterminate</a> -Vinbig Supposedly indeterminate </p>
<p><a href="https://www.kaggle.com/jaideepvalani/ricord-downscale" target="_blank">https://www.kaggle.com/jaideepvalani/ricord-downscale</a> -downscaled by 2<br>
<a href="https://www.kaggle.com/jaideepvalani/ricord-pl-035-045" target="_blank">https://www.kaggle.com/jaideepvalani/ricord-pl-035-045</a> -PLs</p>
<p><a href="https://www.kaggle.com/jaideepvalani/rsna-normal-images" target="_blank">https://www.kaggle.com/jaideepvalani/rsna-normal-images</a> -Normal Images using more of these should have helped more I used only 500 of these,still get boost of 0.001</p>
<p><a href="https://www.kaggle.com/jaideepvalani/vin-big-opacity" target="_blank">https://www.kaggle.com/jaideepvalani/vin-big-opacity</a> -Opacity Class of Vin Big</p>
<p><a href="https://www.kaggle.com/jaideepvalani/rsna-p-train" target="_blank">https://www.kaggle.com/jaideepvalani/rsna-p-train</a> -RSNA P images</p>
<p><a href="https://www.kaggle.com/jaideepvalani/rsna-meta-data" target="_blank">https://www.kaggle.com/jaideepvalani/rsna-meta-data</a> -csv containing the list of Hi and Med Prob Pneumonia images as per list uploaded by host</p>
<p><a href="https://www.kaggle.com/jaideepvalani/test-ds-pl" target="_blank">https://www.kaggle.com/jaideepvalani/test-ds-pl</a> -Test data PL and downscaled images</p> |
SIIM-ISIC Melanoma Classification | 105 place solution (0.9413 on LB). | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>Summary</h2>
<ol>
<li>Ensemble of 6 models gave a Private LB of 0.9413; the ensemble code is <a href="https://www.kaggle.com/mpsampat/final-simple-oof-ensembling-methods-6-models" target="_blank">here</a></li>
<li>The models are just simple modifications of <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> <a href="https://www.kaggle.com/cdeotte/triple-stratified-kfold-with-tfrecords" target="_blank">baseline</a></li>
<li>We will share these later</li>
</ol>
<h2>Details:</h2>
<ol>
<li><p>We used <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> <a href="https://www.kaggle.com/cdeotte/triple-stratified-kfold-with-tfrecords" target="_blank">baseline</a> as a starter. </p></li>
<li><p>Then we trained various different models. In the end we had 12 different models (we lost some because I forgot to save oof.csv :( ). </p></li>
<li><p>Then, for the ensemble, it seemed the number of models combined was important. </p></li>
<li><p>So we ranked the 12 models by their OOF_AUC. </p></li>
<li><p>The ensemble was done as per the <a href="https://www.kaggle.com/steubk/simple-oof-ensembling-methods-for-classification" target="_blank">starter</a> kernel provided by <a href="https://www.kaggle.com/steubk" target="_blank">@steubk</a></p></li>
<li><p>This was modified for 4,5,6,8,9,12 models. </p></li>
<li><p>This is version with <a href="https://www.kaggle.com/mpsampat/final-simple-oof-ensembling-methods-6-models" target="_blank">6 models</a> which gave us the Private LB score of 0.9413. </p></li>
<li><p>Then we found the OOF_AUC with Bayesian Optimization looked promising. </p></li>
<li><p>the difference between OOF_AUC_Bayesian_Optimization and LB was not very large. </p></li>
<li><p>For an ensemble of 6 models we should have run 12*11*10*9*8*7 combinations; </p></li>
<li><p>but I did not have time to run this, so I picked the top 6 models and a few different combinations of the top 6 models. </p></li>
<li><p>The summary is in this <a href="https://docs.google.com/spreadsheets/d/1Gw7xJwkxxp-61YHHOEkS2DImxD4jkQ5HYRi-0j72QJM/edit?usp=sharing" target="_blank">spreadsheet</a></p></li>
<li><p>Heuristics used to select models: </p></li>
<li><p>We have 4 measures of ensemble AUC: <br>
OOF_avg_auc, OOF_rank_auc, OOF_pow_auc, OOF_bo_auc (a sketch computing these follows the list)</p></li>
<li><p>I took the average of these 4 measures. <strong>If a model is good, then my heuristic is that all 4 OOF_AUC measures should be high.</strong></p></li>
<li><p>I also prefer the highest OOF_bo_AUC or the highest OOF average <strong>with the lowest number of models.</strong></p></li>
<li><p>We did not finish a full, extensive analysis, but the ensemble of 6 models scored very high. <br>
Lastly, the difference between OOF_bo_AUC and Public LB is quite low for this combination.</p></li>
</ol> |
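<p>Below is a sketch of how the first three of these measures could be computed from a matrix of OOF predictions (one column per model); the Bayesian-optimisation variant would additionally search blend weights against OOF AUC. Function and variable names are hypothetical.</p>
<pre><code>import numpy as np
from scipy.stats import rankdata
from sklearn.metrics import roc_auc_score

def oof_ensemble_scores(oof_preds, y_true, power=2.0):
    """oof_preds: (n_samples, n_models) array of out-of-fold predictions."""
    avg = oof_preds.mean(axis=1)
    rank = np.column_stack([rankdata(oof_preds[:, j]) for j in range(oof_preds.shape[1])]).mean(axis=1)
    pow_avg = (oof_preds ** power).mean(axis=1)
    return {
        "OOF_avg_auc": roc_auc_score(y_true, avg),
        "OOF_rank_auc": roc_auc_score(y_true, rank),
        "OOF_pow_auc": roc_auc_score(y_true, pow_avg),
    }
</code></pre>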
SIIM-ISIC Melanoma Classification | 11th Place Solution Writeup | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I agree. In this competition, it was particularly relevant since some models used meta-data that demonstrated distribution differences between train and test samples. Excluding features -- or models -- that rely on such information helps to reduce the sampling bias.</p>
<p>Thanks <a href="https://www.kaggle.com/piantic" target="_blank">@piantic</a>! This is correct. We also tried a couple of models initializing from noisy-student weights, but this did not seem to help much.</p>
<p>Thank you <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>! We tried starting from both random and imagenet weights in the pre-trained model and tracked cross-entropy loss on 1/15 of 2020 data as validation. The best results were achieved when starting from imagenet, so we used these starting weights in the pre-trained model.</p>
<p>First of all, I would like to thank my teammates who all did a great job: <a href="https://www.kaggle.com/titericz" target="_blank">@titericz</a>, <a href="https://www.kaggle.com/robikscube" target="_blank">@robikscube</a>, <a href="https://www.kaggle.com/skgone123" target="_blank">@skgone123</a> and <a href="https://www.kaggle.com/coolcoder22" target="_blank">@coolcoder22</a>. This was a great learning and collaboration experience and exchange of ideas! Also thanks to Kaggle and competition organizers for setting up this interesting competition!</p>
<p>In our solution, we were trying to build a large ensemble of diverse models with a goal to be more robust against overfitting and resist the future shakeup. Results have shown that our strategy worked well: trusting CV and diversity was important to achieve a good result on private LB.</p>
<h2>Data</h2>
<p>We used 5-fold cross-validation with the data partitioning scheme and <code>tfrec</code> files that were kindly provided by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> in his <a href="https://www.kaggle.com/cdeotte/triple-stratified-kfold-with-tfrecords" target="_blank">great notebook</a>. We used 2020 data for validation and different combinations of 2017/18/19/20 data for training. </p>
<p>We also applied image segmentation to detect and crop the lesions and constructed the cropped data set. Several models in the ensemble were trained using the cropped images.</p>
<p>The malignant images in the training folds were upsampled for some of the models. </p>
<h2>Image Processing</h2>
<p>We considered a wide range of augmentations in different models:</p>
<ul>
<li>horizontal/vertical flips</li>
<li>rotation</li>
<li>circular crop (a.k.a <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/159476" target="_blank">microscope augmentation</a>)</li>
<li>dropout</li>
<li>zoom/brightness adjustment</li>
<li>color normalization</li>
</ul>
<p>On the TTA stage, we used the same augmentations as on the training stage and usually varied the number of augmentations between 10 and 20.</p>
<h2>Models</h2>
<p>We were focusing on EfficientNet models and trained a variety of architectures on different image sizes. Most of the final models were using <code>B5</code> with <code>512x512</code> images. We experimented with adding attention layers and meta-data for some models and played with the learning rates and label smoothing parameters. We also explored Densenet and Inception architectures but observed worse performance.</p>
<p>Some models were initialized from the pre-trained weights. To get the pre-trained weights, we fitted CNNs on the complete train + test + external data to predict <code>anatom_site_general_challenge</code> as a surrogate label. Initializing from the pre-trained weights instead of the Imagenet weights improved our CV. I set up a <a href="https://www.kaggle.com/kozodoi/pre-training-on-full-data-with-surrogate-labels" target="_blank">notebook</a> demonstrating the pre-training approach.</p>
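<p>A minimal sketch of the surrogate-label idea, assuming a timm EfficientNet backbone (the linked notebook may differ in framework and details): first train the backbone to predict the anatomical site on all available images, then load everything except the classification head into the melanoma model.</p>
<pre><code>import timm

# Step 1 (hypothetical): train this model on train + test + external images with
# anatom_site_general_challenge as the target; 6 site categories is an assumption.
site_model = timm.create_model("tf_efficientnet_b5", pretrained=True, num_classes=6)
# ... training loop on the surrogate labels goes here ...

# Step 2: initialise the melanoma classifier from the pre-trained backbone,
# dropping the surrogate classification head.
melanoma_model = timm.create_model("tf_efficientnet_b5", pretrained=False, num_classes=1)
backbone_state = {k: v for k, v in site_model.state_dict().items()
                  if not k.startswith("classifier")}
melanoma_model.load_state_dict(backbone_state, strict=False)  # head stays randomly initialised
</code></pre>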
<p>The best single model was <code>EN-B5</code> trained on <code>384x384</code> with attention and meta features, which achieved private LB of <code>0.9380</code>.</p>
<h2>Ensembling</h2>
<p>By the end of the competition, the size of our ensemble reached 91 models. To address potential train/test inconsistencies we filtered models using two criteria: </p>
<ul>
<li>Removing models where the mean correlation of predictions with the other models demonstrated a large gap between OOF/test predictions</li>
<li>Removing models that ranked high in the adversarial validation model</li>
</ul>
<p>This reduced our set to 58 models with OOF AUC in between <code>0.8751</code> and <code>0.9377</code>. Based on this set of models, we decided to go with three diverse submissions:</p>
<ol>
<li>Conservative solution aimed at being robust against overfitting: ranked average of the top-performing models. This reached: <code>CV 0.9474, Public 0.9521, Private 0.9423</code></li>
<li>Solution that achieved the best CV. We blended two ensembles: (i) average of top-k model predictions and (ii) stacking with meta-features and prediction stats across the models (min, max, range). The <code>k</code> and ensemble weights were optimized on CV: <code>CV 0.9532, Public 0.9576, Private 0.9459</code></li>
<li>Solution that achieved the best public LB. Here, we blended (2) with some of the best public LB submissions including public notebooks. This resulted in: <code>Public 0.9690, Private 0.9344</code></li>
</ol>
<p>The second solution delivered the best private LB performance and secured us the 11th place. </p>
<h2>Conclusion</h2>
<p>As expected, aiming for a high public LB score in this competition was dangerous due to the small size of the public test set and high class imbalance. Trusting CV and building a diverse set of models helped us to survive the shakeup.</p>
<p><a href="https://www.kaggle.com/mrinath" target="_blank">@mrinath</a> sorry for a late reply! Imagine you have two models. The correlation between their OOF predictions is 0.8, but the correlation between their test predictions is 0.5. The gap is therefore 0.3. A large gap means that model predictions behave differently between local validation and test set, so it is possible that one of the models overfits the local data. For instance, this might happen if a feature on which one of the models heavily relies has different distribution between train and test data. </p>
<p>Now imagine you have many models and compute their mean gaps between the pairwise correlations. Filtering models by such correlation gaps might help to remove some dangerous models :)</p>
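<p>A small sketch of this correlation-gap filter (the choice of Spearman correlation is an assumption; any correlation measure applied consistently to OOF and test predictions works the same way):</p>
<pre><code>import numpy as np
import pandas as pd

def correlation_gaps(oof_df: pd.DataFrame, test_df: pd.DataFrame) -> pd.Series:
    """Mean pairwise correlation gap per model; oof_df and test_df hold one
    prediction column per model, with identical column names."""
    gap = (oof_df.corr(method="spearman") - test_df.corr(method="spearman")).abs()
    np.fill_diagonal(gap.values, np.nan)
    return gap.mean(axis=1).sort_values(ascending=False)  # large gap = candidate for removal
</code></pre>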
<p>I don’t think so. One image is too little to make any judgements.</p> |
SIIM-ISIC Melanoma Classification | 126th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yes, LookAhead has slight improvement in CV as compared to Adam alone in all of our experiments.</p>
<p>In addition to Nirjhar's comment:</p>
<ol>
<li>For basic augs, 30 epochs with early-stopping patience of 10; then the model is finetuned for around 10 epochs with advanced augs. There is a consistent CV increase of around 0.001-0.002 in the second training cycle. No, we didn't change the model much in between.</li>
<li>We did a lot of experiments with different schedulers and picked the best based on CV. The first model converges somewhere between 20-30 epochs and the 2nd model somewhere between 3-10 epochs.</li>
<li>From the GradCAM results we are sure that the model pays a lot of attention to the corners of images, so we added extra augs to improve that. </li>
</ol>
<p>First of all, I want to thank Kaggle and the organizers for hosting such an interesting competition.<br>
And I would like to thank my teammates <a href="https://www.kaggle.com/iafoss" target="_blank">@iafoss</a> <a href="https://www.kaggle.com/phoenix9032" target="_blank">@phoenix9032</a> <a href="https://www.kaggle.com/nischaydnk" target="_blank">@nischaydnk</a> <a href="https://www.kaggle.com/shivamcyborg" target="_blank">@shivamcyborg</a> for such an interesting competition journey. And congratulations to all the winners!</p>
<p>It's been a great competition, and my team has spent a lot of time and computational resources on it.</p>
<h1>Brief summary of our solution:</h1>
<ul>
<li>Our final solution is an ensemble of around 150 models (yes you heard it right <strong>150 models</strong>), 140 trained on Pytorch and 10 trained on TF.</li>
<li>With a simple rank blend of the Pytorch and TF models at a 50-50 ratio (see the sketch after this list), we were able to get 0.9528 public LB and 0.9409 private LB.</li>
</ul>
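<p>The 50-50 rank blend mentioned above is simple to reproduce; the sketch below assumes two submission DataFrames with <code>image_name</code> and <code>target</code> columns.</p>
<pre><code>from scipy.stats import rankdata

def rank_blend(pytorch_sub, tf_sub, w_pt=0.5, w_tf=0.5):
    """Blend two submissions by averaging the ranks of their predictions."""
    merged = pytorch_sub.merge(tf_sub, on="image_name", suffixes=("_pt", "_tf"))
    merged["target"] = (w_pt * rankdata(merged["target_pt"])
                        + w_tf * rankdata(merged["target_tf"]))
    merged["target"] /= merged["target"].max()  # rescale; AUC only depends on the ordering
    return merged[["image_name", "target"]]
</code></pre>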
<h2>Pytorch models overview:</h2>
<ul>
<li><p>For CV, we have used <a href="https://www.kaggle.com/cdeotte/triple-stratified-kfold-with-tfrecords" target="_blank">Triple Stratified Leak-Free KFold CV</a> shared by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>.</p></li>
<li><p>For scheduler we have used both <code>CosineAnnealingLR</code> and <code>ReduceLROnPlateau</code>.</p></li>
<li><p>For optimizer we have used <code>Lookahead optimizer</code> on top of Adam.</p></li>
<li><p>Loss : focal , rankloss , bce</p></li>
<li><p>We trained 140 models with different base models ranging from <code>EffB0-EffB7</code>, <code>ResNet</code>, <code>Resnext</code>, <code>DenseNet</code> and a lot more, on various image sizes (256, 384, 512, 768) and even a few with 1024 image size.</p></li>
<li><p>Thanks to <a href="https://www.kaggle.com/iafoss" target="_blank">@iafoss</a> and the huge experience he brought to the team, we have also trained a few double-scale models, transformer models, and models with bigger heads.</p></li>
<li><p>For augmentations we have used 2 types of augmentations: basic augs and then advanced augs.</p></li>
<li><p>Most of the Pytorch models are trained 2 times, first with basic augs and then later finetuned with <br>
advanced augs.</p></li>
</ul>
<h3>Pytorch Augmentations:</h3>
<p><strong>Basic Augs:</strong></p>
<pre><code># torchvision-based basic augmentations; DrawHair is a custom transform defined elsewhere,
# and IMG_SIZE is set in the training config
import torchvision.transforms as transforms

train_transform_basic = transforms.Compose([
DrawHair(),
transforms.RandomResizedCrop(size=IMG_SIZE, scale=(0.7, 1.0)),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.ColorJitter(brightness=32. / 255.,saturation=0.5),
transforms.ToTensor(),
transforms.RandomErasing(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],std=[0.229, 0.224, 0.225])
])
</code></pre>
<p><strong>Advanced Augs:</strong></p>
<pre><code># albumentations-based advanced augmentations; ColorConstancy, DrawScaleTransform,
# MicroscopeTransform and DrawHairTransform are custom transforms defined elsewhere
import random
import cv2
import albumentations as A

SKIN_TONES = [
[75, 57, 50],
[180,138,120],
[90,69,60],
[105,80,70],
[120,92,80],
[135,100,90],
[150,114,100],
[165,126,110],
[195,149,130],
]
train_transform_albument = A.Compose([
ColorConstancy(p=1),
A.OneOf([
A.RandomSizedCrop(min_max_height=(int(IMG_SIZE * 0.8), int(IMG_SIZE * 0.8)), height=IMG_SIZE, width=IMG_SIZE),
A.RandomSizedCrop(min_max_height=(int(IMG_SIZE * 0.85), int(IMG_SIZE * 0.85)), height=IMG_SIZE, width=IMG_SIZE),
A.RandomSizedCrop(min_max_height=(int(IMG_SIZE * 0.9), int(IMG_SIZE * 0.9)), height=IMG_SIZE, width=IMG_SIZE),
A.RandomResizedCrop(height=IMG_SIZE, width=IMG_SIZE, interpolation=cv2.INTER_LINEAR),
A.RandomResizedCrop(height=IMG_SIZE, width=IMG_SIZE, interpolation=cv2.INTER_CUBIC),
A.RandomResizedCrop(height=IMG_SIZE, width=IMG_SIZE, interpolation=cv2.INTER_LANCZOS4),
], p=0.7),
A.OneOf([
DrawScaleTransform(p=0.2, img_size=IMG_SIZE),
MicroscopeTransform(p=0.1),
DrawHairTransform(n_hairs=6)
], p=0.5),
A.HorizontalFlip(p=0.5),
A.VerticalFlip(p=0.5),
A.RandomRotate90(p=0.5),
A.HueSaturationValue(hue_shift_limit=20, sat_shift_limit=30, val_shift_limit=20, p=0.5),
A.OneOf([
A.Cutout(num_holes=1, max_h_size=64, max_w_size=64, fill_value=random.choice(SKIN_TONES)),
], p=0.5),
A.OneOf([
A.Cutout(num_holes=1, max_h_size=64, max_w_size=64, fill_value=random.choice(SKIN_TONES)),
], p=0.5),
A.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
</code></pre>
<blockquote>
<p>Note: 140 models are trained with different combinations of augs mentioned above.</p>
</blockquote>
<h2>Tensorflow model overview:</h2>
<ul>
<li>The Tensorflow training pipeline is very similar to this kernel by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>, but with different augs.</li>
</ul>
<h3>Analysis</h3>
<ul>
<li>We also have used GradCAM to visualize our Pytorch model results, as shown below:</li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3982638%2F119ac2b52aec821096c4253e157c4582%2FScreenshot%202020-08-18%20at%201.21.27%20PM.png?generation=1597737186953608&alt=media" alt="!"></p>
<h3>Ensemble Techniques:</h3>
<ul>
<li>Stacking: For stacking we have used both LGBM with Bayesian Optimization and Neural Networks.</li>
<li>RankBlend</li>
</ul>
<h1>What didn't Work:</h1>
<ul>
<li>For us we didn't find much improvement using meta features</li>
<li>Tiles: Breaking images into 3x3 tiles.</li>
</ul>
<h1>Started but not pursued:</h1>
<ul>
<li>We tried a SAGAN (self-attention GAN) to generate images, and also the DCGAN used in this paper to generate 256x256 images. However, we didn't pursue this because of a lack of time and a lack of variety in the images. Here are some examples: </li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3982638%2F908f5f953c1813b72ed81bda3c11d96e%2Fimage%20(1).png?generation=1597740337658687&alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3982638%2F106e926e20ec5d29102ed90a2910c8d9%2Fimage.png?generation=1597740394187819&alt=media" alt=""></p>
<ul>
<li><p>We tried to break images into 3x3 tiles and then trained models on top of it. Here are some images:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3982638%2F259b42655ce883c7acef46da77d307b8%2FScreenshot%202020-08-18%20at%202.08.46%20PM.png?generation=1597740065473900&alt=media" alt=""></p></li>
<li><p>We also tried to solve this problem as multiclass classification for Melanoma, Benign, Nevus.</p></li>
<li><p>And a lot more…</p></li>
</ul>
<p><strong>[END]: At the end I would like to say that we have tried a lot of things in this competition; I can't even list all of them here. It's been a great learning opportunity for each one of us, and we tried to make the most out of it.
The results are not what we expected, but I am glad that we worked hard and learned a lot of new things.</strong></p>
SIIM-ISIC Melanoma Classification | 12th Place unexpected result | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>😊😃 My final placement seems written for a comedy bit. Thanks so much again for sharing your awesome work and insights. Very indebted. </p>
<p>Thank you!!</p>
<p>Thanks, I think it speaks to the strength of the TFR notebook. I did follow the recommendations in those notebooks and the discussion posts associated with them, such as experimenting with simpler models and lower resolutions. Regarding the approach, I tried to implement the strategy from a post linked in the TFR notebook about last year's winner's strategy. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3704899%2Fd7b303ee4407cbcb63711df74200dc86%2Flast_year.png?generation=1597754631529458&alt=media" alt=""> So I tried to stack something similar: B0-128, B0-192, B2-192, B1-256, etc. But the models in last year's winning solution did not differ by much, whereas here I found that there were differences among models of as much as 5%. So I included them in the final solution but with really low weights in the ensemble. </p>
<p>Then I also included some models that I had tried out initially when implementing the TFR notebook and that had good performance. In all these models I implemented coarse dropout and upsampling of positive images from Chris's notebook, except for <a href="https://www.kaggle.com/sebastianji/triple-stratified-kfold-js-1024-ef7-fold4" target="_blank">1024x1024 B7</a> due to TPU time constraints on Kaggle. </p>
<p>In the end I had 15 models and posted a question in a <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/174058" target="_blank">discussion thread about how to ensemble them</a>. I got really good input, but I had not saved my OOF prediction files, so that's something I learned. Without a better way to ensemble the models, I ordered them by performance and gave higher weights to the better-performing models.</p>
<p>Thanks much!! </p>
<p>:-) Standing on the shoulders of Grandmasters. </p>
<p>I only used Chris Deotte's <a href="https://www.kaggle.com/cdeotte/triple-stratified-kfold-with-tfrecords" target="_blank">Triple Stratified Kfolds with TFRecords</a>, with 5 folds for all models and consistent specifications, as he suggested. To this notebook I also added his code from <a href="https://www.kaggle.com/cdeotte/tfrecord-experiments-upsample-and-coarse-dropout" target="_blank">TFRecord Experiments - Upsample and Coarse Dropout</a>. He also provided a link to last year's winner's solution; I tried to mimic that, starting from B0 with 128. I'm not sure this was a good idea: the models performed poorly, so I weighted them very low. So, in brief, I made only small changes to his published kernels. </p>
<p>I wanted to use the notebook on how to create 1024x1024 TFRecords <a href="https://www.kaggle.com/sebastianji/triple-stratified-kfold-js-1024-ef7-fold4" target="_blank"></a>, but the competition provided 1024-size TFRecords, so I did not need to create them. Here is my notebook for using <a href="https://www.kaggle.com/sebastianji/triple-stratified-kfold-js-1024-ef7-fold4" target="_blank">1024x1024 TFRecords</a>, but it's essentially the same. </p>
<p>This was my 2nd competition in which I invested time. My strategy was this: I ordered notebooks and discussion posts by Chris Deotte <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> by date and basically implemented each of those kernels and the suggestions from discussion posts. I did the training in Colab and tried to follow his recommendation of first experimenting with lower-resolution images and simpler models. Also, he posted a link to previous years' winners, where they used ensembles of models with progressively more complex architectures and progressively higher resolutions. That idea did not work that well for me; the lower-complexity models did not perform very well. But I stuck with them for the final submission and weighted the lower models (B0 with 128, B1 with 192, etc.) very low. </p>
<p>I do not think this gold medal is deserved. I was hoping for a top 20% finish; I think that would have been a fair reflection of my intellectual contribution. They're still working with the results, so it could still be that the final standing becomes more accurate. I think the one thing I did learn is that ultimately I need to be able to write notebooks like the TFR Triple Stratified one from scratch. Similarly, I learned from various discussion threads.</p>
SIIM-ISIC Melanoma Classification | 12th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks to the Kaggle community for active participation in the discussions and notebooks. A special mention to the pipeline set up by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> in his great notebook Triple Stratified KFold with TFRecords (<a href="https://www.kaggle.com/cdeotte/triple-stratified-kfold-with-tfrecords)" target="_blank">https://www.kaggle.com/cdeotte/triple-stratified-kfold-with-tfrecords)</a>. It helped us (<a href="https://www.kaggle.com/vikrant06" target="_blank">https://www.kaggle.com/vikrant06</a> and me) quickly try out multiple configurations of EffNet. What also seemed to work were the 'noisy-student' EffNet weights. </p>
<p>We used 3 folds including the 2018 and 2019 data, with 384x384 images and brightness and contrast augmentation, on B5, B6 and B7 (ImageNet and noisy-student weights for each), ensembled using power averaging with a power of 2. </p>
<p>The resultant ensemble, when blended (weighted average) with the top 4 public leaderboard solutions (which were themselves ensembled using power averaging with a power of 2), gave us a Public LB score of 0.9644 and a private LB score of 0.9458.</p>
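<p>One common reading of "power averaging with a power of 2" is to average the element-wise squared predictions; whether a final root is applied is not stated in the write-up, so the sketch below is only an illustration.</p>
<pre><code>import numpy as np

def power_average(preds, power=2.0):
    """preds: (n_models, n_samples) array of predicted probabilities."""
    preds = np.asarray(preds)
    return np.mean(preds ** power, axis=0)

# Toy usage with random predictions from 6 hypothetical models
rng = np.random.default_rng(0)
print(power_average(rng.random((6, 10))))
</code></pre>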
<p>Yes, I was able to run 16 epochs on B7. Batch size 32 using <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> 's notebook in 3 folds for 384x384 images of 2018, 2019 and 2020. Otherwise try with 256x256 once.</p>
<p>You will have to add efn.EfficientNetB7 in EFNS list.</p>
<p>Thank you so much</p> |