Title of Competition | Title of Writeup | User | Writeup |
---|---|---|---|
SIIM-ISIC Melanoma Classification | 139th Place Summary | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://www.kaggle.com/fahad7" target="_blank">@fahad7</a> : it was just averaging the predictions for ensembling. And for CV, it was stratified K Fold cross validation. </p>
<p>Congrats to winners.. </p>
<p>I would like to thank my teammate Sebastien (@Anhmeow) for his amazing contribution to getting this position. Our best solution was a rank ensemble of 8 different models which we had created in parallel and combined using stacking. </p>
<p>We had created 80 models in total and selected only those that were less correlated with each other (a minimal sketch of this correlation-based filtering follows the model list below).</p>
<ol>
<li>EfficientNet B4 - Image size - 512 - Sebastien</li>
<li>EfficientNet B4 - Image Size - 768 - Sebastien</li>
<li>EfficientNet B6 - Image Size - 768 - Sebastien</li>
<li>EfficientNet B5 - Image Size - 384 - Manoj</li>
<li>EfficientNet B6 - Image Size - 512 - Including external data - Manoj</li>
<li>EfficientNet B5 - Image size - 384 - without external data - Manoj</li>
<li>EfficientNet B5 - Image size - 384 - Including external data - Manoj</li>
<li>EfficientNet B6 - Image size - 512 - without external data - Sebastien</li>
</ol>
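<p>A minimal sketch of how such correlation-based model selection could look (assuming <code>oof</code> is a pandas DataFrame with one column of out-of-fold predictions per candidate model; the function and threshold are illustrative, not the team's actual code):</p>
<pre><code>import pandas as pd

def select_low_correlation(oof: pd.DataFrame, threshold: float = 0.95) -> list:
    """Greedily keep models whose OOF predictions correlate below `threshold`
    (Spearman) with every model already kept."""
    kept = []
    corr = oof.corr(method="spearman")
    for model in oof.columns:
        if all(corr.loc[model, k] < threshold for k in kept):
            kept.append(model)
    return kept

# selected = select_low_correlation(oof_predictions, threshold=0.9)
</code></pre>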
<p>The mean ensemble of these models scored around 0.9432 in private LB while rank ensemble scored 0.9406. </p>
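<p>For reference, a minimal sketch of the difference between a mean ensemble and a rank ensemble (assuming <code>preds</code> is a DataFrame with one column of predicted probabilities per model; illustrative only):</p>
<pre><code>import pandas as pd

# preds: one column of predicted probabilities per model
mean_ensemble = preds.mean(axis=1)

# Rank ensemble: convert each column to percentile ranks first, so models with
# differently scaled probabilities contribute equally, then average the ranks.
rank_ensemble = preds.rank(pct=True).mean(axis=1)
</code></pre>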
<p>Special thanks to <a href="https://www.kaggle.com/Cdeotte" target="_blank">@Cdeotte</a> for providing wonderful notebooks and discussion threads</p>
<p>What we tried but couldn't complete:</p>
<ol>
<li><p>GANs - Inspired by this paper <a href="https://www.paperswithcode.com/paper/melanoma-detection-using-adversarial-training-1" target="_blank">https://www.paperswithcode.com/paper/melanoma-detection-using-adversarial-training-1</a> and Rohit's post. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F413189%2F3dd7a16f80823987e41fcd2d9f252904%2FGAN%20Sample.png?generation=1597751688287238&alt=media" alt=""></p></li>
<li><p>Chipnet - Even though the performance was impressive, we couldn't complete it due to paucity of time. </p></li>
</ol> |
SIIM-ISIC Melanoma Classification | 157th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to all the winners and to everyone who took part in this. It was a great learning experience for me and I'll share what finally worked for me. There is still so much to learn by going through the solution overviews and more discussions/kernels. The best performance I was able to get was public LB: 0.9570, private LB: 0.9403.</p>
<h2>Data</h2>
<p>My best model is an ensemble (just the mean of the predictions) of 4 models. I am using <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>'s Triple Stratified Split. 3 models were trained on data from 2017 + 2018 + 2020. One of the models was trained on 2017 + 2018 + 2019 + archives + 2020.</p>
<h2>Image sizes</h2>
<p>3 models were trained on 512x512 images and 1 model on 384x384.</p>
<h2>Augmentations</h2>
<p>2 models use the following augmentations (using <code>kornia</code>):</p>
<pre><code>- name: Rescale
params:
value: 255.
- name: RandomAffine
params:
degrees: 180
translate:
- 0.02
- 0.02
- name: RandomHorizontalFlip
params:
p: 0.5
- name: ColorJitter
params:
saturation:
- 0.7
- 1.3
contrast:
- 0.8
- 1.2
brightness: 0.1
- name: Normalize
params:
mean: imagenet
std: imagenet
</code></pre>
<p>The meaning of the different params can be found in <code>kornia</code>'s <a href="https://github.com/kornia/kornia/blob/master/kornia/augmentation/augmentation.py" target="_blank">documentation</a>.</p>
<p>The other 2 models additionally use <code>Cutout</code>:</p>
<pre><code>- name: RandomErasing
params:
p: 0.5
ratio:
- 0.3
- 3.3
scale:
- 0.02
- 0.1
</code></pre>
<h2>Sampling</h2>
<p>To iterate faster, instead of upsampling the minority class, I downsample the majority class per epoch. However, to avoid wasting data, I sample different instances from the majority class per epoch.</p>
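<p>A minimal sketch of this per-epoch downsampling (assuming a pandas DataFrame <code>train_df</code> with a binary <code>target</code> column; the function below is illustrative, not the author's actual sampler):</p>
<pre><code>import pandas as pd

def epoch_subsample(train_df: pd.DataFrame, epoch: int, neg_pos_ratio: int = 3) -> pd.DataFrame:
    """Keep all positives and a different subset of negatives each epoch."""
    pos = train_df[train_df.target == 1]
    neg = train_df[train_df.target == 0]
    # Epoch-dependent seed: each epoch sees a different slice of the majority
    # class, so over many epochs most of the negatives get used.
    neg = neg.sample(frac=1.0, random_state=epoch)
    n_neg = min(len(neg), neg_pos_ratio * len(pos))
    return pd.concat([pos, neg.iloc[:n_neg]]).sample(frac=1.0, random_state=epoch)
</code></pre>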
<h2>Network</h2>
<p>The same network architecture is used for all the 4 models - EfficientNet-B5 features followed by a Linear layer.</p>
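<p>A minimal sketch of such an architecture; the use of <code>timm</code> and the checkpoint name are my assumptions for illustration, since the post does not state which implementation was used:</p>
<pre><code>import timm
import torch.nn as nn

class MelanomaNet(nn.Module):
    def __init__(self, backbone: str = "tf_efficientnet_b5_ns"):
        super().__init__()
        # num_classes=0 makes timm return pooled backbone features instead of logits
        self.backbone = timm.create_model(backbone, pretrained=True, num_classes=0)
        self.head = nn.Linear(self.backbone.num_features, 1)  # single linear layer

    def forward(self, x):
        return self.head(self.backbone(x))
</code></pre>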
<h2>Optimization</h2>
<pre><code>Optimizer: AdamW
weight decay: 0.1
epochs: 20
LR scheduler: 1cycle with initial LR = 5e-6, max LR = 2e-4 (LR range test)
</code></pre>
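<p>A minimal PyTorch sketch of this optimization setup (assuming <code>model</code> and <code>train_loader</code> already exist; note that <code>OneCycleLR</code> takes the max LR and derives the initial LR from <code>div_factor</code>, so the value below reproduces the stated 5e-6 start):</p>
<pre><code>import torch

EPOCHS = 20
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4, weight_decay=0.1)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer,
    max_lr=2e-4,
    epochs=EPOCHS,
    steps_per_epoch=len(train_loader),
    div_factor=40,  # initial LR = max_lr / div_factor = 5e-6
)

for epoch in range(EPOCHS):
    for batch in train_loader:
        # ... forward pass, compute loss, loss.backward() ...
        optimizer.step()
        scheduler.step()  # OneCycleLR is stepped once per batch
        optimizer.zero_grad()
</code></pre>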
<h2>Things I wanted to try but couldn't</h2>
<ul>
<li>Train on images of different sizes for the 3 best configs on 512x512 and ensemble</li>
<li>Incorporate meta-data by stacking it with the features from the deep model</li>
<li>Spend time improving my ensembles</li>
<li>More augmentations</li>
<li>Custom head</li>
</ul>
<p>Thanks, Chris. Happy to know that I could contribute an idea to your playbook! :) </p>
<p>Yes, the trick is to start on the competition early enough. And I missed that too. All the best! </p>
<p>Thanks <a href="https://www.kaggle.com/realsid" target="_blank">@realsid</a>. Don't worry, it's very important to ask questions! :) </p>
<p>So, for my best models, I only added a single linear layer on top of the features from the backbone (to avoid overfitting). But in general, and also looking at the solutions of the winners, it is better to add more layers on top of the convolutional features. Is that clearer?</p> |
SIIM-ISIC Melanoma Classification | 16 Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>worth a try :)</p>
<p>It's possible that the same patient at the same time gets a different diagnosis from different doctors… <br>
But we could do feature engineering in stacking, and let CV tell us if this information (grouped predictions) is predictive :)</p>
<p>Indeed, I believe more information about the patient might help!</p>
<p>I'm confused the same as you are after doing some patient-level EDA. Maybe it's noise in the label (diagnosis).</p>
<p>My intuition for this is</p>
<ol>
<li><p>For the same patient, the model might have unstable predictions, so referencing other observations by taking the MEAN and STD of predictions from the same patient might help during stacking (which, according to CV, turned out not to be the case)</p></li>
<li><p>If a patient has a malignant lesion at age X, they might still have a malignant lesion at age X+5 in the same body part (which is bad for patients who are not getting better…). But after doing some EDA, I could tell this is mostly not the case in our dataset.</p></li>
</ol>
<p><a href="https://www.kaggle.com/serigne" target="_blank">@serigne</a> Congrats to you too! I understand stacking for the first time from your kernel in the house price prediction when I join Kaggle :D, and continue to learn more stacking from <a href="https://www.kaggle.com/kazanova" target="_blank">@kazanova</a> </p>
<p><a href="https://www.kaggle.com/ajtryt2" target="_blank">@ajtryt2</a> Adam with LR=2e-5* tpu_replicas, constant scheduling with 15 epochs, using TTA from 11 to 15 epochs' model weights.</p>
<p>Your cv is amazing for a single model, did you add external data to the validation folds or keep it the same as without external data? What is your settings for LR schedule? Thanks in advance </p>
<p>Congrats to all winners! Thank you to the organizers and Kaggle for hosting this competition. We really hope that the winning models can make a difference here. Thank you <a href="https://www.kaggle.com/kazanova" target="_blank">@kazanova</a> for all the works we have tried. It was fun!</p>
<p>We also want to extend our gratitude to <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> (<a href="https://www.kaggle.com/cdeotte" target="_blank">https://www.kaggle.com/cdeotte</a>) for the great work that he did throughout the competition, providing very useful insights as well as preparing the datasets in a good format to make them easy to use.</p>
<p>Our approach is an ensemble of 30+ different models built with various combinations of image sizes and networks. We used a 5-fold CV and we optimized for logloss. <em>We found logloss to be a bit more stable than AUC when assessing what the best model is.</em> We could run the same model with different seeds and (even though there was a lot of TTA) the differences in AUC could be ±0.02 in a single fold, whereas logloss was much more stable.</p>
<p>The augmentations that worked best for us (apart from the usual ones like rotations and flips) were (in this order):</p>
<ul>
<li>Coarse dropout </li>
<li>Grid mask</li>
<li>Cutmix</li>
<li>mixup</li>
</ul>
<p><strong>Network-wise, we only used EfficientNet models.</strong> We experimented with other pretrained networks, but they did not perform as well. The best performing combination of model and size was EfficientNet b5 with 512. Most of our models were built in tensorflow using TPUs (in colab or Kaggle). The TPU environment made quite a big difference for us – we were able to accelerate training and experimentation which fundamentally helped us to find good training schemas for this competition. We had some models built in pytorch too. Cv-wise both frameworks were close, with the tensorflow implementation being a bit better here (in both cv and LB).</p>
<p><strong>Constant scheduling, checkpoint averaging and a lot of tta (with 30 different combinations of augmentations) worked very well.</strong> Each fold prediction used 150 inferences (5 checkpoint x 30 augmentations). This helped a lot to create stability for our models. Apart from averages, for every model, we also used maximum, minimum, standard deviation and geometrical mean (of all those 150 predictions). <em>We were quite surprised that standard deviation was quite often performing better than mean in both logloss and AUC. The interpretation could be that the more uncertain we are about what the actual prediction is, the more likely it is to be malignant.</em></p>
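<p>A minimal sketch of aggregating such a block of predictions (assuming <code>preds</code> is a NumPy array of shape <code>(n_samples, 150)</code> holding the 5 checkpoints × 30 TTA predictions per image; illustrative only):</p>
<pre><code>import numpy as np
from scipy.stats import gmean

# preds: (n_samples, 150) = 5 checkpoints x 30 TTA augmentations per image
features = {
    "mean": preds.mean(axis=1),
    "max": preds.max(axis=1),
    "min": preds.min(axis=1),
    "std": preds.std(axis=1),      # often surprisingly competitive with the mean here
    "gmean": gmean(preds, axis=1),
}
# Each per-image statistic can then be fed to the stacker as a separate feature.
</code></pre>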
<p><strong>For our final model, we used stacking using ExtraTreesClassifier</strong> (<a href="https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.ExtraTreesClassifier.html" target="_blank">https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.ExtraTreesClassifier.html</a>) . Our cv was 0.9536 (and LB 0.9457). Our best performing stacking model (version 38) was (almost) the best at private LB. Our previous version (37) was the best. There is very good relationship between cv and LB for all our stacking models.</p>
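<p>A minimal sketch of stacking out-of-fold predictions with <code>ExtraTreesClassifier</code> (the feature matrices and hyperparameters are placeholders, not the team's actual configuration):</p>
<pre><code>import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

def stack(oof_features, y, test_features, n_splits=5):
    # oof_features: (n_train, n_models) OOF predictions of the base models
    # test_features: (n_test, n_models) averaged test predictions of the same models
    oof_stack = np.zeros(len(y))
    test_stack = np.zeros(len(test_features))
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=42)
    for trn_idx, val_idx in skf.split(oof_features, y):
        clf = ExtraTreesClassifier(n_estimators=500, max_depth=6, n_jobs=-1, random_state=42)
        clf.fit(oof_features[trn_idx], y[trn_idx])
        oof_stack[val_idx] = clf.predict_proba(oof_features[val_idx])[:, 1]
        test_stack += clf.predict_proba(test_features)[:, 1] / n_splits
    print("stacking CV AUC:", roc_auc_score(y, oof_stack))
    return test_stack
</code></pre>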
<p><strong>Detailed recipes that work\not work for us</strong></p>
<p><strong>What works well:</strong></p>
<ul>
<li>2018, and external data (with 30K samples)</li>
<li>Multiple Checkpoint + TTA with Mean\Max\Min\Std </li>
<li>Different sized images and efficientnets (efb5, 512 is best in cv)</li>
<li>Pretrain with external data, then finetune on 2020 data</li>
<li>Stacking with extraTreesClassifier</li>
</ul>
<p><strong>What does not work well:</strong></p>
<ul>
<li>Train a multiclassifier</li>
<li>Focal loss</li>
<li>Patient-level stacking: aggregate predictions by patient</li>
<li>Tuning with SWA/AdamW/stochastic depth for efficientnets</li>
<li>Batch accumulation for larger image sizes (768\1024)</li>
</ul> |
SIIM-ISIC Melanoma Classification | 176th Place Write-up | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank kaggle and the organizers for hosting such interesting competition and thank for my team <a href="https://www.kaggle.com/seshurajup" target="_blank">@seshurajup</a> <a href="https://www.kaggle.com/shravankoninti" target="_blank">@shravankoninti</a> <a href="https://www.kaggle.com/jakkulasasikiran" target="_blank">@jakkulasasikiran</a> <a href="https://www.kaggle.com/mounikabellamkonda" target="_blank">@mounikabellamkonda</a>. This was great learning and collaboration.</p>
<p>special thanks to <a href="https://www.kaggle.com/shravankoninti" target="_blank">@shravankoninti</a> we worked hard for the last two months.</p>
<h2>[Summary]</h2>
<ul>
<li>First, we trained a simple model using <strong>E1 256x256</strong> with 5-fold <strong>CV: 0.88</strong></li>
<li>Then we analyzed the predictions, mostly the <strong>FP</strong> and <strong>FN</strong> cases</li>
</ul>
<p><strong>FP</strong> <code>Target = 1 and Predict < 0.2</code><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2058044%2F6d599f0c9466d695b86a10bad076b821%2FFP.png?generation=1597827587126099&alt=media" alt=""><br>
<strong>FN</strong> <code>Target = 0 and Predict > 0.8</code><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2058044%2Fc123c7520e87fcb326cb52a2cc3dd1c6%2FFN.png?generation=1597827662475852&alt=media" alt=""></p>
<ul>
<li>we removed the <strong>FP</strong> and <strong>FN</strong> samples based on these thresholds (see the sketch after this list) </li>
<li>we trained the same model again excluding the <strong>FP</strong> and <strong>FN</strong> samples; CV increased to <strong>0.9012</strong></li>
<li>we used the same <strong>E1</strong> model to find <strong>FP</strong> and <strong>FN</strong> samples in the <strong>2017-2018-2019</strong> data</li>
<li>we removed the <strong>FP</strong> and <strong>FN</strong> samples from <strong>2017-2018-2019</strong> </li>
<li>we trained the same model again on <strong>2017-2018-2019-2020</strong> and got CV: 0.92+, LB: 0.95+, PLB: 0.925+</li>
</ul>
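<p>A minimal sketch of this noise-filtering step (assuming <code>train_df</code> and the E1 out-of-fold predictions <code>oof</code> both have an <code>image_name</code> column, with <code>target</code> and <code>pred</code> respectively; the thresholds mirror the ones above, everything else is illustrative):</p>
<pre><code>import pandas as pd

def drop_suspect_labels(train_df: pd.DataFrame, oof: pd.DataFrame) -> pd.DataFrame:
    """Remove samples whose OOF prediction strongly disagrees with the label."""
    merged = train_df.merge(oof[["image_name", "pred"]], on="image_name")
    suspect = (
        ((merged.target == 1) & (merged.pred < 0.2)) |  # labelled malignant, predicted benign
        ((merged.target == 0) & (merged.pred > 0.8))    # labelled benign, predicted malignant
    )
    return merged.loc[~suspect].drop(columns="pred")
</code></pre>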
<h2>[PyTorch]</h2>
<p>For the PyTorch setup, we used the DeepFake winners' solution scripts: <a href="https://github.com/selimsef/dfdc_deepfake_challenge" target="_blank">link</a>. Big thanks to <a href="https://www.kaggle.com/selimsef" target="_blank">@selimsef</a></p>
<ul>
<li>For CV we used the <a href="https://www.kaggle.com/cdeotte/triple-stratified-kfold-with-tfrecords" target="_blank">Triple Stratified Leak-Free KFold CV</a>, thanks to <a href="https://www.kaggle.com/cdotte" target="_blank">@cdotte</a> </li>
<li>For training, we used PyTorch 1.6 native AMP. Notebook: <a href="https://www.kaggle.com/gopidurgaprasad/siim-pytorch-1-6-native-amp" target="_blank">link</a></li>
<li>Optimizer: <code>Adam</code></li>
<li>Schedule: <code>LearningRateScheduler</code> from <a href="https://www.kaggle.com/cdotte" target="_blank">@cdotte</a>'s notebook</li>
<li>CutMix + MixUp, which worked some of the time </li>
<li>Sampling: Upsample (US) + Downsample (DS) + No sampling (NS) for the same model; at the end we take a simple average of those. <br>
<code>sampler = BalanceClassSampler(labels=train_dataset.__get_labels__(), mode="downsampling")</code><br>
You can find a WeightedClassSampler in this notebook: <a href="https://www.kaggle.com/gopidurgaprasad/pytorch-weightedclasssampler" target="_blank">link</a></li>
<li>Loss: simply <code>BinaryCrossentropy</code></li>
<li>Augmentations:</li>
</ul>
<pre><code>A.Compose([
A.OneOf([
A.VerticalFlip(),
A.HorizontalFlip(),
A.Flip(),
A.Rotate()
], p=0.5),
A.Transpose(p=0.2),
A.ShiftScaleRotate(shift_limit=0.0625, scale_limit=0.1, rotate_limit=15),
A.RandomRotate90(p=0.1),
A.OneOf([
A.RandomGridShuffle(p=0.1),
A.Cutout(num_holes=8, max_h_size=size//8, max_w_size=size//8, fill_value=0, p=0.2),
A.CoarseDropout(max_holes=4, max_height=size//8, max_width=size//8, p=0.2),
A.GridDropout(p=0.2),
RandomEraser(p=0.2),
BitMask(size=size, p=0.1),
], p=0.11),
A.RandomBrightness(limit=(-0.2,0.2), p=0.1),
A.OneOf([
IsotropicResize(max_side=size, interpolation_down=cv2.INTER_AREA, interpolation_up=cv2.INTER_CUBIC),
IsotropicResize(max_side=size, interpolation_down=cv2.INTER_AREA, interpolation_up=cv2.INTER_LINEAR),
IsotropicResize(max_side=size, interpolation_down=cv2.INTER_LINEAR, interpolation_up=cv2.INTER_LINEAR),
], p=1),
A.PadIfNeeded(min_height=size, min_width=size, border_mode=cv2.BORDER_CONSTANT)
])
</code></pre>
<h4>Pytorch Models</h4>
<table>
<thead>
<tr>
<th>Model</th>
<th>SIZE</th>
<th>RESIZE</th>
<th>SAMPLE</th>
<th>FOLDS</th>
<th>CV</th>
</tr>
</thead>
<tbody>
<tr>
<td>E4</td>
<td>512</td>
<td>380</td>
<td>DS</td>
<td>5</td>
<td>0.9175</td>
</tr>
<tr>
<td>Sk50</td>
<td>512</td>
<td>512</td>
<td>DS</td>
<td>5</td>
<td>0.9178</td>
</tr>
<tr>
<td>E6</td>
<td>768</td>
<td>528</td>
<td>DS</td>
<td>10</td>
<td>0.9266</td>
</tr>
<tr>
<td>E5</td>
<td>512</td>
<td>456</td>
<td>DS</td>
<td>5</td>
<td>0.9288</td>
</tr>
<tr>
<td>E7</td>
<td>512</td>
<td>512</td>
<td>DS</td>
<td>5</td>
<td>0.9224</td>
</tr>
<tr>
<td>Dpn92</td>
<td>512</td>
<td>512</td>
<td>DS</td>
<td>5</td>
<td>0.9218</td>
</tr>
<tr>
<td>E5</td>
<td>512</td>
<td>456</td>
<td>DS</td>
<td>10</td>
<td>0.9229</td>
</tr>
<tr>
<td>D201</td>
<td>512</td>
<td>512</td>
<td>DS</td>
<td>5</td>
<td>0.9266</td>
</tr>
<tr>
<td>D201</td>
<td>512</td>
<td>512</td>
<td>US</td>
<td>5</td>
<td>0.9240</td>
</tr>
<tr>
<td>E6</td>
<td>768</td>
<td>528</td>
<td>US</td>
<td>5</td>
<td>0.9203</td>
</tr>
<tr>
<td>E4</td>
<td>512</td>
<td>380</td>
<td>US</td>
<td>10</td>
<td>0.9242</td>
</tr>
<tr>
<td>E5</td>
<td>512</td>
<td>456</td>
<td>NS</td>
<td>5</td>
<td>0.9153</td>
</tr>
<tr>
<td>D201</td>
<td>512</td>
<td>224</td>
<td>NS</td>
<td>5</td>
<td>0.9185</td>
</tr>
<tr>
<td>B5</td>
<td>512</td>
<td>456</td>
<td>US</td>
<td>10</td>
<td>0.9234</td>
</tr>
<tr>
<td>D201</td>
<td>512</td>
<td>224</td>
<td>US</td>
<td>5</td>
<td>0.9068</td>
</tr>
</tbody>
</table>
<h4>SimpleAverage</h4>
<ul>
<li>simply average all models CV:<strong>0.9372</strong> LB:<strong>0.9544</strong> PLB<strong>0.9388</strong></li>
</ul>
<h4>OptimizeAUC</h4>
<ul>
<li>For optimizing OOF AUC we used the OptimizeAUC function from <a href="https://www.amazon.in/dp/B089P13QHT/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1#:~:text=This%20book%20is%20for%20people,learning%20and%20deep%20learning%20problems." target="_blank">Approaching (Almost) Any Machine Learning Problem</a> by <a href="https://www.kaggle.com/abhishek" target="_blank">@abhishek</a>. Big thanks to him (see the sketch after this list).</li>
<li>You can find OptimizeAUC in this notebook: <a href="https://www.kaggle.com/gopidurgaprasad/optimize-auc-using-oof" target="_blank">link</a></li>
<li>CV:<strong>0.94741</strong> LB:<strong>0.9492</strong> PLB<strong>0.9344</strong></li>
</ul>
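<p>For readers unfamiliar with this approach, here is a minimal sketch of AUC-driven blend-weight optimization over OOF predictions, in the spirit of (but not identical to) the OptimizeAUC class from the book referenced above; everything below is illustrative:</p>
<pre><code>import numpy as np
from functools import partial
from scipy.optimize import fmin
from sklearn.metrics import roc_auc_score

def _neg_auc(weights, oof_preds, y_true):
    # oof_preds: (n_samples, n_models); weights: one coefficient per model
    blend = np.sum(oof_preds * weights, axis=1)
    return -roc_auc_score(y_true, blend)

def fit_blend_weights(oof_preds, y_true):
    init = np.random.dirichlet(np.ones(oof_preds.shape[1]))  # random starting weights
    loss = partial(_neg_auc, oof_preds=oof_preds, y_true=y_true)
    return fmin(loss, init, disp=False)

# weights = fit_blend_weights(oof_matrix, train_targets)
# test_blend = np.sum(test_matrix * weights, axis=1)
</code></pre>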
<p><strong>Pytorch Training Notebook:</strong> <a href="https://www.kaggle.com/gopidurgaprasad/siim-final-d201-ns" target="_blank">link</a></p>
<h2>[TF]</h2>
<ul>
<li>We used <a href="https://www.kaggle.com/cdeotte/triple-stratified-kfold-with-tfrecords" target="_blank">Triple Stratified KFold with TFRecords</a>; again, big thanks to <a href="https://www.kaggle.com/cdotte" target="_blank">@cdotte</a></li>
<li>thanks to <a href="https://www.kaggle.com/shravankoninti" target="_blank">@shravankoninti</a> for running those TF models one by one every day</li>
</ul>
<table>
<thead>
<tr>
<th>Model</th>
<th>SIZE</th>
<th>FOLDS</th>
<th>CV</th>
</tr>
</thead>
<tbody>
<tr>
<td>E7</td>
<td>512</td>
<td>15</td>
<td>0.8826</td>
</tr>
<tr>
<td>E6</td>
<td>512</td>
<td>15</td>
<td>0.9194</td>
</tr>
<tr>
<td>E5</td>
<td>512</td>
<td>15</td>
<td>0.9243</td>
</tr>
<tr>
<td>E7</td>
<td>512</td>
<td>10</td>
<td>0.9163</td>
</tr>
<tr>
<td>E6</td>
<td>512</td>
<td>10</td>
<td>0.9027</td>
</tr>
<tr>
<td>E5</td>
<td>512</td>
<td>10</td>
<td>0.9275</td>
</tr>
<tr>
<td>E4</td>
<td>512</td>
<td>10</td>
<td>0.9114</td>
</tr>
<tr>
<td>E3</td>
<td>512</td>
<td>10</td>
<td>0.9133</td>
</tr>
<tr>
<td>E2</td>
<td>512</td>
<td>10</td>
<td>0.9154</td>
</tr>
<tr>
<td>E1</td>
<td>512</td>
<td>15</td>
<td>0.9142</td>
</tr>
<tr>
<td>E7</td>
<td>768</td>
<td>5</td>
<td>0.9076</td>
</tr>
<tr>
<td>E6</td>
<td>768</td>
<td>5</td>
<td>0.9161</td>
</tr>
<tr>
<td>E5</td>
<td>768</td>
<td>5</td>
<td>0.9063</td>
</tr>
<tr>
<td>E4</td>
<td>768</td>
<td>5</td>
<td>0.9144</td>
</tr>
<tr>
<td>E3</td>
<td>768</td>
<td>5</td>
<td>0.9019</td>
</tr>
<tr>
<td>E2</td>
<td>768</td>
<td>5</td>
<td>0.8917</td>
</tr>
<tr>
<td>E1</td>
<td>768</td>
<td>5</td>
<td>0.8866</td>
</tr>
<tr>
<td>E7</td>
<td>384</td>
<td>5</td>
<td>0.9212</td>
</tr>
<tr>
<td>E6</td>
<td>384</td>
<td>5</td>
<td>0.9237</td>
</tr>
<tr>
<td>E5</td>
<td>384</td>
<td>5</td>
<td>0.9278</td>
</tr>
</tbody>
</table>
<h4>SimpleAverage</h4>
<ul>
<li>simply average all TF models CV:<strong>0.9301</strong> LB:<strong>0.9503</strong> PLB<strong>0.9376</strong></li>
</ul>
<h4>OptimizeAUC</h4>
<ul>
<li>CV:<strong>0.9446</strong> LB:<strong>0.9538</strong> PLB<strong>0.9348</strong></li>
</ul>
<h2>[PyTorch + TF]</h2>
<h4>SimpleAverage</h4>
<ul>
<li>simply average all PyTorch and TF models CV:<strong>0.9402</strong> LB:<strong>0.9538</strong> PLB:<strong>0.9399</strong></li>
<li>This is our <strong>best-selected submission</strong></li>
</ul>
<h4>OptimizeAUC</h4>
<ul>
<li>CV:<strong>0.949354</strong> LB:<strong>0.9496</strong> PLB:<strong>0.9362</strong></li>
</ul>
<h2>🎊 Congratulations to all Winners and Learners 🎊</h2>
<p>Thanks brother 😍 keep rocking </p>
<p>Thank you 😍</p> |
SIIM-ISIC Melanoma Classification | 1868th place in 1st competition! | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I joined the competition very late, just 10 days ago. So, I forked a notebook: <a href="https://www.kaggle.com/niteshx2/full-pipeline-dual-input-cnn-model-with-tpus" target="_blank">https://www.kaggle.com/niteshx2/full-pipeline-dual-input-cnn-model-with-tpus</a> </p>
<p>So, what did I do differently? I tried changing the number of epochs, not using coarse dropout, removing duplicates, and finally I used a trick that gave a 0.007 improvement. The trick was to take the max of my past 2 predictions. </p>
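<p>A minimal sketch of that trick (assuming two earlier submission files in the standard <code>image_name</code>/<code>target</code> format; file names are placeholders):</p>
<pre><code>import pandas as pd

sub1 = pd.read_csv("submission_v1.csv")  # earlier submission
sub2 = pd.read_csv("submission_v2.csv")  # newer submission

merged = sub1.merge(sub2, on="image_name", suffixes=("_1", "_2"))
# Element-wise maximum of the two predicted probabilities per image
merged["target"] = merged[["target_1", "target_2"]].max(axis=1)
merged[["image_name", "target"]].to_csv("submission_max.csv", index=False)
</code></pre>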
<p>"Kill them with success and bury them with a smile." :)</p> |
SIIM-ISIC Melanoma Classification | 1st place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I'm not aware papers on this trick. (There are too many AI papers these days to keep up with)</p>
<p>And it's a common trick in ML, probably not significant enough for a paper.</p>
<p>Check out Chris's post about cross validation: <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/175614" target="_blank">https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/175614</a></p>
<p>In short, there's never overlap between training and cv data</p>
<p>Currently, best computer vision architectures are different version of Efficient Nets. Generally you don't need to worry about changing it, especially for beginners </p>
<p>Here it probably doesn't matter. It might be better than Relu depending on the data and network.</p>
<p>yeah we used 2019 data that Chris shared. We used it <em>because of</em> its different distribution, e.g. higher positive sample proportion </p>
<p>you're right. It is based on <code>patient_id</code> information. (Although dropping it doesn't hurt our score.)</p>
<p>malignant vs benign information is a subset of diagnosis information</p>
<p>By using diagnosis as target, you're giving the model more information. It can help the model learn representation better.</p>
<p>Imagine you have a bunch of images of dog, cat, wolf, bird, fox …<br>
Method 1: You label them as dog vs not dog<br>
Method 2: You label them as dog, cat, wolf, bird, fox separately</p>
<p>In method 2, the model could learn decision boundary of dog vs wolf better, because now it knows what's what.</p>
<p>cv means cross-validation. People use "cv" to refer to cross validation score</p>
<p>lb means leaderboard. People use "lb" to refer to leaderboard score</p>
<p>It depends on your definition of survive. 😏 If we emerged as a survivor after a major event (whether it's shakeup or shakedown), I think we can say we survived.</p>
<p>Anyways, thanks for your notebook showing how to use meta data.</p>
<p>The mapping was based on the diagnosis description on last year competition's website: <a href="https://challenge2019.isic-archive.com/" target="_blank">https://challenge2019.isic-archive.com/</a></p>
<p>Note that models using meta data tend to overfit more easily than models with images only, because augmentations do not apply to meta data</p>
<p>Not all combinations. That would be too many 😄</p>
<p>For example, 896 can only be used on the smaller B3</p>
<p>Say your model 1's probability distribution has mean p=0.05; model 2's probability distribution has mean p=0.5. If you simply take average without ranking first, then model 2 would dominate model 1. </p>
<p>By using rank, you can force them to make equal contributions.</p>
<p>It actually replaces <code>a</code> and <code>b</code> with 1/2 in your example. I don't know what will happen if you submit probabilities larger than 1.</p>
<p>For any 2-dimensional image, rotate90 or rotate180 can be achieved using flip and transpose operations only. </p>
<p>For example, transpose + vertical flip equals rotate90</p>
<p>It's hard to do ablation on augmentations -- the cv score variance is too large to draw meaningful conclusions when comparing two runs that only diff by an augmentation parameter.</p>
<p>When we tune the augmentations, we only check 1 fold's score without TTA, to have a general idea. However, even 5 fold cv with TTAx8 have noise (i.e. same code with different seeds lead to somewhat different scores).</p>
<p>So the augmentation hyper parameters we used are not optimal, but should be good enough.</p>
<p>Thanks. Yeah the model is pretty vanilla. We have posted our code here: <a href="https://www.kaggle.com/haqishen/1st-place-soluiton-code-small-ver#Define-Dataset" target="_blank">https://www.kaggle.com/haqishen/1st-place-soluiton-code-small-ver#Define-Dataset</a></p>
<p>just a simple average of ranked probabilities</p>
<p>There are different components in the pipeline where pytorch is easier to use or more flexible than TF. For example, in TF, it's hard to achieve augmentations similar to ours.</p>
<p>We didn't trust LB so we didn't submit every single model. </p>
<p>On cv, adding meta helped <code>cv_all</code> but hurt <code>cv_2020</code>. Overall, combining models with meta and models without meta seems like good diversity for ensemble.</p>
<blockquote>
<p>do you modify a bunch of parameters/augmentation method together?</p>
</blockquote>
<p>Usually make one change at a time</p>
<blockquote>
<p>Is there some systematic way to explore relatively optimal augmentation hyperparameters?</p>
</blockquote>
<p>Use smaller model and smaller image for faster iteration</p>
<blockquote>
<p>If we rank predictions firstly, I guess all pred_probs values are actually converted to percentile rank values. Therefore, the absolute difference between the original pred_probs has no influence anymore. which forces 2 models to contribute equally. Is this what you're saying?</p>
</blockquote>
<p>Yes precisely</p>
<blockquote>
<p>So this trick is designed for the AUC metric of his competition specifically?</p>
</blockquote>
<p>It applies to any metric where only the relative order of predictions (not absolute values) matter</p>
<p>Personally I don't think there's much (if any) additional information beyond single images and meta data. It's just my hunch, not conclusion based on experiments.</p>
<p>Thanks. No we didn't. We treat each data sample independently.</p>
<p>Think of any given single model's LB score as a normal random variable with a big standard deviation. Now, if you take the average of 5 such i.i.d. normal random variables, the standard deviation will be smaller, by the law of large numbers.</p>
<p>Similarly, ensemble model's variation (i.e. "shake") is generally smaller than single model. The variation is averaged out.</p>
<p>Congratulations to all winners, especially my teammates Qishen <a href="https://www.kaggle.com/haqishen" target="_blank">@haqishen</a> and Gary <a href="https://www.kaggle.com/garybios" target="_blank">@garybios</a> who are both computer vision competition veterans. I learned a lot from you. So happy to see you both on top of another LB after the deepfake drama.</p>
<p>Big shoutout to my colleague Chris <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> who I feel like served as an unofficial host of this competition with all your datasets, notebooks and tutorials. The competition wouldn't be this popular with all your contribution. We used your resized jpg images and triple stratified leak-free folds. Thanks Chris.</p>
<h2>How we survived the shake</h2>
<p>The test set is very small, with very small proportion of positive samples. So public LB has huge variance. Even the 2020 train data is not big enough for validation purpose, due to its small positive samples. </p>
<p>To have stable validation, we used 2018+2019+2020's data for both train and <strong>validation</strong>. We track two cv scores, <code>cv_all</code> and <code>cv_2020</code>. The former is much more stable than the latter.</p>
<p>Second key to LB survival is ensemble. Our single model's LB-cv correlation is essential 0, but the bigger the ensemble, the more stable the LB. In the last few days, our ensemble's LB was steadily increasing as we added better models.</p>
<p>Our final ensemble 1 optimizes <code>cv_all</code> and final ensemble 2 optimizes <code>cv_2020</code></p>
<p>Ensemble 1: <code>cv_all=0.9845, cv_2020=0.9600, public=0.9442, private=0.9490</code> (1st place)<br>
Ensemble 2: <code>cv_2020=0.9638, public=0.9494, private=0.9481</code> (3rd place)</p>
<p>Our best single model has <code>cv_2020=0.9481</code></p>
<p>All scores above are 5 fold, TTAx8.</p>
<h2>TPU vs GPU, TF vs torch</h2>
<p>TPU with TF seems to be dominating the public notebooks in this competition. It seems to be faster than torch/GPU on EfficientNets. However, our experiments showed that its high LB scores are due to better luck with the public LB. Our torch/GPU models have much better CV, thanks to the better PyTorch ecosystem and flexibility for faster experiments.</p>
<h2>Models</h2>
<p>Our ensembles consists of EfficientNet B3-B7, se_resnext101, resnest101. There are models with or without meta data. Input size ranges from 384 to 896. (All input are from Chris's resized jpgs. For example, for 896 input we read 1024 jpgs and resize to 896.) </p>
<h2>Meta data</h2>
<p>In some (not all) of our models, we used 14 meta data from <a href="https://www.kaggle.com/nroman/melanoma-pytorch-starter-efficientnet" target="_blank">this</a> and <a href="https://www.kaggle.com/awsaf49/xgboost-tabular-data-ml-cv-85-lb-787#Image-Size" target="_blank">this</a> public notebooks, as illustrated below.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1120704%2Fae8a6c8597a1f1ab4649fc72627c9378%2Fmeta_NN.png?generation=1597734243315450&alt=media" alt=""></p>
<h2>Targets</h2>
<p>We find that using diagnosis as targets with cross entropy loss instead of binary target with BCE loss can boost score by ~0.01. </p>
<p>2020 data's diagnosis sets and 2019 data's diagnosis sets are different. We manually map 2020's to 2019's. There are 9 labels.</p>
<pre><code>2020 seborrheic keratosis -> BKL
2020 lichenoid keratosis -> BKL
2020 solar lentigo -> BKL
2020 lentigo NOS -> BKL
2020 cafe-au-lait macule -> unknown
2020 atypical melanocytic proliferation -> unknown
2020 nevus -> NV
2020 melanoma -> MEL
</code></pre>
<p>For prediction, we simply take the <code>MEL</code> class's softmax probability.</p>
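<p>A minimal sketch of this target setup (assuming a 9-way classification head; <code>MEL_IDX</code> is a placeholder for the index of the melanoma class in the label mapping; see the linked code for the real implementation):</p>
<pre><code>import torch
import torch.nn.functional as F

MEL_IDX = 6  # placeholder: index of the MEL class among the 9 mapped diagnosis labels
criterion = torch.nn.CrossEntropyLoss()

def training_loss(logits: torch.Tensor, diagnosis_labels: torch.Tensor) -> torch.Tensor:
    # Train on the multi-class diagnosis labels instead of a binary melanoma target
    return criterion(logits, diagnosis_labels)

def melanoma_probability(logits: torch.Tensor) -> torch.Tensor:
    # At inference, the submitted score is the softmax probability of the MEL class
    return F.softmax(logits, dim=1)[:, MEL_IDX]
</code></pre>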
<h2>Augmentations</h2>
<pre><code>transforms_train = A.Compose([
A.Transpose(p=0.5),
A.VerticalFlip(p=0.5),
A.HorizontalFlip(p=0.5),
A.RandomBrightness(limit=0.2, p=0.75),
A.RandomContrast(limit=0.2, p=0.75),
A.OneOf([
A.MotionBlur(blur_limit=5),
A.MedianBlur(blur_limit=5),
A.GaussianBlur(blur_limit=5),
A.GaussNoise(var_limit=(5.0, 30.0)),
], p=0.7),
A.OneOf([
A.OpticalDistortion(distort_limit=1.0),
A.GridDistortion(num_steps=5, distort_limit=1.),
A.ElasticTransform(alpha=3),
], p=0.7),
A.CLAHE(clip_limit=4.0, p=0.7),
A.HueSaturationValue(hue_shift_limit=10, sat_shift_limit=20, val_shift_limit=10, p=0.5),
A.ShiftScaleRotate(shift_limit=0.1, scale_limit=0.1, rotate_limit=15, border_mode=0, p=0.85),
A.Resize(image_size, image_size),
A.Cutout(max_h_size=int(image_size * 0.375), max_w_size=int(image_size * 0.375), num_holes=1, p=0.7),
A.Normalize()
])
transforms_val = A.Compose([
A.Resize(image_size, image_size),
A.Normalize()
])
</code></pre>
<h2>Post processing</h2>
<p>When ensembling different folds, or different models, we first rank all the probabilities of each model/fold, to ensure they are evenly distributed. In pandas, it can be done by <code>df['pred'] = df['pred'].rank(pct=True)</code></p>
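<p>A minimal sketch of this rank-then-average post-processing across several prediction files (file names are placeholders, and the files are assumed to share the same row order):</p>
<pre><code>import pandas as pd

files = ["model1_fold_avg.csv", "model2_fold_avg.csv"]  # placeholder prediction files
subs = [pd.read_csv(f) for f in files]

ensemble = subs[0][["image_name"]].copy()
# Rank each model's probabilities first so every model contributes on the same
# scale, then average the percentile ranks.
ensemble["target"] = sum(s["target"].rank(pct=True) for s in subs) / len(subs)
ensemble.to_csv("submission.csv", index=False)
</code></pre>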
<h2>Code</h2>
<p><a href="https://github.com/haqishen/SIIM-ISIC-Melanoma-Classification-1st-Place-Solution" target="_blank">https://github.com/haqishen/SIIM-ISIC-Melanoma-Classification-1st-Place-Solution</a></p>
<p>We did use meta (i.e. tabular) data in some of our models, similar to <a href="https://www.kaggle.com/nroman/melanoma-pytorch-starter-efficientnet" target="_blank">this</a> public notebook. Let me add more information on that and update the original post. </p> |
SIIM-ISIC Melanoma Classification | 21st Public - 53rd Private - Trust Your CV | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>UPDATE: I posted another discussion <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/175614" target="_blank">here</a> explaining How To CV and How To Ensemble</p>
<p>Thank you Baron</p>
<p>Don't feel bad. It takes skill to read the dozens of public notebooks and decide which ones can be trusted. For example, my best sub in Melanoma comp was also using my work plus some public notebooks that I trusted. </p>
<p>In most comps, I use my second final submission as a mix of my work with public notebooks that I trust.</p>
<p>In my popular public notebook <a href="https://www.kaggle.com/cdeotte/triple-stratified-kfold-with-tfrecords" target="_blank">here</a>, just change <code>prepare_image()</code> function to the following. </p>
<pre><code>IMG_SIZES = [512]*FOLDS
CROP = 384
def prepare_image(img, augment=True, dim=256):
img = tf.image.decode_jpeg(img, channels=3)
img = tf.cast(img, tf.float32) / 255.0
if augment:
if dim!=CROP: img = tf.image.random_crop(img, [CROP, CROP, 3])
img = transform(img,DIM=CROP)
img = tf.image.random_flip_left_right(img)
img = tf.image.random_saturation(img, 0.7, 1.3)
img = tf.image.random_contrast(img, 0.8, 1.2)
img = tf.image.random_brightness(img, 0.1)
elif dim!=CROP:
img = tf.image.central_crop(img,CROP/dim)
img = tf.reshape(img, [CROP,CROP, 3])
return img
</code></pre>
<p>And then when you build the model, use</p>
<pre><code>model = build_model(dim=CROP, ef=EFF_NETS[fold])
</code></pre>
<p>Great question. After training, all OOF predictions use <code>TTA = 21</code>. So the validation OOF predictions are the average of 21 random crops. (Also the test set predictions are the average of 21 random crops).</p>
<p>During training, TF computes the validation loss and validation AUC using a single 384 center crop. (Assuming we're training with 512 and random cropping 384). This is the validation loss used for early stopping or model checkpoint.</p>
<p>Thanks XY=ABCD. Congrats for your bronze medal</p>
<p>Thanks Mark. Congrats on finishing 49th solo silver. That's fantastic.</p>
<p>I see what you're trying to do. That's an interesting idea.</p>
<p>Unfortunately, that won't work because the OOF are not an ensemble. In the OOF, each training image only has 1 prediction. Whereas in the test dataset, each image has 5 different predictions. So the test predictions are an ensemble, but the OOF predictions are just a "merging". As such you can't use "merging" weights to decide ensemble weights.</p>
<p>Those are strong models Sogna. Great CV. Congrats on solo silver finish!</p>
<p>UPDATE: I posted a starter notebook <a href="https://www.kaggle.com/cdeotte/forward-selection-oof-ensemble-0-942-private" target="_blank">here</a> showing how to ensemble OOF files using forward selection. I use my 39 of my Melanoma models' OOF files. Forward selection chooses 8 of them and achieves CV 0.950, Public LB 0.958, Private LB 0.942.</p>
<p>Thanks Janek. 28th postion! Your team did great. Congratulations.</p>
<p><a href="https://www.kaggle.com/swadeshjana" target="_blank">@swadeshjana</a> <a href="https://www.kaggle.com/ritachetadas" target="_blank">@ritachetadas</a> <a href="https://www.kaggle.com/nitimohan" target="_blank">@nitimohan</a> Your team did fantastic. 62nd out of 3400 teams. Great job.</p>
<p>My best single model is the top row of the table above. It was 512x512 then random crop 384x384, EfficientNetB5 with 2018 external data and all my extra malignant data. It got triple stratified CV 0.936 and LB 0.956</p>
<p>All my models were pretty simple, just Adam optimizer, Ramp up then decay learn schedule, Binary cross entropy loss. And just global average pooling on EffNet backbones. Then for my different models, I just varied the image size, backbone size, external data, and upsample. (So I used same basic model for all).</p>
<p>I think if I had spent time making my single model template stronger, then all my models would have been stronger and the ensemble would have scored higher. I'm curious to see what the top teams did.</p>
<p>There was nothing special about my meta model. By itself, it had CV 0.760 and public LB 0.780, so I trusted that it wasn't overfitting train. It increased my CV 0.0005 which could have helped private LB a little but it turned out not to help private LB.</p>
<p>Thanks Roman. Random cropping worked very well. It allowed to train at small resolution speed but gave the models all the information from high resolution images.</p>
<p>I'm happy with my silver finish. To go higher, I think I needed to focus on a optimizing my single model design. If I started with stronger single models, my ensemble LB would have been much higher. </p>
<p>Wow, i just read that your ensemble CV is 0.960. That's incredible Gary and team. Congrats on building a great model, I look forward to reading your teams' full writeup.</p>
<p>Awesome. It is a great trick. We get all the information in the higher resolution images but we get to train at the speed of the smaller images. Congrats on solo gold Tim.</p>
<p>Yes at train time. I introduced a new variable <code>CROP = ???</code> in addition to <code>IMAGE_SIZE = ???</code>. The varible <code>IMAGE_SIZE</code> becomes <code>dim</code> in the code below. (This is my popular notebook updated)</p>
<pre><code>if augment:
if dim!=CROP: img = tf.image.random_crop(img, [CROP, CROP, 3])
img = transform(img,DIM=CROP)
img = tf.image.random_flip_left_right(img)
#img = tf.image.random_hue(img, 0.01)
img = tf.image.random_saturation(img, 0.7, 1.3)
img = tf.image.random_contrast(img, 0.8, 1.2)
img = tf.image.random_brightness(img, 0.1)
elif dim!=CROP:
img = tf.image.central_crop(img,CROP/dim)
</code></pre>
<p>I never trust public ensemble notebooks in competitions. I mixed some public notebook single models (that i read the code and trust) with my single models by blending public <code>submission.csv</code> files with my <code>submission.csv</code> files.</p>
<p>When blending with public notebooks, you need to be general like weighting everything equally, or weight strong LB as 2 and weak LB as 1. You can't fine tune the weights like you can if you use OOF. Otherwise you will overfit.</p>
<p>Thank you <a href="https://www.kaggle.com/ipythonx" target="_blank">@ipythonx</a> for your kind words. </p>
<p>Thanks Kaz. Great job with your team's strong finish. I look forward to reading about your model. What was your CV score?</p>
<p>This is a great job Manjesh. You did perfect. You built diverse models and maximized OOF CV. Then you trained meta model and ensembled it all together.</p>
<p>When you use 5 KFold, each model makes predictions on every training image. If you look at the output files of my public notebook <a href="https://www.kaggle.com/cdeotte/triple-stratified-kfold-with-tfrecords/output" target="_blank">here</a>, you will see a file named <code>oof.csv</code>. Now imagine that you have two <code>oof</code> files named <code>oof1.csv</code> and <code>oof2.csv</code>. To compute the ensemble score between two models, you do this</p>
<pre><code>import pandas as pd
from sklearn.metrics import roc_auc_score

df1 = pd.read_csv('oof1.csv')
df2 = pd.read_csv('oof2.csv')
auc = roc_auc_score(df1.target, 0.5 * df1.pred + 0.5 * df2.pred)
</code></pre>
<p>Note that <code>df1.target</code> and <code>df2.target</code> are the same thing. If you wonder what <code>df1.target</code> is, it is the same as </p>
<pre><code>train = pd.read_csv('train.csv')
print( train.target.values )
</code></pre>
<p>But they are in a different order as <code>df1.target</code> (which are ordered by how the TFRecords get read).</p>
<p>Thanks Konstantin.</p>
<p>Fantastic. Great solo silver. Congrats!</p>
<p>Thanks Rob. Congrats to you and your team's great finish. </p>
<p>Thanks Statking. I saw your writeup. You did fantastic! Congratulations on your excellent finish.</p>
<p>If I use my private LB 0.942 model and do <code>0.5 * image + 0.5 * meta</code> it becomes private 0.943. That is powerful.</p>
<p>If you load 512x512 and random crop to 256x256, that is not the same as loading 256x256. In the former, the images contain 4x more pixel information. Note that I am not resizing. My model is slowly viewing the entire 512x512 over time.</p>
<p>So loading 512x512 and cropping to 256x256 is the same as training on 512x512. And training on 512x512 always did better than using the smaller image sizes.</p>
<p>If you wish to train on 256x256, then you can load 256x256 and random crop to 128x128. Then your model will train very fast cause it processes 128x128 but it is training on 256x256</p>
<h1>Melanoma Model Ensemble!</h1>
<p>Thank you Kaggle, SIIM, and ISIC for an exciting competition. Thank you Kagglers for wonderful shared content and great discussions! </p>
<p>Early on I decided to build a large ensemble instead of optimizing a single model. The AUC metric seemed very unstable with this unbalanced dataset and using ensembles, heavy TTA, and crop augmentation helped stabilize it.</p>
<h1>My Final 3 Submissions</h1>
<p>My main (first) submission was an ensemble that maximized CV where all models used the same <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/165526" target="_blank">triple stratified leak-free CV</a> with <code>seed = 42</code>. (How to CV explained <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/175614" target="_blank">here</a>). Next I believed public notebooks could add diversity too, so my final 3 submissions were:</p>
<ul>
<li><code>Sub_1</code> - 9 models of mine ensemble - CV 0.9505 LB 0.9578 - Private 0.9418</li>
<li><code>Sub_2</code> - 5 mine plus 10 public single models - CV ??? LB 0.9662 - Private 0.9425</li>
<li><code>Sub_3 = 0.75 * Sub_1 + 0.25 * Sub_2</code> - CV ??? LB 0.9603 - Private 0.9429</li>
</ul>
<h1>Crop Augmentation</h1>
<p>Crop Augmentation was key to prevent overfitting during training when using external data, upsampling, and large EfficientNet backbones. Crop augmentation also helped stabilize AUC particularly when used in TTA.</p>
<p>Previously we saw that using different image sizes adds diversity (explained <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/160147" target="_blank">here</a>). What many people don't realize is that you can use TFRecord sized 512x512 but train on random 256x256 crops (different each epoch). Then training goes fast because your EfficientNet only processes size 256x256 but you are getting features from 512x512 resolution. This allows us to train quickly on large sizes such as 1024x1024 and 768x768 (using crops of 512 and 384 respectively).</p>
<p>In the picture below, we read the top row from TFRecords, then random crop, then train with the bottom row.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2Fb9fd40b45aa428e23a9a470322aa8c7b%2Fcrop.png?generation=1597700942674639&alt=media" alt=""></p>
<h1>My Final Models</h1>
<p>The following 8 models have ensemble CV 0.9500, Public LB 0.9577, and Private LB 0.9420 . Then ensembling meta data with geometric mean: <code>image_ensemble**0.9 * tabular_model**0.1</code>, increases CV to 0.9505, Public LB to 0.9578, and Private LB to 0.9418.</p>
<table>
<thead>
<tr>
<th>CV</th>
<th>LB</th>
<th>read size</th>
<th>crop size</th>
<th>effNet</th>
<th>ext data</th>
<th>upsample</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.936</td>
<td>0.956</td>
<td>512</td>
<td>384</td>
<td>B5</td>
<td>2018</td>
<td>1,1,1,1</td>
</tr>
<tr>
<td>0.935</td>
<td>0.937</td>
<td>768</td>
<td>512</td>
<td>B6</td>
<td>2019 2018</td>
<td>3,3,0,0</td>
</tr>
<tr>
<td>0.935</td>
<td>0.949</td>
<td>768</td>
<td>512</td>
<td>B7</td>
<td>2018</td>
<td>1,1,1,1</td>
</tr>
<tr>
<td>0.933</td>
<td>0.950</td>
<td>1024</td>
<td>512</td>
<td>B6</td>
<td>2018</td>
<td>2,2,2,2</td>
</tr>
<tr>
<td>0.927</td>
<td>0.942</td>
<td>768</td>
<td>384</td>
<td>B4</td>
<td>2018</td>
<td>0,0,0,0</td>
</tr>
<tr>
<td>0.920</td>
<td>0.941</td>
<td>512</td>
<td>384</td>
<td>B5</td>
<td>2019 2018</td>
<td>10,0,0,0</td>
</tr>
<tr>
<td>0.916</td>
<td>0.946</td>
<td>384</td>
<td>384</td>
<td>B345</td>
<td>no</td>
<td>0,0,0,0</td>
</tr>
<tr>
<td>0.910</td>
<td>0.950</td>
<td>384</td>
<td>384</td>
<td>B6</td>
<td>2018</td>
<td>0,0,0,0</td>
</tr>
</tbody>
</table>
<p>The above models use a variety of different augmentation, losses, optimizers, and learning rate schedules. External data explained <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/164910" target="_blank">here</a>. Upsample explained <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/169139" target="_blank">here</a>.</p>
<h1>My Training</h1>
<p>If you download my popular notebook <a href="https://www.kaggle.com/cdeotte/triple-stratified-kfold-with-tfrecords" target="_blank">here</a> to your local machine or cloud provider, then you can run the code quickly using multiple GPUs by adding the following one line of code</p>
<pre><code>DEVICE = "GPU"
strategy = tf.distribute.MirroredStrategy()
</code></pre>
<p>Most of my models including my most accurate single model with CV 0.936 and LB 0.956 were trained using four Nvidia V100 GPUs. Thank you Nvidia for the use of GPUs!</p>
<h1>Ensemble Pseudo Code</h1>
<p>In the past 2 months, I trained 50+ diverse models. How do we ensemble 50+ models? Train all models using the same triple stratified folds <code>seed = 42</code> from my notebook <a href="https://www.kaggle.com/cdeotte/triple-stratified-kfold-with-tfrecords" target="_blank">here</a>. Then to create an ensemble, start with the model that has largest CV and repeatedly try adding one model to increase CV. Whichever one additional model increases the CV the most (and at least 0.0003), keep that model and then iterate through all models again. Repeat this process until CV score stops increasing (by at least 0.0003).</p>
<pre><code>import numpy as np
from sklearn.metrics import roc_auc_score

# ASSUMED INPUTS: models = (n_models, n_samples) array of OOF predictions, true = labels
# START ENSEMBLE USING MODEL WITH LARGEST CV
best = int(np.argmax([roc_auc_score(true, m) for m in models]))
ensemble, cv = models[best].copy(), roc_auc_score(true, models[best])
while True:  # REPEAT UNTIL CV DOES NOT INCREASE BY 0.0003+
    # TRY ADDING EVERY MODEL ONE AT A TIME AND REMEMBER
    # HOW MUCH EACH INCREASES THE ENSEMBLE CV SCORE
    best_trial, best_auc = None, cv + 0.0003
    for k in range(len(models)):
        for w in np.arange(0.01, 1.00, 0.01):
            # TRY ADDING MODEL k WITH WEIGHT w TO ENSEMBLE
            trial = w * models[k] + (1 - w) * ensemble
            auc_trial = roc_auc_score(true, trial)
            if auc_trial >= best_auc:
                best_trial, best_auc = trial, auc_trial
    # ADD THE ONE NEW MODEL THAT INCREASED CV THE MOST
    # CHECK NEW CV SCORE. IF IT INCREASED, REPEAT LOOP
    if best_trial is None:
        break
    ensemble, cv = best_trial, best_auc
</code></pre>
<h1>Ensemble Starter Notebook</h1>
<p>I posted my solution code <a href="https://www.kaggle.com/cdeotte/forward-selection-oof-ensemble-0-942-private" target="_blank">here</a> showing how to ensemble OOF files using forward selection. The notebook uses 39 of my Melanoma models' OOF files. Forward selection chooses 8 of them and achieves CV 0.950, Public LB 0.958, Private LB 0.942.</p>
<p>Thanks <a href="https://www.kaggle.com/ankitsajwan" target="_blank">@ankitsajwan</a> . Congratulations on achieving solo silver. Well done.</p> |
SIIM-ISIC Melanoma Classification | 222th Place Solution: Correlation CV vs Public LB 0.16, vs Private LB 0.73 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thank organizers for holding this competition and Kagglers for posting truly useful datasets, notebooks, discussions!!!</p>
<p>Many competitors have noticed that the correlation of CV with the Public LB is unstable. Some competitors have addressed this by</p>
<ul>
<li><a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/175412" target="_blank">Using external data for validation</a></li>
<li><a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/175344" target="_blank">Ensemble</a></li>
</ul>
<p>Seeing these posts, I wondered how unstable the correlations of my single models' CV with the Public LB and Private LB are. I investigated that using late submissions.</p>
<p>Note that, throughout this post, "CV" means "OOF CV", not "Averaged CV over the folds".</p>
<p>I used <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> 's <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/165526" target="_blank">Triple Stratified Leak-Free KFold CV dataset</a> with 5 folds, and didn't include external data in validation phase. (Thank <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> for providing very useful dataset and clear explanation. I could spend much time for training diverse models because of your datasets!)</p>
<p>Trained models are </p>
<ul>
<li>Efficient Net without meta data</li>
<li>Efficient Net with meta data</li>
<li>ResNest without meta data</li>
</ul>
<p>As I wrote in the title, my single models' CV correlates more with the Private LB than with the Public LB!</p>
<ul>
<li>CV vs Public LB: 0.16</li>
<li>CV vs Private LB: 0.73</li>
<li>(Public LB vs Private LB: 0.53)</li>
</ul>
<p>Actual values of CV, Public LB, Private LB are below.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1772718%2F91caae5b19967358979bb6e9fb2817eb%2F2020-08-20_00h07_39.png?generation=1597849679012485&alt=media" alt=""></p>
<p>I hypothesize that the reason for the huge gap between Public LB and Private LB is the very small number of positive samples in the Public test set. <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a>'s <a href="https://www.kaggle.com/cpmpml/number-of-public-melanoma-is-78-or-77" target="_blank">notebook</a> and <a href="https://www.kaggle.com/sirishks" target="_blank">@sirishks</a>'s <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/167215" target="_blank">analysis</a> show that the numbers of positive samples in the Public and Private test sets are 77 or 78 and 182, respectively.<br>
(Or maybe I was just lucky enough to get such a relatively stable correlation of CV with the Private LB. I'm also not sure whether my training settings are suitable for getting a stable correlation.)</p>
<h1>Details of my solution</h1>
<h3>Training settings</h3>
<p>The training settings of EfficientNet without/with meta data are different from those of ResNest without meta data (to obtain diverse models). The way meta data is used is identical to the <a href="https://arxiv.org/abs/1910.03910" target="_blank">1st place solution of ISIC 2019</a>. Some settings are also based on this solution.</p>
<h4>Common settings</h4>
<ul>
<li>Pytorch with GPU and TPU</li>
<li><a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/165526" target="_blank">Triple Stratified Leak-Free KFold CV dataset</a> with 5 folds</li>
<li>External data are <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/164910" target="_blank">whole 2017 and 2018, malignant of 2019</a>, <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/169139" target="_blank">additional 580 malignant samples</a> prepared by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> (big thanks again!)</li>
<li>RAdam optimizer</li>
<li>BCE loss</li>
<li>16-fold TTA for the validation and test phases (4 different scales, each with horizontal flip, vertical flip, and both. Actual TTA images look like the ones below. This is also inspired by the ISIC 2019 1st place solution)</li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1772718%2Ff4a76e46576a1c13fa5132f48c4813fc%2F2020-08-19_23h50_09.png?generation=1597849603207907&alt=media" alt=""></p>
<h4>Efficient Net</h4>
<ul>
<li>b0 ~ b6 with different image size (combinations of model size with image size are same with <a href="https://arxiv.org/abs/1905.11946" target="_blank">original paper</a>, like b0-224, b1-240, …, b6-528.)</li>
<li>Pretrained weights on ImageNet</li>
<li>Augmentations (actual snippet below)</li>
</ul>
<pre><code>p = .5
train_transforms = A.Compose([
A.RandomResizedCrop(img_size, img_size, p=p),
A.ShiftScaleRotate(rotate_limit=(-90, 90), p=p),
A.HorizontalFlip(p=p),
A.VerticalFlip(p=p),
A.HueSaturationValue(p=p),
A.RandomBrightnessContrast(p=p),
A.OneOf([A.IAAAffine(scale=1.0, translate_percent=(.5, .5), translate_px=None, rotate=0.0, shear=(50, 50), order=1, cval=0, mode='reflect', p=p),
A.IAAAffine(scale=1.0, translate_percent=(.5, .5), translate_px=None, rotate=0.0, shear=(30, 30), order=1, cval=0, mode='reflect', p=p),
A.IAAAffine(scale=1.0, translate_percent=(.3, .3), translate_px=None, rotate=0.0, shear=(0, 0), order=1, cval=0, mode='reflect', p=p)], p=p),
A.Cutout(num_holes=1, max_h_size=32, max_w_size=32, fill_value=0, always_apply=False, p=p),
A.Normalize(
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225],
),
ToTensorV2()
])
val_transforms = A.Compose([
A.Normalize(
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225],
),
ToTensorV2()
])
</code></pre>
<ul>
<li>12 epochs for training wo meta data</li>
<li>30 epochs for training w meta data (the initial weights are the best-AUC weights obtained from training wo meta data)</li>
<li>No LR scheduler</li>
</ul>
<h4>ResNest</h4>
<p>First of all, thanks to <a href="https://www.kaggle.com/arroqc" target="_blank">@arroqc</a> for posting the <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/173272" target="_blank">topic pointing out that ResNest is faster than EfficientNet implemented in Pytorch</a>. I was really suffering from the same problem at that time, and I guess training ResNest really helped to increase my CV in the limited time!</p>
<ul>
<li>ResNest50, 101, 200 with different image size (combinations of model size with image size are 50-224, 101-256, 200-320, which are based on <a href="https://github.com/zhanghang1989/ResNeSt#pretrained-models" target="_blank">those of pretrained models</a>)</li>
<li>Pretrained weights on ImageNet</li>
<li>A head of 3 Dense layers with ReLU and Dropout (probability=0.3); the numbers of neurons are 1024, 512, 256. (This is based on <a href="https://www.kaggle.com/ajaykumar7778" target="_blank">@ajaykumar7778</a>'s <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/172882#961932" target="_blank">post</a>. Thanks for sharing your ideas! I guess it made my models diverse and increased the CV of the ensemble!)</li>
<li>Images are resized from 1024x1024 resolution</li>
<li>Augmentations (actual snippet below)</li>
</ul>
<pre><code>import cv2
import albumentations as A
from albumentations.pytorch import ToTensorV2

p = .5
train_transforms = A.Compose([
    A.RandomResizedCrop(img_size, img_size, scale=(0.5, 1.0), p=1),
    A.ShiftScaleRotate(rotate_limit=(-90, 90), p=p),
    A.HorizontalFlip(p=p),
    A.VerticalFlip(p=p),
    A.HueSaturationValue(p=p),
    A.RandomBrightnessContrast(p=p),
    A.IAAAffine(scale=1.0, translate_percent=(.0, .0), translate_px=None, rotate=0.0, shear=(2.0, 2.0), order=1, cval=0, mode='reflect', p=p),
    A.Cutout(num_holes=1, max_h_size=32, max_w_size=32, fill_value=0, always_apply=False, p=p),
    A.Normalize(
        mean=[0.485, 0.456, 0.406],
        std=[0.229, 0.224, 0.225],
    ),
    ToTensorV2()
])
val_transforms = A.Compose([
    A.Resize(int(img_size*1.25), int(img_size*1.25), interpolation=cv2.INTER_AREA, always_apply=False, p=1),
    A.CenterCrop(img_size, img_size, always_apply=False, p=1.0),
    A.Normalize(
        mean=[0.485, 0.456, 0.406],
        std=[0.229, 0.224, 0.225],
    ),
    ToTensorV2()
])
</code></pre>
<ul>
<li>CyclicLR scheduler (This is also based on <a href="https://www.kaggle.com/ajaykumar7778" target="_blank">@ajaykumar7778</a>'s <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/172882#961932" target="_blank">post</a>. Thanks again for sharing your ideas! It seemed that this helped to stabilize training!)</li>
</ul>
<h2>Ensemble for Submission</h2>
<p>My final submission relied hugely on OOF CV. I ensembled the models with relatively high OOF CV by taking a weighted sum of rank-transformed probabilities, with the optimal weights found by Optuna. (The code for rank AUC is based on <a href="https://www.kaggle.com/steubk" target="_blank">@steubk</a> 's <a href="https://www.kaggle.com/steubk/simple-oof-ensembling-methods-for-classification" target="_blank">notebook</a>. Thanks for sharing! This ensemble really boosted my CV!)</p>
<p>Only a portion of the models were used in the ensemble: </p>
<ul>
<li>b0 and b1 w meta</li>
<li>b3, 5 and 6 wo/w meta</li>
<li>b4 wo meta</li>
<li>resnest 50, 101 and 200 wo meta</li>
</ul>
<p>(I selected these models just by looking at OOF CV, because only 1 hour remained at that time! I believe there were wiser ways to select models!)</p>
<p>Ensemble scores are CV: 0.9534, Public LB: 0.9449, Private LB: 0.9391</p>
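<p>For illustration, a minimal sketch of this rank-plus-Optuna weighting on OOF predictions (here <code>oof_preds</code> and <code>y_true</code> are assumed stand-ins for my OOF tables, not my exact code):</p>
<pre><code>import numpy as np
import optuna
from scipy.stats import rankdata
from sklearn.metrics import roc_auc_score

# oof_preds: {model_name: np.array of OOF probabilities}, y_true: OOF labels
ranked = {k: rankdata(v) / len(v) for k, v in oof_preds.items()}  # rank-transform each model
names = list(ranked.keys())

def objective(trial):
    w = np.array([trial.suggest_float(n, 0.0, 1.0) for n in names])
    w = w / (w.sum() + 1e-12)  # normalize weights to sum to 1
    blend = sum(wi * ranked[n] for wi, n in zip(w, names))
    return roc_auc_score(y_true, blend)

study = optuna.create_study(direction='maximize')
study.optimize(objective, n_trials=500)
best_w = np.array([study.best_params[n] for n in names])
best_w = best_w / best_w.sum()
</code></pre>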
<h2>Things Not Work Well</h2>
<ul>
<li>Oversampling malignant image to balance 1:1 (inspired by <a href="https://arxiv.org/abs/1710.05381" target="_blank">this paper</a>)</li>
<li>Label smoothing with alpha 0.05 (inspired by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> 's <a href="https://www.kaggle.com/cdeotte/triple-stratified-kfold-with-tfrecords" target="_blank">notebook</a>)</li>
<li>Increase epochs to 100 with StepLR scheduler with step size = 25 and gamma = 1/5 (inspired by 2019 ISIC 1st place solution)</li>
</ul>
<p>Thanks for reading my post, and sorry for my poor English!</p>
<p>As this is the first competition I worked hard on, I have learned a huge amount! For me, the unstable CV vs Public LB correlation was a really good learning opportunity, because I believe that not overfitting to known test data is important in real-world problems.</p>
<p>P.S. I don't have rich computational resources of my own, so I used cloud GPUs/TPUs on GCP, AWS, and Azure. It cost around $1041 😅. In the next comp, a more cost-effective approach is definitely needed! </p>
SIIM-ISIC Melanoma Classification | 241st place (bronze) solution (+ GitHub) with batch sampling | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>You're welcome <a href="https://www.kaggle.com/bijeeshavs" target="_blank">@bijeeshavs</a> </p>
<p>Thanks <a href="https://www.kaggle.com/romanweilguny" target="_blank">@romanweilguny</a> , I also was surprised by <code>sample_from_datasets</code>, I've been looking for a long time a simple and elegant way to do batch sampling in <code>TF</code>, but be aware that it uses a little more memory.</p>
<p>Where you got the implementation for <code>SWA</code>? I have tried the version from <code>tensorflow_addons</code> but it crashed my Kaggle and Colab env.</p>
<p>I have tried the ensembling from Chris using 27 of my best OOFs and got a lower CV, public, and private LB from it; my own ensembling code worked better, but maybe I did something wrong.</p>
<p>I used meta-data on all my submissions, but just blending values like <code>90% image ensemble + 10% meta ensemble</code>, and got only a small improvement.</p>
<p>From what I saw I had good models; maybe if I had ensembled them in a more effective way I would have been somewhere close to 50th. Still learning how to diversify the ensemble.</p>
<p>Thanks <a href="https://www.kaggle.com/utshabkumarghosh" target="_blank">@utshabkumarghosh</a> </p>
<p>Thanks <a href="https://www.kaggle.com/ivanvoid" target="_blank">@ivanvoid</a> !, this <code>model backlog</code> takes some time to update during the competitions, but it is a time saver when you need com compare and evaluated your models later.</p>
<p>I think that the best notebook was <code>136</code>. It was my best single model on the private LB with <code>0.9396</code> and public <code>0.9470</code>, and it also had a nice CV of <code>0.937</code>.</p>
<p>Thanks <a href="https://www.kaggle.com/epocxy" target="_blank">@epocxy</a> !</p>
<p>Thanks <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>. I think batch sampling was the single trick that helped me the most; after it, I very consistently got CV 0.930+. I guess it could work even better if you use two datasets, one with only positive samples and another with only negative samples. I also got nice results sampling from datasets from different years, like {10%: 2019, 50%: 2020+2018, 40%: malignants}</p>
<p>Thanks <a href="https://www.kaggle.com/ambarish" target="_blank">@ambarish</a>. Yeah, everything was created manually; I wish there was an automated way 😄. But as I update it almost daily it stays easy to maintain, and after the competition ends it works as a cool portfolio of projects and experiments.</p>
<p>Hey guys, I fell 93 positions after the shakedown but still managed to get a bronze. It was not a very good placing, but I still think it would be nice to share my solution and experiments on my <a href="https://github.com/dimitreOliveira/melanoma-classification" target="_blank">Git</a> in case anyone is interested in taking a look.</p>
<p>My best solution of the 3 chosen was the one that, according to my experiments, had the best CV values, so trusting my CV helped me here. Basically, it was a weighted average using an exponential/log transform, like some other people also did.</p>
<h4>Models</h4>
<ul>
<li>1x EfficientNet B4 384x384</li>
<li>3x EfficientNet B4 512x512</li>
<li>1x EfficientNet B5 512x512</li>
</ul>
<p>They were trained using one augmentation pipeline and predicted using a lighter one (without Cutout and shear), all using the data provided by Chris.<br>
All models were very simple: just a regular average pooling and dense head, label smoothing of 0.05, and the Adam optimizer. One model example:</p>
<pre><code>import efficientnet.tfkeras as efn
from tensorflow.keras import layers as L, losses, optimizers
from tensorflow.keras.models import Model

# config holds the run's hyperparameters (weights path, learning rate, ...)
def model_fn(input_shape=(256, 256, 3)):
    input_image = L.Input(shape=input_shape, name='input_image')
    base_model = efn.EfficientNetB4(input_shape=input_shape,
                                    weights=config['BASE_MODEL_WEIGHTS'],
                                    include_top=False)
    x = base_model(input_image)
    x = L.GlobalAveragePooling2D()(x)
    output = L.Dense(1, activation='sigmoid', kernel_initializer='zeros', name='output')(x)
    model = Model(inputs=input_image, outputs=output)
    opt = optimizers.Adam(learning_rate=config['LEARNING_RATE'])
    loss = losses.BinaryCrossentropy(label_smoothing=0.05)
    model.compile(optimizer=opt, loss=loss, metrics=['AUC'])
    return model
</code></pre>
<h4>Training</h4>
<p>One thing that worked very well for me was upsampling, but I did it in a way that I did not see other people doing: I used <code>tf.data.experimental.sample_from_datasets</code> to sample from 2 different datasets. One was the regular data (2020 + 2018 + 2017) and the other was just malignant samples (all sets); then I used the weights <code>[0.6, 0.4]</code>, so that <code>40%</code> of every batch was drawn from the malignant-only data. This made the models converge faster.<br>
I used only TPUs, both from Kaggle and Colab.<br>
Also, I got better results using a cyclical cosine learning rate with warm restarts and warm-up, shown below.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1182060%2Fa0849f9196f11493360c242a33cb1edc%2Fdownload.png?generation=1597710718834670&alt=media" alt=""></p>
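<p>For reference, a rough sketch of a warm-up plus cosine-with-warm-restarts schedule like the one plotted above (all constants here are placeholders, not my tuned values):</p>
<pre><code>import math
import tensorflow as tf

LR_START, LR_MAX, LR_MIN = 1e-6, 1e-4, 1e-6
WARMUP_EPOCHS, CYCLE_EPOCHS, DECAY = 3, 12, 0.8

def lrfn(epoch):
    """Linear warm-up, then a cosine decay that restarts every CYCLE_EPOCHS, damped by DECAY."""
    if epoch >= WARMUP_EPOCHS:
        cycle, progress = divmod(epoch - WARMUP_EPOCHS, CYCLE_EPOCHS)
        cosine = 0.5 * (1 + math.cos(math.pi * progress / CYCLE_EPOCHS))
        return LR_MIN + (LR_MAX * DECAY**cycle - LR_MIN) * cosine
    return LR_START + (LR_MAX - LR_START) * epoch / WARMUP_EPOCHS

lr_callback = tf.keras.callbacks.LearningRateScheduler(lrfn, verbose=True)
</code></pre>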
<p>Here is a fold training history for illustration.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1182060%2Fa488650b5cde87105510c0ed5c317a90%2Fhistory_fold2.png?generation=1597710894876834&alt=media" alt=""></p>
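<p>Going back to the batch sampling described above, a minimal sketch of it, assuming <code>regular_ds</code> and <code>malignant_ds</code> are already-built (shuffled and repeated) <code>tf.data</code> pipelines of (image, label) pairs and <code>BATCH_SIZE</code> is defined elsewhere:</p>
<pre><code>import tensorflow as tf

# regular_ds: 2020 + 2018 + 2017 data; malignant_ds: malignant samples only (all sets)
# Both datasets should be .repeat()ed so sampling never exhausts one of them.
sampled_ds = tf.data.experimental.sample_from_datasets(
    [regular_ds, malignant_ds], weights=[0.6, 0.4])

train_ds = (sampled_ds
            .batch(BATCH_SIZE)                        # ~40% of each batch is malignant
            .prefetch(tf.data.experimental.AUTOTUNE))
</code></pre>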
<p>Here is the link to <a href="https://github.com/dimitreOliveira/melanoma-classification" target="_blank">my Github</a>, there you will find all my models, <a href="https://github.com/dimitreOliveira/melanoma-classification/tree/master/Model%20backlog" target="_blank">its scores</a>, EDAs, scripts and a <a href="https://github.com/dimitreOliveira/melanoma-classification/tree/master/Documentation" target="_blank">page with all relevant content</a> that I gathered during the competition.</p>
<p>About this competition: it was a great opportunity to experiment a little more with TPUs and TensorFlow, especially with the dataset API. I feel that I should join another computer vision competition to get some more practice. <br>
I tried hard to make BiT (Big Transfer) work but had no success. I got good results with Cutout and was getting close to making MixUp work well here, but ran out of time.</p>
<p>I tweaked my augmentation pipeline a lot, and this is roughly what gave me the best results:</p>
<pre><code>import tensorflow as tf

# transform_shear, transform_rotation and the data_augment_* helpers are custom
# functions defined elsewhere in the training pipeline; config holds the image size.
def data_augment(image):
    p_rotation = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
    p_rotate = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
    p_cutout = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
    p_shear = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
    p_crop = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
    if p_shear > .2:
        if p_shear > .6:
            image = transform_shear(image, config['HEIGHT'], shear=20.)
        else:
            image = transform_shear(image, config['HEIGHT'], shear=-20.)
    if p_rotation > .2:
        if p_rotation > .6:
            image = transform_rotation(image, config['HEIGHT'], rotation=45.)
        else:
            image = transform_rotation(image, config['HEIGHT'], rotation=-45.)
    if p_crop > .2:
        image = data_augment_crop(image)
    if p_rotate > .2:
        image = data_augment_rotate(image)
    image = data_augment_spatial(image)
    image = tf.image.random_saturation(image, 0.7, 1.3)
    image = tf.image.random_contrast(image, 0.8, 1.2)
    image = tf.image.random_brightness(image, 0.1)
    if p_cutout > .5:
        image = data_augment_cutout(image)
    return image
</code></pre>
<p>I would like to thank the community for all the helpful discussion and work shared and give a special thanks to <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> for his amazing contributions to us all during the competition, this was a great learning opportunity.</p>
<p>Nice <a href="https://www.kaggle.com/datafan07" target="_blank">@datafan07</a> , let me know if there is anything to improve.<br>
Congratulations on your placing and thanks for your contributions.</p> |
SIIM-ISIC Melanoma Classification | 24th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I used BCE</p>
<p>20 for pretrain,10 for finetune</p>
<p>A single card can run batch size 10, with 2 cards, it's 20.</p>
<p>I didn’t make another stratified csv for 2019 data. Just used Chris’s. The diagnosis column is the label for 8 class classification.</p>
<p><strong>Trust your CV</strong><br>
I entered this competition after Panda and am surprised by the result.<br>
The most important thing here should be your CV. Always trust it. A single positive sample in the public test set can have a 0.0064 effect on the public LB, and the effect is very likely to accumulate during ensembling and selection.<br>
I used <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>'s JPEG files and his triple stratified CV. They helped me quickly start training and build a strong CV. Thanks to him for his great work.</p>
<p><strong>Best Single Model</strong><br>
I used pytorch and 2xRTX2080 to generate my result and used no meta data.<br>
The best single model is EfficientNet-B5 at 512x512 resolution. I first trained the model on 2019 data with 8 classes, then fine-tuned it with 2020+2018/2017 data. This method improves CV by 0.01 and private LB by 0.005, but gives no improvement on the public LB.<br>
Augmentations: ImageCompression, Flips, ShiftScaleRotate, HueSaturationValue, RandomBrightnessContrast, CutOut.<br>
This model achieves 0.944 on CV (before TTA) and 0.9447-0.9462 on the private LB (depending on the averaging method). Sadly I picked the lowest one. xD.</p>
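<p>A rough sketch of this two-stage setup (shown with <code>timm</code> purely for illustration; my actual training code differs):</p>
<pre><code>import timm
import torch.nn as nn

# Stage 1: 8-class diagnosis classifier pretrained on ISIC 2019 data
model = timm.create_model('tf_efficientnet_b5_ns', pretrained=True, num_classes=8)
# ... train on 2019 images with CrossEntropyLoss over the 8 diagnosis classes ...

# Stage 2: swap the head for a single melanoma logit and fine-tune on 2020 (+2018/2017) data
model.classifier = nn.Linear(model.classifier.in_features, 1)
criterion = nn.BCEWithLogitsLoss()
# ... fine-tune with a lower learning rate on the competition data ...
</code></pre>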
<p><strong>Ensemble</strong><br>
I ensembled this model with some of my previous models. Most of these overfitted.<br>
The ensembles score 0.9452-0.9492 on the private LB. Made the wrong choice again. xD.</p>
SIIM-ISIC Melanoma Classification | 24th place solution and my way to get it | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hello everyone, congrats to all winners and thanks Kaggle and SIIM for this competition </p>
<p>This is my second medal-awarded competition (finally I'm an Expert, yeah!!!), and my second medical image analysis competition. My first one was the <a href="https://www.kaggle.com/c/aptos2019-blindness-detection" target="_blank">APTOS Blindness Detection</a> last year, and these are some similar problems I faced here:</p>
<p><strong>Imbalanced classes</strong></p>
<p>A common problem in many medical image analysis tasks is an imbalanced data distribution (few images of positive cases). In APTOS there were 5 classes (or 5 degrees of disease, because the problem could also be interpreted as regression) and 2 classes strongly prevailed. In this competition it was a binary classification case and the first (benign) class was much larger. The obvious way to fix the class balance was to use external data, and here is the second challenge:</p>
<p><strong>Use of external data</strong> </p>
<p>In both competitions we had to deal with quite a large amount of external data; you can use it to balance classes or just use all of it. In APTOS many people pretrained their models on the old external images and fine-tuned on the actual data (and so did I). Inept use of external data could decrease your score, since the old (external) and actual datasets were quite different. In SIIM the old and actual data were much less different, so we were more free to use it.<br>
I used 2018+2019 external data throughout the training pipeline, validating on 2020 data only. </p>
<h2>My approcah</h2>
<p>At the beginning I tried various light models (like EffNet B0-B3 and ResNet18 with image size 256x256) using PyTorch and various augmentation techniques, both for metadata and for images, experimenting with focal loss and label smoothing. But the maximum I achieved was 0.933 on the public LB. Meanwhile, in the discussions participants talked about experiments with sizes like 512x512 and quite large models like EffNetB6; these kinds of experiments were hard for me because I used only PyTorch and GPUs.</p>
<p>Throughout my deep learning journey I had used PyTorch, but topics related to TensorFlow and TPUs appeared in discussions and public notebooks more and more, and I could not ignore them anymore. Finally I found that using a TPU I could train heavier models on larger images! </p>
<p><strong>Making ensemble</strong></p>
<p>The <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/154683" target="_blank">2019 1st place solution</a> is a blend of various versions of EfficientNet with various augmentation techniques and image sizes, so I decided to implement something like that.</p>
<p>I used a pipeline like <a href="https://www.kaggle.com/cdeotte/triple-stratified-kfold-with-tfrecords" target="_blank">Triple Stratified KFold</a>, using 2018+2019+2020 data for training and only 2020 data for validation. </p>
<table>
<thead>
<tr>
<th>Model</th>
<th>Image Size</th>
<th>Initial Weights</th>
<th>Public Score</th>
<th>Private Score</th>
</tr>
</thead>
<tbody>
<tr>
<td>EfficientNetB5</td>
<td>512</td>
<td>ImageNet</td>
<td>0.9524</td>
<td>0.9200</td>
</tr>
<tr>
<td>EfficientNetB5</td>
<td>768</td>
<td>ImageNet</td>
<td>0.9469</td>
<td>0.9185</td>
</tr>
<tr>
<td>EfficientNetB5</td>
<td>1024</td>
<td>ImageNet</td>
<td>0.9479</td>
<td>0.9285</td>
</tr>
<tr>
<td>EfficientNetB6</td>
<td>384</td>
<td>ImageNet</td>
<td>0.9497</td>
<td>0.9277</td>
</tr>
<tr>
<td>EfficientNetB6</td>
<td>512</td>
<td>ImageNet</td>
<td>0.9488</td>
<td>0.9258</td>
</tr>
<tr>
<td>EfficientNetB6</td>
<td>768</td>
<td>ImageNet</td>
<td>0.9502</td>
<td>0.9277</td>
</tr>
<tr>
<td>EfficientNetB6</td>
<td>1024</td>
<td>ImageNet</td>
<td>0.9478</td>
<td>0.9287</td>
</tr>
<tr>
<td>EfficientNetB7</td>
<td>512</td>
<td>ImageNet</td>
<td>0.9480</td>
<td>0.9239</td>
</tr>
<tr>
<td>EfficientNetB7</td>
<td>768</td>
<td>ImageNet</td>
<td>0.9379</td>
<td>0.9083</td>
</tr>
<tr>
<td>EfficientNetB5</td>
<td>512</td>
<td>NoisyStudent</td>
<td>0.9480</td>
<td>0.9194</td>
</tr>
<tr>
<td>EfficientNetB6</td>
<td>512</td>
<td>NoisyStudent</td>
<td>0.9435</td>
<td>0.9242</td>
</tr>
<tr>
<td>EfficientNetB7</td>
<td>512</td>
<td>NoisyStudent</td>
<td>0.9510</td>
<td>0.9218</td>
</tr>
<tr>
<td>EfficientNetB5</td>
<td>768</td>
<td>NoisyStudent</td>
<td>0.9543</td>
<td>0.9295</td>
</tr>
<tr>
<td>EfficientNetB6</td>
<td>768</td>
<td>NoisyStudent</td>
<td>0.9424</td>
<td>0.9218</td>
</tr>
<tr>
<td>ResNet152</td>
<td>512</td>
<td>ImageNet</td>
<td>0.9184</td>
<td>0.8901</td>
</tr>
<tr>
<td>InceptionResNetV2</td>
<td>768</td>
<td>ImageNet</td>
<td>0.9338</td>
<td>0.9161</td>
</tr>
<tr>
<td>Blending</td>
<td>---</td>
<td>---</td>
<td>0.9602</td>
<td>0.9406</td>
</tr>
</tbody>
</table>
<p>Based on <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/160147" target="_blank">this idea</a>, I mixed various models with various training image sizes. </p>
<p><strong>Public submissions</strong></p>
<p>We can't ignore them, since a public submission landed in the silver medal zone!<br>
I also made two submissions mixing this ensemble with subs from some public kernels, like these: <br>
<a href="https://www.kaggle.com/truonghoang/stacking-ensemble-on-my-submissions" target="_blank">This one</a>, because it consists of models like VGG and SeResNeXt, which I think are quite uncorrelated with my EffNets</p>
<p><a href="https://www.kaggle.com/datafan07/analysis-of-melanoma-metadata-and-effnet-ensemble" target="_blank">This one</a>, because it has good work on metadata analysis</p>
<p>I did not use kernels of the “public blending” or “public minimax ensemble” kind, which were in fact combinations of other submissions; instead I tried to choose the public submissions that correlated as little as possible with my ensemble and with each other. </p>
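<p>For illustration, a toy sketch of how such submissions can be rank-blended and checked for correlation (file names and weights below are placeholders):</p>
<pre><code>import pandas as pd

files = ['my_ensemble.csv', 'public_sub_1.csv', 'public_sub_2.csv']  # placeholder names
subs = [pd.read_csv(f).sort_values('image_name').reset_index(drop=True) for f in files]

# Check Spearman (rank) correlation between candidate submissions
preds = pd.concat([s['target'].rename(f) for s, f in zip(subs, files)], axis=1)
print(preds.corr(method='spearman'))

# Rank-average blend: convert each submission to percentile ranks, then take a weighted mean
weights = [0.6, 0.2, 0.2]
blend = subs[0][['image_name']].copy()
blend['target'] = sum(w * s['target'].rank(pct=True) for w, s in zip(weights, subs))
blend.to_csv('blend_submission.csv', index=False)
</code></pre>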
<p>In the final days of the competition I fell from ~80th position to ~240th, and it was nerve-racking, but after the shakeup I, like many others, was thrown quite high (about 200 positions up)! <br>
And I'm really not sure about the reasons for it! On the one hand, a blend of all my models without any public submissions was around 800th position on the Public LB but something like 140-150th on the Private LB; on the other hand, a simple blend of public subs reached the silver zone… I think this story teaches us not to trust the Public LB and to choose the most stable models for the final sub. </p>
<p>I think it was quite a lottery: many people conducted successful experiments but chose the wrong final submissions. As for me, I think I was saved by a fairly large ensemble and careful use of public submissions.</p>
<p>Special thanks to <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> for his great <a href="https://www.kaggle.com/cdeotte/triple-stratified-kfold-with-tfrecords" target="_blank">kernel</a> and TFRecords datasets covering almost all possible approaches. They helped me a lot throughout the competition!</p>
<p><a href="https://www.kaggle.com/dimitreoliveira" target="_blank">@dimitreoliveira</a> thanks =) </p>
<p>I used TF with kaggle TPU<br>
Loss is BCE with label smoothing 0.05 </p>
<p>No, the reason for using TF (TPU) became available a little bit later :) </p>
<p><a href="https://www.kaggle.com/truonghoang" target="_blank">@truonghoang</a> thanks for submissions of VGG and SeResNext models :)</p>
<p>Thanks! That was actually the main idea - build uncorrelated ensemble</p> |
SIIM-ISIC Melanoma Classification | 254th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to the kaggle and the organizers for hosting such an interesting competition. And I like to thank my teammates, brother <a href="https://www.kaggle.com/udaykamal" target="_blank">@udaykamal</a>, and <a href="https://www.kaggle.com/tahsin" target="_blank">@tahsin</a> for the insightful discussion that we've made along with this journey. And last but not least, We're greatly thankful to the <strong>MVP</strong> of this competition, <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> for all the contribution that he's shared from the very beginning of this competition. </p>
<p>Our experiment is based on <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> 's strong <a href="https://www.kaggle.com/cdeotte/triple-stratified-kfold-with-tfrecords" target="_blank">baseline starter</a> and we extended it by integrating various types of the modeling approach.</p>
<h1>Brief Summary</h1>
<ul>
<li>All the base models are <code>EfficientNets</code>, abbreviated here as <strong>E-Net</strong>. A total of 20 models only.</li>
</ul>
<pre><code>- `E-Net B2` (2x)
- `E-Net B3` (2x)
- `E-Net B4` (3x)
- `E-Net B5` (2x)
- `E-Net B6` (5x)
- `E-Net B7` (6x)
</code></pre>
<ul>
<li>We have used various types of top models (pooling heads), including </li>
</ul>
<pre><code>- Global Average Pooling (GAP)
- Global Max Pooling (GMP)
- Attention Weighted Net (AWN)
- Generalized Mean Pooling (GeM)
- Global Average Attention Mechanism (GAAM, [ours])
</code></pre>
<p>Ok, here below are the full details. (<strong>TL,DR</strong>)</p>
<hr>
<h1>E-Net 2</h1>
<table>
<thead>
<tr>
<th>Model</th>
<th>Top</th>
<th>Seed</th>
<th>Data</th>
<th>Img</th>
<th>Folds</th>
<th>CV</th>
<th>LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>a-ENet 2</td>
<td>GAP</td>
<td>202</td>
<td>'20+'18</td>
<td>512</td>
<td>5</td>
<td>0.904</td>
<td>0.9345</td>
</tr>
<tr>
<td>b-ENet 2</td>
<td>GAP</td>
<td>1234</td>
<td>'20+'18</td>
<td>512</td>
<td>5</td>
<td>0.907</td>
<td>0.9346</td>
</tr>
</tbody>
</table>
<p>Next, based on the <strong>CV</strong> score, we took a simple average of them. </p>
<pre><code>ENet 2 = (a-ENet 2 + b-ENet 2)/2
</code></pre>
<p>The resultant prediction graph is as follows:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1984321%2Ff107b28e41ce558fa67b26e0abe56730%2F2.png?generation=1597815112995545&alt=media" alt=""></p>
<h1>E-Net 3</h1>
<table>
<thead>
<tr>
<th>Model</th>
<th>Top</th>
<th>Seed</th>
<th>Data</th>
<th>Img</th>
<th>Folds</th>
<th>CV</th>
<th>LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>a-ENet 3</td>
<td>GAAM</td>
<td>101</td>
<td>'20+'18</td>
<td>512</td>
<td>5</td>
<td>0.890</td>
<td>0.9462</td>
</tr>
<tr>
<td>b-ENet 3</td>
<td>GAP</td>
<td>2020</td>
<td>'20+'18</td>
<td>512</td>
<td>5</td>
<td>0.912</td>
<td>0.9435</td>
</tr>
</tbody>
</table>
<p>Next, based on the <strong>CV</strong> score, we took a simple weighted average of them. </p>
<pre><code>ENet 3 = (a-ENet 3*1 + b-ENet 3*2)/3
</code></pre>
<p>The resultant prediction graph is as follows:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1984321%2F4e767d1caa113e72e38aafcce541f403%2F3.png?generation=1597815707202196&alt=media" alt=""></p>
<h1>E-Net 4</h1>
<table>
<thead>
<tr>
<th>Model</th>
<th>Top</th>
<th>Seed</th>
<th>Data</th>
<th>Img</th>
<th>Folds</th>
<th>CV</th>
<th>LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>a-ENet 4</td>
<td>GAP + AWN</td>
<td>786</td>
<td>'20+'18</td>
<td>512</td>
<td>5</td>
<td>0.9120</td>
<td>0.9334</td>
</tr>
<tr>
<td>b-ENet 4</td>
<td>GAP</td>
<td>786</td>
<td>'20+'18</td>
<td>512</td>
<td>5</td>
<td>0.9250</td>
<td>0.9454</td>
</tr>
<tr>
<td>c-ENet 4</td>
<td>M</td>
<td>999</td>
<td>'20+'18+'19+Upsampled</td>
<td>768</td>
<td>1</td>
<td>0.9320</td>
<td>0.9326</td>
</tr>
</tbody>
</table>
<p>Here, M = <strong>[GAP + GeM+AWN + GMP]</strong></p>
<p>Next, based on the <strong>CV</strong> score, we took a simple weighted average of them. </p>
<pre><code>ENet 4 = (a-ENet 4*1 + b-ENet 4*2 + c-ENet 4*2)/5
</code></pre>
<p>The resultant prediction graph is as follows:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1984321%2Fb70062f56a279740e23d527f38f31831%2F4.png?generation=1597816112525326&alt=media" alt=""></p>
<h1>E-Net 5</h1>
<table>
<thead>
<tr>
<th>Model</th>
<th>Top</th>
<th>Seed</th>
<th>Dat</th>
<th>Img</th>
<th>Fold</th>
<th>CV</th>
<th>LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>a-ENet 5</td>
<td>GAP + AWN</td>
<td>101</td>
<td>'20+'18</td>
<td>384</td>
<td>5</td>
<td>0.908</td>
<td>0.9442</td>
</tr>
<tr>
<td>b-ENet 5</td>
<td>M</td>
<td>786</td>
<td>'20+'18</td>
<td>512</td>
<td>3</td>
<td>0.884</td>
<td>0.9491</td>
</tr>
</tbody>
</table>
<p>Here, M = <strong>[GAP + GMP + AWN]</strong></p>
<p>Next, based on the <strong>CV</strong> score, we took a simple weighted average of them. </p>
<pre><code>ENet 5 = (a-ENet 5*2 + b-ENet 5*1)/3
</code></pre>
<p>The resultant prediction graph is as follows:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1984321%2F5a468bba3f008360fa58b416f70019e3%2F5.png?generation=1597816460348906&alt=media" alt=""></p>
<h1>E-Net 6</h1>
<p>Basically <code>E-Net 6</code> was our first modeling approach. So, initially, we kept the validation set the same (seed <code>42</code>) and experimented in the following way.</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>Top</th>
<th>Seed</th>
<th>Data</th>
<th>Img</th>
<th>Fold</th>
<th>CV</th>
<th>LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>a-ENet 6</td>
<td>GAP</td>
<td>42</td>
<td>'20 + '18</td>
<td>384</td>
<td>5</td>
<td>0.904</td>
<td>0.9454</td>
</tr>
<tr>
<td>b-ENet 6</td>
<td>GAP + AWN</td>
<td>42</td>
<td>'20+'18</td>
<td>512</td>
<td>5</td>
<td>0.912</td>
<td>0.9422</td>
</tr>
<tr>
<td>c-ENet 6</td>
<td>AWN</td>
<td>42</td>
<td>'20+'18</td>
<td>512</td>
<td>5</td>
<td>0.918</td>
<td>0.9431</td>
</tr>
<tr>
<td>d-ENet 6</td>
<td>GeM</td>
<td>42</td>
<td>'20+'18</td>
<td>512</td>
<td>5</td>
<td>0.905</td>
<td>0.9405</td>
</tr>
<tr>
<td>e-ENet 6</td>
<td>GAAM</td>
<td>42</td>
<td>'20+'18</td>
<td>512</td>
<td>5</td>
<td>0.925</td>
<td>0.9458</td>
</tr>
</tbody>
</table>
<p>Next, as the above submissions came from the same validation split, we further tried to find the best weights that would maximize the <strong>OOF</strong> validation score. </p>
<table>
<thead>
<tr>
<th>-</th>
<th>SimpleAvg</th>
<th>PowerAvg</th>
<th>RankAvg</th>
<th>BayesianOpt</th>
<th>L-BFGS-B</th>
</tr>
</thead>
<tbody>
<tr>
<td>CV</td>
<td>0.9319</td>
<td>0.9312</td>
<td>0.9321</td>
<td>0.9322</td>
<td>0.9320</td>
</tr>
<tr>
<td>LB</td>
<td>0.9487</td>
<td>0.9488</td>
<td>0.9483</td>
<td>0.9490</td>
<td>0.9491</td>
</tr>
</tbody>
</table>
<p>As we found that <code>BayesianOpt</code> gave the maximum CV, we later chose it for the final blending. I've published a notebook regarding this, showing a comparison between the <strong>Bayesian Optimization</strong> and <strong>L-BFGS-B</strong> methods: <a href="https://www.kaggle.com/ipythonx/optimizing-metrics-out-of-fold-weights-ensemble" target="_blank">Optimizing Metrics: Out-of-Fold Weights Ensemble</a></p>
<p>The resultant prediction graph is as follows:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1984321%2Faf77c044bad954675dae053b8419ec80%2F6.png?generation=1597821688482478&alt=media" alt=""></p>
<h1>E-Net 7</h1>
<table>
<thead>
<tr>
<th>Model</th>
<th>Top</th>
<th>Seed</th>
<th>Data</th>
<th>Img</th>
<th>Fold</th>
<th>CV</th>
<th>LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>a-ENet 7</td>
<td>GAP</td>
<td>101</td>
<td>'20+'18</td>
<td>256</td>
<td>5</td>
<td>0.910</td>
<td>0.9307</td>
</tr>
<tr>
<td>b-ENet 7</td>
<td>GAP</td>
<td>42</td>
<td>'20+'18</td>
<td>256</td>
<td>5</td>
<td>0.910</td>
<td>0.9312</td>
</tr>
<tr>
<td>c-ENet 7</td>
<td>GAP</td>
<td>42</td>
<td>'20+'18</td>
<td>384</td>
<td>4</td>
<td>0.923</td>
<td>0.9389</td>
</tr>
<tr>
<td>d-ENet 7</td>
<td>M</td>
<td>2020</td>
<td>'20 + '18 + '19 + Upsampled</td>
<td>768</td>
<td>1</td>
<td>0.928</td>
<td>0.9479</td>
</tr>
<tr>
<td>e-ENet 7</td>
<td>M</td>
<td>1221</td>
<td>'20 + '18 + '19 + Upsampled</td>
<td>512</td>
<td>5</td>
<td>0.922</td>
<td>0.9442</td>
</tr>
</tbody>
</table>
<p>Here, M = <strong>[GAP+GeM+GMP+AWN]</strong></p>
<p>Next, based on the <strong>CV</strong> score, we took a simple weighted average of them. </p>
<pre><code>ENet 7 = (a-ENet 7*1 + b-ENet 7*1 + c-ENet 7*2 + d-ENet 7*2 + e-ENet 7*2)/8
</code></pre>
<p>The resultant prediction graph is as follows:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1984321%2Fb428333f2381ff3dba527e031e401985%2F7.png?generation=1597817972968820&alt=media" alt=""></p>
<h1>Whole Data Training, 2020 SIIM Comp.</h1>
<p>Maybe it's not fully ideal, but we did four experiments, inspired by <a href="https://www.kaggle.com/agentauers" target="_blank">@agentauers</a>'s awesome <a href="https://www.kaggle.com/agentauers/incredible-tpus-finetune-effnetb0-b6-at-once" target="_blank">kernel</a>. We didn't keep any validation set; this was done 2-3 weeks before the competition ended. </p>
<table>
<thead>
<tr>
<th>Model</th>
<th>Top</th>
<th>Seed</th>
<th>Data</th>
<th>Img</th>
<th>LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>ENet 3+4+5</td>
<td>GAP</td>
<td>101</td>
<td>'20</td>
<td>256</td>
<td>0.9384</td>
</tr>
<tr>
<td>ENet 3+4+5</td>
<td>GAP</td>
<td>42</td>
<td>'20</td>
<td>384</td>
<td>0.9470</td>
</tr>
<tr>
<td>ENet 3+4+5</td>
<td>GAP</td>
<td>786</td>
<td>'20</td>
<td>512</td>
<td>0.9438</td>
</tr>
<tr>
<td>ENet 3+4+5</td>
<td>GAP</td>
<td>202</td>
<td>'20</td>
<td>768</td>
<td>0.9385</td>
</tr>
</tbody>
</table>
<p>Next, we took a simple average of these four, which leads to <code>LB</code>: <strong>0.948</strong>.</p>
<h1>Final Blends</h1>
<p>We couldn't give enough time to the meta features. So we used a public meta submission from <a href="https://www.kaggle.com/datafan07/eda-modelling-of-the-external-data-inc-ensemble" target="_blank">here</a> and gave a weight of <code>0.1</code> to the meta features in the final blending.</p>
<p>The final blending took all the submissions of <code>EfficientNets B2 to 7</code>, the meta features, and the models from the whole-2020-data training. After generating a single submission from <code>E-Net B2 to 7</code> plus the meta features (weight <code>0.1</code>), we further did a rank ensemble with the whole-data submission. This leads to Public LB: <strong>0.9554</strong> and Private LB: <strong>0.9383</strong>.</p>
<hr>
<h2>Generalized Mean Pooling (GeM)</h2>
<pre><code>import tensorflow as tf

class GeneralizedMeanPooling2D(tf.keras.layers.Layer):
    def __init__(self, p=3, epsilon=1e-6, name=None, **kwargs):
        # pass `name` as a keyword so it is not mistaken for `trainable`
        super(GeneralizedMeanPooling2D, self).__init__(name=name, **kwargs)
        self.init_p = p
        self.epsilon = epsilon

    def build(self, input_shape):
        if isinstance(input_shape, list) or len(input_shape) != 4:
            raise ValueError('`GeM` pooling layer only allows 1 input with 4 dimensions (b, h, w, c)')
        self.build_shape = input_shape
        # learnable exponent p of the generalized mean
        self.p = self.add_weight(
            name='p',
            shape=[1, ],
            initializer=tf.keras.initializers.Constant(value=self.init_p),
            regularizer=None,
            trainable=True,
            dtype=tf.float32
        )
        self.built = True

    def call(self, inputs):
        input_shape = inputs.get_shape()
        if isinstance(inputs, list) or len(input_shape) != 4:
            raise ValueError('`GeM` pooling layer only allows 1 input with 4 dimensions (b, h, w, c)')
        # GeM: (mean(|x|^p) + eps)^(1/p) over the spatial dimensions
        return (tf.reduce_mean(tf.abs(inputs**self.p),
                               axis=[1, 2], keepdims=False) + self.epsilon)**(1.0 / self.p)
</code></pre>
<h3>Attention Weighted Network (AWN)</h3>
<p>From <a href="https://www.kaggle.com/c/human-protein-atlas-image-classification/discussion/77269#454482" target="_blank">here</a>. Thanks to <a href="https://www.kaggle.com/wowfattie" target="_blank">@wowfattie</a> </p>
<pre><code>from tensorflow.keras import backend as K
from tensorflow.keras import initializers
from tensorflow.keras.layers import InputSpec, Layer

class AttentionWeightedAverage2D(Layer):
    def __init__(self, **kwargs):
        self.init = initializers.get('uniform')
        super(AttentionWeightedAverage2D, self).__init__(**kwargs)

    def build(self, input_shape):
        self.input_spec = [InputSpec(ndim=4)]
        assert len(input_shape) == 4
        self.W = self.add_weight(shape=(input_shape[3], 1),
                                 name='{}_W'.format(self.name),
                                 initializer=self.init)
        self._trainable_weights = [self.W]
        super(AttentionWeightedAverage2D, self).build(input_shape)

    def call(self, x):
        # per-location attention logits
        logits = K.dot(x, self.W)
        x_shape = K.shape(x)
        logits = K.reshape(logits, (x_shape[0], x_shape[1], x_shape[2]))
        # softmax over the two spatial dimensions
        ai = K.exp(logits - K.max(logits, axis=[1, 2], keepdims=True))
        att_weights = ai / (K.sum(ai, axis=[1, 2], keepdims=True) + K.epsilon())
        # attention-weighted sum of the feature maps
        weighted_input = x * K.expand_dims(att_weights)
        result = K.sum(weighted_input, axis=[1, 2])
        return result

    def get_output_shape_for(self, input_shape):
        return self.compute_output_shape(input_shape)

    def compute_output_shape(self, input_shape):
        output_len = input_shape[3]
        return (input_shape[0], output_len)
</code></pre>
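<p>To make the <strong>M = [GAP + GeM + AWN + GMP]</strong> heads in the tables above concrete, here is an illustrative sketch of how these layers can be combined on top of an EfficientNet backbone (a simplified reconstruction assuming the <code>efficientnet.tfkeras</code> package, not our exact code):</p>
<pre><code>import efficientnet.tfkeras as efn
import tensorflow as tf
from tensorflow.keras import layers as L

def build_model(img_size=512):
    inp = L.Input(shape=(img_size, img_size, 3))
    base = efn.EfficientNetB5(include_top=False, weights='imagenet')
    feat = base(inp)                                   # (b, h, w, c) feature maps
    gap = L.GlobalAveragePooling2D()(feat)
    gmp = L.GlobalMaxPooling2D()(feat)
    gem = GeneralizedMeanPooling2D(name='gem')(feat)
    awn = AttentionWeightedAverage2D()(feat)
    x = L.Concatenate()([gap, gmp, gem, awn])          # the "M" head
    x = L.Dropout(0.3)(x)
    out = L.Dense(1, activation='sigmoid')(x)
    return tf.keras.Model(inp, out)
</code></pre>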
<h3>Global Average Attention Mechanism</h3>
<p>The main idea of this mechanism is from <a href="https://www.kaggle.com/kmader" target="_blank">K Scott Mader</a>, currently working at Apple as an ML engineer. You can find the vanilla implementation of the mechanism in his work (kernels); here is one <a href="https://www.kaggle.com/hiramcho/melanoma-efficientnetb6-with-attention-mechanism" target="_blank">public kernel</a> using it. <a href="https://www.kaggle.com/datafan07" target="_blank">@datafan07</a> also used it in his pipeline, which is great.</p>
<hr>
<p>Lastly, we'd like to mention some work that we've published for this competition; we hope it may be helpful for future readers.</p>
<ul>
<li><a href="https://www.kaggle.com/ipythonx/tf-keras-melanoma-classification-starter-tabnet" target="_blank">TF.Keras: Melanoma Classification Starter, TabNet</a><ul>
<li><a href="https://www.kaggle.com/ipythonx/optimizing-metrics-out-of-fold-weights-ensemble" target="_blank">Optimizing Metrics: Out-of-Fold Weights Ensemble</a></li>
<li><a href="https://www.kaggle.com/ipythonx/training-cv-melanoma-starter-ghostnet-tta" target="_blank">PyTorch: [Training CV] Melanoma Starter. GhostNet + TTA</a></li>
<li><a href="https://www.kaggle.com/ipythonx/tresnet-hp-gpu-dedicated-net-grad-accumulation-tta" target="_blank">PyTorch: TResNet:HP-GPU Dedicated Net+Grad-Accumulation+TTA</a></li></ul></li>
</ul>
<p>Yes, you're right. -)</p> |
SIIM-ISIC Melanoma Classification | 27th overview - I Survived the massive shakedown and be in the top 1% | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>You are right<br>
Kaggle tpu time limit is 3 - hours per training , 30 hours per week (but it is really fast!)<br>
If I had to train 5 folds of high-resolution images, then I needed to train 0,1,2 fold and after that, I train 3,4 fold seperately and merge [0,1,2] and [3,4] fold outputs into one output</p>
<p>since it was sort of tedious jobs, for most of training, I used my local environment</p>
<p>I have detailed it further.</p>
<p>Thank you so much</p>
<p><a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> thank you so much. i have detailed it further</p>
<p>Thank you so much!</p>
<ol>
<li>I have trained all 5 folds models</li>
<li>I utilized Chris's notebook. It is really well made and logical with stratified Tfrecords. </li>
<li>with TPU, I could train models faster.</li>
</ol>
<p>Thank you for your sincere congratulation! </p>
<ol>
<li><p>Sizes 256, 384, 512, 768 didn't take too much time with the usage of Kaggle TPU and Colab,<br>
but 1024-size training was somewhat time-consuming work;<br>
still, we had 2~3 months.</p></li>
<li><p>I tried meta info + CNN dual-input training, but it showed no sign of improving the Public LB.<br>
With meta info added to the CNN, my CV could reach above 0.96 just on ISIC 2020 data,<br>
but it didn't work for the <strong>Public LB</strong>, so I abandoned it (that was my mistake).<br>
From an adversarial analysis, we can infer that the training and test datasets are somewhat different,<br>
so I guessed that using meta info could lead my model to overfit to just the training data.</p></li>
</ol>
<p>Since I had been so afraid of a shakedown, I tried to make an <strong>extremely low variance ensemble, and it turned out to be a not-so-bad approach.</strong></p>
<p><strong>With EfficientNet B0 ~ B7 ImageNet + B0 ~ B7 noisy-student,</strong><br>
<strong>sizes 256, 384, 512, 768, 1024, it amounts to 95 MODELS in total!</strong></p>
<p>I ensembled all!</p>
<p>For making the networks from B0-B7 at sizes 256~1024, I used Chris's stratified notebook and left the network as Chris had made it<br>
(EfficientNet -> GAP -> dense(1), label smoothing = 0.05).<br>
All used the same network (from B0-B7, size 256 to 1024),<br>
and all 2019 external data and coarse dropout were applied.</p>
<p>As I had noticed that many strong competitors reported a CV around 0.95,<br>
I guessed that if I could make my CV converge to around 0.95, and if my CV score was > 0.95, then I could supposedly do well in this competition.</p>
<p><strong>This is how I ensembled 95 deep learning models</strong></p>
<p><strong>NET BASED ENSEMBLE(EFFICIENT NET 0 ~ EFFICIENT NET 7)</strong><br>
MODEL 0 : EffcientNet B0 from 256 to 1024, noisy + imagenet CV 0.941 PB 0.9441 PV 0.9350<br>
MODEL 1 : EffcientNet B1 from 256 to 1024, noisy + imagenet CV 0.943 PB 0.9491 PV 0.9355<br>
MODEL 2 : EffcientNet B2 from 256 to 1024, noisy + imagenet CV 0.946 PB 0.9516 PV 0.9406<br>
MODEL 3 : EffcientNet B3 from 256 to 1024, noisy + imagenet CV 0.950 PB 0.9515 PV 0.9411<br>
MODEL 4 : EffcientNet B4 from 256 to 1024, noisy + imagenet CV 0.953 PB 0.9543 PV 0.9422<br>
MODEL 5 : EffcientNet B5 from 256 to 1024, noisy + imagenet CV 0.953 PB 0.9540 PV 0.9451<br>
MODEL 6 : EffcientNet B6 from 256 to 1024, noisy + imagenet CV 0.953 PB 0.9536 PV 0.9472<br>
MODEL 7 : EffcientNet B7 from 256 to 1024, noisy + imagenet CV 0.951 PB 0.9549 PV 0.9434<br>
<strong>-> It was CV 0.9551, PUBLIC LB: 0.9555, PRIVATE LB: 0.9461</strong></p>
<p><strong>SIZE BASED ENSEMBLE(256 to 1024)</strong><br>
MODEL 8 : ALL 256 SIZE, EffcientNet 0 ~ 7 CV : 0.939<br>
MODEL 9 : ALL 384 SIZE, EffcientNet 0 ~ 7 CV : 0.953<br>
MODEL 10 : ALL 512 SIZE, EffcientNet 0 ~ 7 CV : 0.954<br>
MODEL 11 : ALL 768 SIZE, EffcientNet 0 ~ 7 CV : 0.949<br>
MODEL 12 : ALL 1024 SIZE, EffcientNet 0 ~ 7 CV : 0.938</p>
<p>And I adopted a power ensemble (squaring, i.e. prediction^2). That's because AUC boils down to drawing a line between malignant ones and benign ones;<br>
if my model is robust, then the resultant power ensemble should not be much different from the original one, as AUC intrinsically depends only on the ordering of the predictions.</p>
<p><strong>SIZE BASED ENSEMBLE (with square 2, prediction^2)</strong><br>
MODEL 13 : ALL 256 SIZE, EffcientNet 0 ~ 7 CV : 0.938<br>
MODEL 14 : ALL 384 SIZE, EffcientNet 0 ~ 7 CV : 0.947<br>
MODEL 15 : ALL 512 SIZE, EffcientNet 0 ~ 7 CV : 0.952<br>
MODEL 16 : ALL 768 SIZE, EffcientNet 0 ~ 7 CV : 0.952<br>
MODEL 17 : ALL 1024 SIZE, EffcientNet 0 ~ 7 CV : 0.947</p>
<p><strong>FINAL MODEL = NET BASED MODEL + SIZE BASED MODEL + Square 2 SIZE BASED MODEL</strong></p>
<p>FINAL MODEL: <strong>TOTAL SIMPLE AVERAGE (MODEL 0 TO 17), CV -> 0.9538</strong><br>
To find appropriate weights, I adopted a differential evolution strategy.</p>
<h1><img src="https://i.imgur.com/xRrTIGE.png" alt="https://i.imgur.com/xRrTIGE.png"></h1>
<p>FINAL MODEL : OPTIMIZE WEIGHTS WITH SCIPY DIFFERENTIAL_EVOLUTION MODULE -> <strong>CV 0.9562</strong></p>
<p>0.8 * FINAL_MODEL + 0.2 * Tabular Meta (FROM XGBOOST) -> <strong>current (Private) LB 0.9446</strong><br>
(If I hadn't added the meta info to the final model, I could have nearly reached the gold medal.)<br>
But on the Public LB, adding the meta info improved the score by as much as 0.002, so I guessed that the tabular data regularizes overfitting to the CV (CV dropped when meta was added); that's why I couldn't abandon it.</p>
<p>By implementing things above described, I found out that my cv is proportional to LB</p>
<p><strong>When I ensembled everything, the resultant CV was around 0.9562, the public LB was 0.9580, and the Private LB is now 0.9446.</strong></p>
<p>I am a Kaggle novice and I am satisfied with the current result!</p>
<p><strong>Some of you might be curious about scipy.optimize.differential_evolution model</strong></p>
<p>This is not a special thing.</p>
<p><a href="https://machinelearningmastery.com/weighted-average-ensemble-for-deep-learning-neural-networks/?fbclid=IwAR3GYgj0Fu4Mp3RhTeyacb99H2QyP5uuWJizR7ei6DOOC-NbERKQIGyBB4o" target="_blank">https://machinelearningmastery.com/weighted-average-ensemble-for-deep-learning-neural-networks/?fbclid=IwAR3GYgj0Fu4Mp3RhTeyacb99H2QyP5uuWJizR7ei6DOOC-NbERKQIGyBB4o</a></p>
<p>I found the method here</p>
<p>Actually, to optimize the AUC ensemble, I tested the Bayesian method, the Powell optimization method (CV 0.9614, Public LB 0.9493; it turned out to be not good), and a lot of other things, but the CV from the differential evolution method was much more proportional to the Public LB. I guess this is because it produced optimized weights that sum to 1, so the blend stays well scaled as a probability, while the Bayesian and Powell results were not scaled like probabilities.</p>
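<p>For reference, a minimal sketch of this kind of weight search on OOF predictions (here <code>oof</code> is an assumed (n_samples, n_models) array and <code>y</code> the labels; bounds and settings are just illustrative):</p>
<pre><code>import numpy as np
from scipy.optimize import differential_evolution
from sklearn.metrics import roc_auc_score

# oof: (n_samples, n_models) out-of-fold predictions, y: true labels
def neg_auc(weights):
    w = np.array(weights)
    w = w / (w.sum() + 1e-12)          # keep the blend scaled like a probability
    return -roc_auc_score(y, oof @ w)  # minimize the negative AUC

bounds = [(0.0, 1.0)] * oof.shape[1]
result = differential_evolution(neg_auc, bounds, seed=42, maxiter=200, tol=1e-7)
best_weights = result.x / result.x.sum()
</code></pre>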
<p><strong>What worked:</strong><br>
trained model solely with 2019 data<br>
Coarse dropout<br>
Label smoothing<br>
malignant upsampling<br>
noisy student</p>
<p><strong>What didn't work:</strong><br>
Focal loss<br>
Dual input with META + CNN(but it dramatically boosted CV)<br>
Custom head<br>
Meta tabular info(but it improved Public LB)<br>
KNN feature bridging from train to test, from test to train</p>
<p>Thank you for your congratulation!</p>
<p>Let me give you an answer about what you asked</p>
<ol>
<li>the evaluation metric is AUC. In this competition, you don't need to classify accurately 0 : benign, 1 : malignant. it's somewhat different from measuring accuracy</li>
</ol>
<p>You just need to estimate how likely each lesion is to be benign or malignant;<br>
this is expressed as “a probability”,<br>
for example, an output for a benign one: 0.00001, and for a malignant one: 0.9495…</p>
<p>i had tested a lot of networks and found out that my CV auc values were clustered around 95.x</p>
<p>it means my model performance is 95.x</p>
<p>in this situation, generalizing output probability around 0.95x can be a good way for maintaining model stability. if you don't apply label smoothing, your model's output can be overconfident about malignant ones or benign ones. </p>
<p>that's why i applied label smoothing</p>
<p>2.For upsampling, watch this!<br>
<a href="https://www.kaggle.com/cdeotte/tfrecord-experiments-upsample-and-coarse-dropout" target="_blank">https://www.kaggle.com/cdeotte/tfrecord-experiments-upsample-and-coarse-dropout</a></p> |
SIIM-ISIC Melanoma Classification | 27th place solution - 2nd with context prize (edited) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This is a summery of "Yuval and nosound" model (27th place)</p>
<p>You can find more in:</p>
<ul>
<li><a href="https://github.com/yuval6957/SIIM-Transformer" target="_blank">our github repository</a> </li>
<li><a href="https://github.com/yuval6957/SIIM-Transformer/blob/master/SIIM%20paper.pdf" target="_blank">Paper</a></li>
<li><a href="https://github.com/yuval6957/SIIM-Transformer/blob/master/SIIM%20presentation.pdf" target="_blank">Presentation</a></li>
<li><a href="https://www.youtube.com/watch?v=qa6zimKQcno&t=15s" target="_blank">video</a></li>
</ul>
<h2>1. Summary</h2>
<p>Our solution is based on two step model + Ensemble:</p>
<ol>
<li>Base model for feature extraction per image</li>
<li>Transformer model - combining all the output features from a patient and predict per image. </li>
<li>The 2nd stage also included some post-processing and ensembling.</li>
</ol>
<h3>Base Model:</h3>
<p>As base model we used a models from the <a href="https://arxiv.org/abs/1905.11946" target="_blank">EfficientNet</a> family[6]:</p>
<ul>
<li>EfficientNet b3 </li>
<li>EfficientNet b4 </li>
<li>EfficientNet b5 </li>
<li>EfficientNet b6 </li>
<li>EfficientNet b7 </li>
</ul>
<p>All models were pre-trained on imagenet using noisy student algorithm. The models and weights are from <a href="https://github.com/rwightman/gen-efficientnet-pytorch" target="_blank">gen-efficientnet-pytorch</a>[2].</p>
<p>The input to these models is the image and meta-data such as age, sex, and anatomic site. The meta-data is processed by a small fully connected network and its output is concatenated to the input of the classification layer of the original EfficientNet network. This vector goes through a linear layer with an output size of 256 to create the “features”, and then, after an activation layer, to the final linear classification layer. </p>
<p>This network has 8 outputs and tries to classify the diagnosis label (there are actually more than 8 possible diagnoses, but some don’t have enough examples). </p>
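<p>A simplified sketch of this base architecture (shown with <code>timm</code> for brevity; our code uses gen-efficientnet-pytorch, and the metadata branch sizes here are illustrative):</p>
<pre><code>import timm
import torch
import torch.nn as nn

class MelanomaNet(nn.Module):
    """EfficientNet backbone + small metadata MLP -> 256-d features -> 8-way diagnosis head."""
    def __init__(self, n_meta_features, n_classes=8, feat_dim=256):
        super().__init__()
        self.backbone = timm.create_model('tf_efficientnet_b5_ns',
                                          pretrained=True, num_classes=0)  # pooled features
        self.meta = nn.Sequential(nn.Linear(n_meta_features, 128), nn.ReLU(),
                                  nn.Linear(128, 128), nn.ReLU())
        self.to_features = nn.Linear(self.backbone.num_features + 128, feat_dim)
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, image, meta):
        x = torch.cat([self.backbone(image), self.meta(meta)], dim=1)
        features = self.to_features(x)          # these 256-d vectors feed the transformer
        return self.classifier(torch.relu(features)), features
</code></pre>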
<h3>Transformer Models:</h3>
<p>The input to the Transformer models are a stack of features from all images belonging to the same patient + the metadata for these images.</p>
<p>The transformer is a stack of 4 transformer encoder layers with self attention as described in <a href="https://arxiv.org/abs/1706.03762" target="_blank">Attention Is All You Need</a> [1]. Each transformer encoder layer uses 4 self attention heads. </p>
<p>The output of the transformer is a N*C, where N is the number of input feature vectors (the number of images) and C is the number of classes (8 in this case). Hence, the transformer predicts the class of each feature vector simultaneously, using the information from all other feature vectors.</p>
<p>The metadata is added using a “transformer style”, i.e. each parameter is transformed to a vector (size 256) using an embedding matrix and then added to the feature vector. for continuous values (like age) the embedding matrix was replaced by a 2 layer fully connected network. </p>
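<p>A rough sketch of such a patient-level transformer head (dimensions follow the description above; the metadata embedding details, such as the number of anatomic sites, are simplified):</p>
<pre><code>import torch
import torch.nn as nn

class PatientTransformer(nn.Module):
    """Self-attention over all feature vectors of one patient; predicts a class per image."""
    def __init__(self, feat_dim=256, n_classes=8, n_layers=4, n_heads=4, n_sites=10):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=n_heads,
                                           dim_feedforward=4 * feat_dim)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.sex_emb = nn.Embedding(3, feat_dim)            # "transformer style" metadata:
        self.site_emb = nn.Embedding(n_sites, feat_dim)     # embed and add to the features
        self.age_emb = nn.Sequential(nn.Linear(1, feat_dim), nn.ReLU(),
                                     nn.Linear(feat_dim, feat_dim))
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, feats, sex, site, age, pad_mask=None):
        # feats: (N_images, B, feat_dim) in the sequence-first layout used by nn.Transformer
        x = feats + self.sex_emb(sex) + self.site_emb(site) + self.age_emb(age.unsqueeze(-1))
        x = self.encoder(x, src_key_padding_mask=pad_mask)
        return self.head(x)                                  # (N_images, B, n_classes)
</code></pre>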
<h3>Ensembling the output of all networks:</h3>
<p>The data was split into 3 folds, 3 times (using 3 different seeds for splitting), and the inference was done using 16 (or 12) TTAs, giving 144 predictions from each model. These were averaged, and then the outputs of all the models were averaged. All averaging was done on the outputs before softmax, and therefore it is actually geometric averaging.</p>
<h3>Training</h3>
<p>The heavy lifting was the training and inference of the base models. This was done on a server with 2 GPUs – Tesla V100, Titan RTX that worked in parallel on different tasks. Training one fold of one model took 3H (B3) to 11H (B7, B6 large images) on the Tesla and 20% more on the Titan, this sums up to about one day for B3 and 3.5 days for B7. Inferencing all the training data for 12 TTA’s + test data for 16 TTA’s to get the features for the next level took another 4h - 14h. The transformer training took less than 1H for the full model (3 folds*3seed). </p>
<p>The total time it took to train all models and folds is about 2.5 weeks on one Tesla (~1.5 weeks using the 2 GPUs).</p>
<h2>2. Models and features</h2>
<h3>Base models</h3>
<p>As base models we tried various types of models (pre-trained on Imagenet):</p>
<ul>
<li>Densenets – 121, 161, 169, 201</li>
<li>EfficientNet B0, B3 , B4, B5, B6, B7 with and with noisy student pre-training and with normal pretraining </li>
<li>ResNet 101</li>
<li>Xception </li>
</ul>
<p>At the end we used EfficientNet as it was best when judging accuracy/time</p>
<p>The noisy student version performed better than the normal one.</p>
<p>We also tried different image sizes and ended up using a <code>400*600</code> images in most cases, except one were we used <code>600*900</code> with the B6 network.</p>
<h4>Metadata</h4>
<p>As was described above the metadata was processed by a small fully connected nn and its output was concatenated to the output of the EfficientNet network (after removing the original top layer).</p>
<p>We also tried a network without metadata, and used the metadata as targets, i.e. this network predicted the diagnosis, but also the sex, age and anatomic site. The final predictions (including transformer) when using this approach weren't as good as the metadata as input approach. </p>
<h4>Model’s output</h4>
<p>Although the task at hand is to predict melanoma yes/no, it is better to let the network choose the diagnosis among a few possible options. This lets the network “understand” more about the image. The final prediction is the value of the Melanoma output after doing softmax on the output vector. </p>
<h4>Features</h4>
<p>The final layer in this model is a linear layer with 256 inputs and 8 outputs, we use the input to this layer as features. </p>
<h4>Augmentation</h4>
<p>The following augmentations were used for training and inference:</p>
<ul>
<li>Random resize + crop</li>
<li>Random rotation</li>
<li>Random flip</li>
<li>Random color jitter (brightness, contrast, saturation, hue)</li>
<li><a href="https://arxiv.org/abs/1708.04552" target="_blank">Cutout</a>[3] - erasing a small rectangle in the image</li>
<li>Hair - randomly adding “hair like” lines to the image</li>
<li>Metadata augmentation - adding random noise to the metadata, as was done in the <a href="https://www.sciencedirect.com/science/article/pii/S2215016120300832?via%3Dihub" target="_blank">1st place solution in ISIC 2019 challenge</a> [7].</li>
</ul>
<h4>TTA</h4>
<p>For inference, each image was augmented differently 16 times and the final prediction was the average. These augmentations were also used for extracting 16 different feature vectors per test image.</p>
<p>The same was done to extract 12 feature vectors for the train images (12 and not 16 because of time limits).</p>
<h3>Transformer Network</h3>
<p>The input to the transformer network is the features from all the images from one patient.</p>
<p>The inspiration for this kind of model came from a previous competition in which we participated, the RSNA<a href="https://www.kaggle.com/c/rsna-intracranial-hemorrhage-detection" target="_blank"> Intracranial Hemorrhage Detection</a>. In that competition, all the top solutions (including ours) used a two-stage network approach (although none of them were transformers).</p>
<p>Using a transformer seems appropriate in this case because transformers are built to seek relationships between embedding (feature) vectors in it’s input.</p>
<p>As this is not a full seq2seq task, we only used encoder layers. The transformer is a stack of 4 encoder layers with 4 attention heads in each layer. (we also tested higher numbers of layers and attention heads - no performance improvement).</p>
<h4>metadata</h4>
<p>The metadata was incorporated in the network by adding “metadata vectors” to the input vectors - each value was transformed to a vector size 256 and added. The discrete values’ transformation was done using a trainable embedding matrix and the continuous values using a small nn.</p>
<h4>output</h4>
<p>The output of this network is a matrix of size N*C, where N is the number of images and C is the number of classes, which means it decides on all the images of the patient at once. </p>
<h4>Limit the input size</h4>
<p>A transformer can be trained on a varying number of feature vectors by using padding. But when the range is very large, from a couple of hundred images for some patients to a handful for others, this may cause some implementation issues (like in calculating the loss). To simplify these issues, we limited N to 24 feature vectors, and for each patient we randomly divided the images into groups of size up to 24. </p>
<p>This might degrade the prediction as the most “similar” images might accidentally fall into different groups, but as we use TTA, this issue is almost solved. </p>
<h4>Augmentation</h4>
<p>From the base model we extract a number of feature vectors (12 for train and 16 for test) using different augmentation for the images and metadata. In the training and inference steps of the transformer model we randomly choose one of these vectors.</p>
<p>Another augmentation is the random grouping as stated above.</p>
<h2>3. Training and Inferencing</h2>
<p>The original 2020 competition data is highly unbalanced, there are only 2-3% of positive targets in the train and test data. Although we were able to train the base model using uneven sampling, the best way to get good training was to add the data from ISIC 2019 competition which has a much higher percentage of melanoma images. </p>
<p>We split the training data into 3 folds, keeping all the images from the same patient in the same fold and making sure each fold has a similar number of patients with melanoma. The ISIC 2019 data was also split evenly between the folds. The same folds were kept for the base and the transformer models. </p>
<p>To get more diversity we had 3 different splits using 3 seeds.</p>
<h3>Preprocessing</h3>
<p>All images were resized to an aspect ratio of 1:1.5, which was the most popular aspect ratio of the images in the original dataset. We prepared 3 image datasets of sizes <code>300*450</code>, <code>400*600</code>, <code>600*900</code>. Most of the models were trained using the <code>400*600</code> dataset, as <code>300*450</code> gave inferior results and the <code>600*900</code> didn’t improve the results enough.</p>
<p>For the metadata we had to set the same terminology for the 2020 and 2019 datasets.</p>
<h3>Loss Function</h3>
<p>The loss function we used was cross entropy. Although the task is to predict only melanoma, we found it is better to predict the diagnosis, which was split into 8 different classes, one of which was melanoma. The final prediction was the value for the melanoma class after a softmax function over all classes. We also tried a binary cross entropy on the melanoma class alone and a combination of the two, but using cross entropy gave the best results.</p>
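<p>Concretely, the prediction step looks like the following sketch (class index and tensor shapes are illustrative):</p>
<pre><code>import torch
import torch.nn.functional as F

MELANOMA_CLASS = 0                       # illustrative index of the melanoma class
logits = torch.randn(16, 8)              # model output: (batch, 8 diagnosis classes)
targets = torch.randint(0, 8, (16,))     # integer diagnosis labels

loss = F.cross_entropy(logits, targets)                       # trained on all 8 classes
melanoma_prob = F.softmax(logits, dim=1)[:, MELANOMA_CLASS]   # value submitted as the prediction
</code></pre>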
<p>The same loss was used for the base model and the transformer, but in the transformer we needed to regularize for the different number of predictions in each batch resulting from the different number of images for each patient. </p>
<p>We also tried using focal loss, which didn’t improve the results, but we left one transformer model trained with focal loss in the ensemble (a model with cross entropy loss gave similar CV and LB).</p>
<h4>Training the transformer model</h4>
<p>The transformer model was trained in two steps. For the first step we used the data from both competitions (2019, 2020). For the 2019 competition we don’t have information about the patient, and each image got a different dummy patient_id, meaning the transformer didn’t learn much from these images. In the 2nd stage we fine-tuned the transformer using only the 2020 competition’s data.</p>
<p>In both steps we used a sampler that oversampled the larger groups.</p>
<h4>Inference</h4>
<p>As stated above, the inference was done using TTA. For the base model we used 12-16 different augmentations and for the transformer model 32.</p>
<h3>Ensembling</h3>
<p>For our final submissions we used 2 ensembles:</p>
<h5>Without Context Submission:</h5>
<ol>
<li>EfficientNet B3 noisy student image size 400*600</li>
<li>EfficientNet B4 noisy student image size 400*600</li>
<li>EfficientNet B5 noisy student image size 400*600</li>
<li>EfficientNet B6 noisy student image size <strong>600*900</strong></li>
<li>EfficientNet B7 noisy student image size 400*600</li>
</ol>
<h5>With Context</h5>
<p>All the “without context” models, plus transformers trained on features from each of them (A–E below refer to those base models):</p>
<ul>
<li>Transformer on features from A.</li>
<li>Transformer on features from B.</li>
<li>Transformer on features from C using focal loss</li>
<li>Transformer on features from D.</li>
<li>Transformer on features from E.</li>
</ul>
<h2>4. Interesting findings</h2>
<h3>CV and LB</h3>
<p>Although the number of images in the competition was very large, the number of different patients wasn’t large enough to give a stable and reliable CV and LB, and the correlation between the two was low. In the end we trusted neither, and we submitted the models we felt were the most robust.</p>
<h3>What didn’t work</h3>
<ol>
<li>Using different sizes of images as suggested in <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/160147" target="_blank">CNN Input Size Explained</a> didn’t show any improvement.</li>
<li>Using Mixup[5] and CutMix[4] augmentation didn’t work</li>
<li>Larger transformers didn’t improve the results. </li>
</ol>
<h2>5. Better “Real World” model</h2>
<p>This model can be much simpler if we don’t ensemble and take only one base model; EfficientNet B5 is probably the best compromise.</p>
<p>In the real world scenario the images from previous years will probably be tagged already. In that case we can use a full transformer with encoder and decoder which will perform a seq2seq operation. </p> |
SIIM-ISIC Melanoma Classification | 302th Solution Writeup - Always trust your CV | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>For me, the biggest win in this competition is I finally know how to use TPU 👌👌</p>
<p>This competition has been very tough for me because I thought I'm stuck at the bottom of the LB and everybody seems to be way ahead of me. But I stick to my gut and trust my CV and in the end it pays off.</p>
<h3>Data preprocessing:</h3>
<p>To prevent leaks, I use MD5 hash to remove all duplicates on both original and external data.</p>
<h3>Data augmentation:</h3>
<ul>
<li>ShiftScaleRotate</li>
<li>Cutout </li>
<li>Flip </li>
<li>BrightnessContrast.</li>
</ul>
<h3>Validation Strategy:</h3>
<p>At first I combined all data and did Triple Stratified KFold, but the CV scores were way too high compared to the public LB. Then I experimented with Triple Stratified KFold on the original training set and noticed that the gap between CV and LB was much smaller (0.917 - 0.919). So I concluded the test data must have a similar distribution to the original training data.<br>
My final strategy: Triple Stratified KFold on original training set, include external data during training of each folds.</p>
<h3>Model (Image size):</h3>
<ul>
<li>B4 (384)</li>
<li>B5 (456)</li>
<li>B6 (512)</li>
</ul>
<h3>Ensemble:</h3>
<p>For each model, take the average of the 3 folds. Then combine all models' predictions using a power average with p = 2.</p>
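<p>A minimal sketch of that power average over several submission files (file names are placeholders):</p>
<pre><code>import numpy as np
import pandas as pd

p = 2
files = ["sub_b4.csv", "sub_b5.csv", "sub_b6.csv"]           # placeholder submission files
preds = np.stack([pd.read_csv(f)["target"].values for f in files])

# Power mean: raise to p, average across models, then take the p-th root.
power_avg = np.mean(preds ** p, axis=0) ** (1.0 / p)

sub = pd.read_csv(files[0])[["image_name"]]
sub["target"] = power_avg
sub.to_csv("submission.csv", index=False)
</code></pre>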
<p>Overall validation scores (Average of 3 models): 0.938 CV, 0.389 Public LB, 0.376 Private LB.<br>
I'm super happy because my CV, public LB and private LB are in line 😁😁.<br>
Training kernel (TPU): <a href="https://www.kaggle.com/quandapro/isic-training-tpu" target="_blank">https://www.kaggle.com/quandapro/isic-training-tpu</a>.</p> |
SIIM-ISIC Melanoma Classification | 3rd place solution overview | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you very much for all the comments! :)</p>
<p>Exactly <a href="https://www.kaggle.com/kozodoi" target="_blank">@kozodoi</a>! Although I had to reduce the "optimal" number of epochs on some occasions to avoid exceeding the limit of Kaggle's TPU (since I was using more data and also hair augmentation, which was quite computationally expensive).</p>
<p>Hi <a href="https://www.kaggle.com/pawankumarsahu" target="_blank">@pawankumarsahu</a> ,<br>
I used k-fold cross-validation to estimate the hyperparameters of the model (hyperparameter tuning) and then used those optimized hyperparameters to fit a model to the whole dataset. This procedure is based on the idea that the more data you use the more likely it is to have a robust model that generalizes well.</p>
<p>Thank you, <a href="https://www.kaggle.com/robikscube" target="_blank">@robikscube</a>! I did not use hair removal in my final solution. I tested it and it worked worse for me than hair augmentation, so I decided to go this other way. The number of epochs was mostly determined based on when the early stopping was occurring in my previous CV results (with some adjustments to avoid exceeding the limit of Kaggle's TPU). Regarding my ensembles, I tried several things, but my final solution is actually a simple average of the 10 models (8 own models + 2 public notebooks). Finally I added metadata to this ensemble with a weighted average. I know I have a lot to learn about ensemble techniques and that I was pretty lucky to get such a good result with such a simple approach.</p>
<p>Code available here: <a href="https://github.com/Masdevallia/3rd-place-kaggle-siim-isic-melanoma-classification" target="_blank">https://github.com/Masdevallia/3rd-place-kaggle-siim-isic-melanoma-classification</a></p>
<p>Hello everybody!</p>
<p>Well… I am speechless. I am quite new at Kaggle and was not expecting such a good result, it has taken me completely by surprise.</p>
<p>First of all, I need to deeply thank the entire Kaggle community. I've learned a lot throughout the entire competition thanks to all the knowledge and insights you have generously shared. Thank you also to the organizers and Kaggle for hosting the competition.</p>
<p>I am away on vacation with a limited internet connection, but I will share my solution as soon as I can.</p>
<p>As a quick summary, my main submission was an ensemble of 8 different models built with various combinations of image sizes (256, 384, 512, 768). Many thanks to <a href="https://www.kaggle.com/vbhargav875" target="_blank">@vbhargav875</a>, whose notebook "EfficientNet-B5_B6_B7 TF-Keras" was an incredible starting point. I used CV to run some experiments, but my final models were obtained with all available data (without validation). I used 2017-2018-2019 + 2020 TFrecords (huge thanks to <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>), hair augmentation (thanks to <a href="https://www.kaggle.com/nroman" target="_blank">@nroman</a> and <a href="https://www.kaggle.com/graf10a" target="_blank">@graf10a</a>), heavy TTA, EfficientNet-B6 models and metadata (thanks to <a href="https://www.kaggle.com/titericz" target="_blank">@titericz</a>). This approach scored: 0.9481 private LB, 0.9596 public LB.</p>
<p>However, I wanted to try to ensemble some public notebooks too, in order to add some diversity. I decided to go ahead with these two amazing notebooks, which introduced some juicy techniques that I didn't have time to test:</p>
<ul>
<li><a href="https://www.kaggle.com/rajnishe/rc-fork-siim-isic-melanoma-384x384" target="_blank">https://www.kaggle.com/rajnishe/rc-fork-siim-isic-melanoma-384x384</a> (thanks to <a href="https://www.kaggle.com/rajnishe" target="_blank">@rajnishe</a>).</li>
<li><a href="https://www.kaggle.com/ajaykumar7778/efficientnet-cv" target="_blank">https://www.kaggle.com/ajaykumar7778/efficientnet-cv</a> (thanks to <a href="https://www.kaggle.com/ajaykumar7778" target="_blank">@ajaykumar7778</a>).</li>
</ul>
<p>This approach scored: 0.9484 private LB, 0.9620 public LB.</p>
<p>Congratulations to all participants! I know that I still have a long way to go, but I am looking forward to continuing to grow alongside this incredible community.</p>
<p>😂😂😂 Thank you!</p>
<p>Yes, exactly! Thank you!</p> |
SIIM-ISIC Melanoma Classification | 40th Place Summary - In Chris and CV we trust | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi Kaggle fam,</p>
<p>Apologies this was a little delayed – was quite surprised at the result and took some time to retrace my own steps, but thought I would share a few findings that might add to what’s been shared already. </p>
<p>Firstly massive shoutout to <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> for his triple-stratified TFrecords which saved a heap of time, and also the interesting techniques he kindly shared which I’ve still yet to try out and fully understand – I, like many other novices, look forward to learning more from you in future competitions!</p>
<p>Like most teams, I realised early on that training different nets on images of different sizes generated varying results in terms of CV across validation folds, so I thought ensembles that maximise local CV might be the way to go. My submissions were simple weighted averages of a bunch of EfficientNets of sizes (B4-B7), mostly initialised with noisy-student weights, trained on image sizes 384, 512 and 768 with TTA (15) across 5 validation folds. For augmentation, I believe the original author of the techniques I used is <a href="https://www.kaggle.com/agentauers" target="_blank">@agentauers</a>, so a huge thank you for sharing them. Previous competition data was also used for increasing the diversity of the ensemble.</p>
<p>Something I found interesting was that with my setup, larger nets + larger images seemed to have generally performed a little better than smaller nets + smaller images (both CV & private LB). Even when ensembling, smaller nets did not help with CV much at all and hence were largely not part of my final submissions. I’m curious to hear if that was the case for anyone else (sorry if it’s been talked about already).</p>
<p>As for metadata, I had a hunch it might be useful so had been including it in some of my submissions – following Chris’s advice of a roughly 90/10 (image/meta) split. Shoutout to <a href="https://www.kaggle.com/titericz" target="_blank">@titericz</a> for his notebook, which worked very well for ensembling. I’d be interested to hear more from others who used metadata and how everyone incorporated it into their submissions.</p>
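<p>A sketch of that roughly 90/10 image/metadata blend (weights and file names below are placeholders, not the exact ones used); since AUC only depends on ranking, a common variant is to blend percentile ranks instead of raw probabilities.</p>
<pre><code>import pandas as pd

image_sub = pd.read_csv("effnet_ensemble.csv")   # placeholder: blended EfficientNet predictions
meta_sub = pd.read_csv("metadata_model.csv")     # placeholder: metadata-only predictions

blend = image_sub[["image_name"]].copy()
blend["target"] = 0.9 * image_sub["target"] + 0.1 * meta_sub["target"]
# Rank-based variant:
# blend["target"] = 0.9 * image_sub["target"].rank(pct=True) + 0.1 * meta_sub["target"].rank(pct=True)
blend.to_csv("submission.csv", index=False)
</code></pre>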
<p>Thank you to the organisers and Kaggle fam for this wonderful competition. Here’s hoping for many more to come.</p>
<p>Subs:</p>
<table>
<thead>
<tr>
<th>Sub</th>
<th>No. of Effnets</th>
<th>External Data</th>
<th>Metadata</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>18</td>
<td>Y</td>
<td>N</td>
<td>0.9521</td>
<td>0.9408</td>
</tr>
<tr>
<td>2</td>
<td>18</td>
<td>Y</td>
<td>Y</td>
<td>0.9535</td>
<td>0.9434</td>
</tr>
<tr>
<td>3</td>
<td>45</td>
<td>Y</td>
<td>Y</td>
<td>0.9447</td>
<td>0.9344</td>
</tr>
<tr>
<td>Unused smaller ensemble</td>
<td>6</td>
<td>Y</td>
<td>Y</td>
<td>0.9544</td>
<td>0.9405</td>
</tr>
</tbody>
</table>
<p>Cheers Chris! Great to hear that wasn't a one-off and you also had something similar going on. </p>
<p>Thanks again for your generosity in sharing - you're an absolute legend. Please keep it up! 🙏</p> |
SIIM-ISIC Melanoma Classification | 85th description - random label smoothing | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks! I hope it works out for you</p>
<p>Cool! I will be interested to see how that works out. The method is a little bit fiddly because it takes the single label smoothing parameter and makes many more. The results depend on the type of distribution (gauss vs uniform) as well as their shape. I suspect that I have not fully leveraged this idea, and I would like to explore what it means for the decision boundary that the network learns, especially wrt adversarial examples.</p>
<p>Also thank you very much for the datasets and kernels that you published! I learned a lot from them.</p>
<p>My approach was to build an ensemble of EfficientNets, I settled on this because it won last year and it did not seem that there were any new approaches performing better. I ran a lot of experiments using <a href="https://www.kaggle.com/cdeottes" target="_blank">@cdeottes</a> notebook with image size 128 and EfficientNet-B6 to find hyperparameters that increased my CV, then I made the assumption that these parameters would also work for larger image sizes and different EfficientNets. </p>
<p>I included the meta data into the classifier by concatenating it with the output of an MLP on the EfficientNet output. I also experimented with different amounts of label smoothing and class weights, finding that a class weight of 5 and label smoothing of 0.05 worked best. I also experimented with assigning random label smoothing to the data as described at the end of this discussion. My final ensemble also included models trained on meta data only and on image embeddings combined with meta data; the ensembling was done using an explicit grid search on the out-of-fold scores for all models.</p>
<p>For ensembling my models the best method I found was to find groups of predictions that were highly correlated (typically different EfficientNets trained on the same image size were highly correlated) and take a non-linear average that increased the out of fold cross validation score for each group. I then performed a grid search for the best weights to ensemble these averaged predictions. I found this approach to be superior to performing an explicit grid search without first averaging.</p>
<p>To give an explicit example of this ensembling. Take all of the models trained on image size NxN, take the geometric mean (and/or median) and check that this has a better cross validation score than the best single model. Repeat this process for all image sizes that have models trained on them, taking either the geometric mean or the best single model in each case. Blend the averaged predictions for all image sizes. This can be applied to any type of grouping where correlations between predictions are found.</p>
<p>The rationale behind this is that highly correlated models can still be included in the final ensemble, instead of ignoring models that took a long time to train.</p>
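<p>A sketch of this two-step ensembling on out-of-fold predictions (helper names are illustrative):</p>
<pre><code>import numpy as np
from scipy.stats import gmean
from sklearn.metrics import roc_auc_score

def group_average(oof_preds, y_true):
    """Geometric mean within one correlated group; keep it only if it beats the best single model."""
    blended = gmean(np.stack(oof_preds), axis=0)
    best_single = max(roc_auc_score(y_true, p) for p in oof_preds)
    return blended if roc_auc_score(y_true, blended) > best_single else None

def grid_search_weights(pred_a, pred_b, y_true, step=0.05):
    """Brute-force blend weight between two group-level predictions."""
    best_w, best_auc = 0.0, 0.0
    for w in np.arange(0.0, 1.0 + 1e-9, step):
        auc = roc_auc_score(y_true, w * pred_a + (1 - w) * pred_b)
        if auc > best_auc:
            best_w, best_auc = w, auc
    return best_w, best_auc
</code></pre>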
<p>The only part of my final submission that is significantly different from what I have seen posted elsewhere is that I trained some models where the amount of label smoothing for each example was drawn randomly from a distribution. So each time the model saw an example, the label was different. As far as I am aware this is the first time this method has been employed.</p>
<p>I trained multiple models with this strategy, experimenting with different distributions (for sampling the label smoothing), and then I ensembled all of these experiments (geometric mean and median both increased CV). This ensemble had a weight of 0.25 in my final submission.</p>
<p>This approach produced the lowest difference between CV, private LB and public LB for a single model that I have trained. The scores for a single model trained in this way, with labels drawn from a gaussian with mean 0.05 and std 0.025, are:</p>
<p>CV (2020 data only) 0.927<br>
private LB 0.9319<br>
public LB 0.9389.</p>
<p>Which can be compared with the original <a href="https://www.kaggle.com/cdeotte/triple-stratified-kfold-with-tfrecords" target="_blank">notebook</a> scores of:</p>
<p>CV (2020 data only) 0.904<br>
private LB 0.9262<br>
public LB 0.9454.</p>
<p>To be completely clear on what this entails I used the following loss:</p>
<pre><code>class sls_loss(tf.keras.losses.Loss):
    def call(self, y_true, y_pred):
        # Draw a fresh label-smoothing amount for every batch (abs of a normal with std 0.025).
        ls = tf.abs(tf.random.normal([1], 0, 0.025, tf.float32))
        y_true = y_true * (1.0 - ls) + 0.5 * ls
        bce = tf.keras.losses.BinaryCrossentropy(label_smoothing=0,
                                                 reduction=tf.keras.losses.Reduction.NONE)
        per_example_loss = bce(y_true, y_pred)
        # GLOBAL_BATCH_SIZE is defined elsewhere in the training script.
        return tf.nn.compute_average_loss(per_example_loss,
                                          global_batch_size=GLOBAL_BATCH_SIZE)
</code></pre>
<p>Note this is slightly different to what I described as it is per batch.</p> |
SIIM-ISIC Melanoma Classification | From 66x online tutorials to winning the First Provisional Silver Medal | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi everyone. After completing 66x online tutorials and going through shake-ups in the initial competitions I am able to secure my first silver medal. Special thanks to <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> for his amazing notebook. I blended models trained on images of all sizes :). Special thanks to amazing mentors <a href="https://www.kaggle.com/dmitrylarko" target="_blank">@dmitrylarko</a> and 4x grandmaster <a href="https://www.kaggle.com/abhishek" target="_blank">@abhishek</a>. Good Luck to everyone.</p>
SIIM-ISIC Melanoma Classification | Justification and 4th place Solution Overview | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Almost the same interpretation<br>
<em>y</em> does not have to be a single model, but it is not recommended if the R2 score between <em>y</em> and <em>z</em> is high (R2> 0.7).<br>
On the other hand, choosing <em>y</em> with a higher R2 may increase the public score, but at the risk of overfitting the public score.<br>
If you find a model where R2 between <em>y</em> and <em>z</em> is between 0.5 and 0.7 and can contribute to the public score, your private score will be guaranteed too.</p>
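<p>Checking that diversity takes only a few lines (file names are placeholders):</p>
<pre><code>import pandas as pd
from sklearn.metrics import r2_score

y = pd.read_csv("submission_y.csv")["target"]      # candidate model
z = pd.read_csv("submission_best.csv")["target"]   # current best blend

r2 = r2_score(z, y)
# Rule of thumb above: 0.5 < R2 < 0.7 -> diverse enough to help without chasing the public LB.
print(f"R2 between y and z: {r2:.3f}")
</code></pre>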
<p>You made me notice a lot. Thank you very much.</p>
<p>Each parameter I used is:</p>
<p>DataSet:2017-2018-2019 + 2020 TFrecords (thanks to <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>)<br>
Image_size:384*384<br>
Model:efficientnet-b6<br>
Loss:binary_crossentropy<br>
Optimizer:Adam<br>
Seed:42<br>
TTA:15<br>
Model building:<br>
With reference to <a href="https://www.kaggle.com/ajaykumar7778/efficientnet-cv" target="_blank">https://www.kaggle.com/ajaykumar7778/efficientnet-cv</a> (thanks to <a href="https://www.kaggle.com/ajaykumar7778" target="_blank">@ajaykumar7778</a>)</p>
<pre><code>x = base(keras.layers.Input(shape=(384,384,3)))
x = keras.layers.GlobalAveragePooling2D()(x)
x = keras.layers.Dropout(0.3)(x)
x = keras.layers.Dense(1024)(x)
x = keras.layers.Dropout(0.2)(x)
x = keras.layers.Dense(512)(x)
x = keras.layers.Dropout(0.4)(x)
x = keras.layers.Dense(256)(x)
x = keras.layers.Dropout(0.3)(x)
x = keras.layers.Dense(128)(x)
x = keras.layers.Dense(1)(x)
x = keras.layers.Activation('sigmoid', dtype='float32')(x)
</code></pre>
<p>By blending with <a href="https://www.kaggle.com/paklau9/minmax-highest-public-lb-9619" target="_blank">https://www.kaggle.com/paklau9/minmax-highest-public-lb-9619</a>, this scored 0.9596 public LB and 0.9476 private LB (thanks to @Pak Lau9).<br>
I believe one of the reasons is that the R2 score between the ensembled models is less than 0.6, so the correlation is not large and the generalization performance is high.</p>
<p>First of all, thanks to the Kaggle community and the organizers.<br>
I learned a lot through the competition.</p>
<p>Some people suspect me of being a bot, so I will provide my 4th place solution.<br>
Certainly, I can't hide my surprise at this result.<br>
But I'm a little disappointed by the skepticism about the competition itself.<br>
I hope the contest is held in good faith.</p>
<p>Of course, I could not submit many times, because the participation period was not long.<br>
In addition, the model was selected based on the CV results.<br>
CV was done with multiple resolutions, and 384*384 was the best.<br>
Also, when ensembling with the best-scoring model,<br>
I was able to raise the score efficiently by blending at the ratio that minimized the difference from the best-scoring result.<br>
Using this method, we were able to exclude the candidates with very large differences.<br>
However, this method also carries the risk of overfitting the public data.<br>
I think there is also a risk in ignoring CV results and simply making more submissions to raise the score.<br>
In fact, the 4th private score I got wasn't the best model in public.</p>
<p>And I think the main factors to win are:<br>
I found that some of the predictions from my CV-selected models gave a high positive rate, whereas many public kernels predicted a low positive rate.<br>
I thought this indicated false negatives in many published models.</p>
<p>I finally submitted the following 3 models.<br>
(1) Best score<br>
(2) Ensemble of the best-scoring submission and the model chosen by CV<br>
(3) Due to the risk of overfitting, an ensemble of the model chosen by CV with a submission other than the best-scoring one </p>
<p>As a result, model (3) gave the 4th place result, and (1) was the worst.<br>
From this result, the following can be said.<br>
Like many public kernel models, the best-scoring model also has low sensitivity and is more likely to produce false negatives.<br>
The highest-scoring model fits the public data too closely.</p>
<p>I think the following are valid:<br>
Trust CV, as many others have said.<br>
Also observe trends by comparing against the results of many models that have a low correlation with the results chosen by CV.</p>
<p>Of course, it is very lucky that the model (3) got the 4th place result.</p>
<p>I am aware of my lack of skills. However, I've found the fun of kaggle.<br>
I will continue to deepen my learning through other contests.</p>
<p>Best Regards</p>
<p>Thank you everyone for celebrating me.</p>
<p>To Helen-san<br>
Thank you for your comment.<br>
I'm sorry to explain a very simple method.<br>
To use this method, you first need to find one prediction that contributed to the best-scoring ensemble.<br>
That prediction is y. Then run the following code, where x is the prediction of the model you want to add to the ensemble.<br>
This is how models that already helped the best-scoring submission get refined and contribute to further score improvements.</p>
<pre><code>import pandas as pd
import numpy as np
from statistics import mean

x = pd.read_csv("submission_x.csv")["target"]
y = pd.read_csv("submission_y.csv")["target"]
z = pd.read_csv("submission_best.csv")["target"]

# x: Prediction you want to ensemble
# y: Prediction that contributed to the best prediction
# z: Best prediction

df = pd.read_csv("sample_submission.csv")

best = len(x)
for a in np.arange(0, 1, 0.1):
    b = 1 - a
    c = (a*x + b*y)
    d = abs(z - c) / z
    res = mean(d)
    if res < best:
        best = res
        beat_a = a
        beat_b = b
        best_c = c

if best > 0.5:
    print("Do not recommend ensemble")
else:
    best_pre = z*0.9 + best_c*0.1
    df["target"] = best_pre
    df.to_csv("submission.csv", index=False)
    print(best)
    print(f'a: {beat_a}')
    print(f'b: {beat_b}')
</code></pre>
SIIM-ISIC Melanoma Classification | Missed Opportunity - 4th place - Solo Gold | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a>, nope I would not have chosen using any reasonable intuition because it is my own conservative blend of blends which is usually risky.</p>
<p>Thanks <a href="https://www.kaggle.com/mightyrains" target="_blank">@mightyrains</a>.</p>
<p>I forgot to say that yesterday. </p>
<p><a href="https://www.kaggle.com/robikscube" target="_blank">@robikscube</a> congrats to you and your team for another strong finish.</p>
<p><a href="https://www.kaggle.com/authman" target="_blank">@authman</a> congrats to you and your team on the silver medal.</p>
<p>Congrats on another strong finish and thanks for your kind words <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>.</p>
<p>Your contributions here and always is well apreciated. I joined the Flower competition to try out TPU and I learnt a lot from you there as well as here on that subject. Your triple stratified dataset made life easier for so many of us. So thanks so much.</p>
<p><a href="https://www.kaggle.com/authman" target="_blank">@authman</a>, <br>
<a href="https://www.kaggle.com/robikscube" target="_blank">@robikscube</a>, <br>
Thanks so much for your kind words. <br>
I am thankful for experiencing this competition because I learnt so much about the use of TPU and a few TF tricks.</p>
<p>Congrats <a href="https://www.kaggle.com/serigne" target="_blank">@serigne</a> on a strong finish. </p>
<p>My current position (public LB = 0.9571, private LB = 0.9424) is a lower-scoring one on the public LB, but I went with trusting my CV as well. The best-in-private-LB submission I did not choose was one of my ensembles that does not have a CV score. With a systematic approach there is no way I would have chosen it. I am only disappointed because I have been waiting so long for the elusive solo gold.</p>
<p><a href="https://www.kaggle.com/roydatascience" target="_blank">@roydatascience</a>, it is an ensemble without CV hence there is no way I would have chosen it. I went with trusting my CV and chose a simple averaged ensemble and the other 2 based on CV ensemble.</p>
<p><a href="https://www.kaggle.com/mpsampat" target="_blank">@mpsampat</a>, thanks for your kind words. Congrats on your silver medal.</p>
<p><a href="https://www.kaggle.com/serigne" target="_blank">@serigne</a>, thanks.</p>
<p>I had a similar result to yours in the Jigsaw Multilingual competition as well, hence my feeling a little sad this time. I hope for a solo gold soon.</p>
<p><a href="https://www.kaggle.com/roydatascience" target="_blank">@roydatascience</a>, my 2nd best submission on the private LB which is my current LB position is a submissions mean of 27 models.</p>
<p>Congratulations to the winners.</p>
<p><strong>Scores:-</strong><br>
<strong>public LB=0.9618</strong><br>
<strong>private LB=0.9482</strong> </p>
<p>I am Gutted.</p>
<p><a href="https://www.kaggle.com/sheriytm/melanoma-clf-xgb-missed-opportunity-4th-place" target="_blank">https://www.kaggle.com/sheriytm/melanoma-clf-xgb-missed-opportunity-4th-place</a></p>
<p><strong>UPDATE</strong></p>
<p>In a late submission, the mean ensemble of the 27 models from the above ensemble script (which gave my private LB score) scores:<br>
<strong>public LB=0.9618</strong><br>
<strong>private LB=0.9487</strong> </p> |
SIIM-ISIC Melanoma Classification | My 23rd Place Aprroach | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks Dimitre!</p>
<p>Yeah, they were much more diverse than the rest but gave decent CVs and LBs, so I included them too. I built this model in the last 3-4 days, so I couldn't test it out on a larger scale. But I just checked the mix of these 7, and that cluster alone gave me 0.9404 in private. If I had more time and TPU power, maybe I could get something better from it…</p>
<p>No, I didn't include meta in this final heatmap. The 7 are from the last approach I shared in this topic; the first models were mainly from earlier approaches similar to my first public notebook.</p>
<p>I clustered similar approaches in different groups, eliminated some, then took them to this final map.</p>
<p>For example, this is one of the earlier clusters, mainly from my public notebook results; you can see the metadata easily:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4616296%2F4f8fd7e5ef5670771e6eb1faf9b53caf%2FScreenshot_2020-08-20%20corr%20-%20Jupyter%20Notebook.png?generation=1597888150914491&alt=media" alt=""></p>
<p>Thanks. I'll try to publish it when my TPU timer resets…</p>
<p>Just an update;</p>
<p>Since my TPU limit reset I shared light version of my approach here.</p>
<p><a href="https://www.kaggle.com/datafan07/final-melanoma-model-18th-place-solution-light-v" target="_blank">https://www.kaggle.com/datafan07/final-melanoma-model-18th-place-solution-light-v</a></p>
<p>You are welcome! <a href="https://www.kaggle.com/fiyeroleung" target="_blank">@fiyeroleung</a> </p>
<p>Thank you Chris!, my job would be much more harder without your perfect baselines and data!</p>
<p>Thank you again Ben, it was great adaptation. I really enjoyed while playing with the custom callback!</p>
<p><strong>Update;</strong></p>
<p>Since my TPU limit reset I shared light version of my approach here:</p>
<p><a href="https://www.kaggle.com/datafan07/final-melanoma-model-18th-place-solution-light-v" target="_blank">https://www.kaggle.com/datafan07/final-melanoma-model-18th-place-solution-light-v</a></p>
<p>First of all thank you Kaggle and rest of the people involved in this competition. It was my first serious competition ever and learned a lot on the way. I wanted to write what worked for me (at least what I think worked), I learnt a lot from this community so I wanted to share them back!</p>
<p>My highest-scoring submission on the public LB was based on 2020 data only. It was doing well with an EfficientNet + meta blend, but I noticed it wasn't doing great on unseen data. I thought this was due to some unseen test set, which Chris and I pointed out in some public discussions.</p>
<p>I was getting unstable results for some cases in the test set; there were big differences between predictions trained only on 2020 data and only on 2019 data. I had a gut feeling that this might be caused by medical differences in the stage of the melanoma, or by different scanning devices, but that's not my area of expertise at all. To overcome that I decided to use external data, thinking that adding more examples would make my model better at predicting these weird cases. Thanks to Chris, I used the external TFRecords and malignant upsampling on my existing model.</p>
<p>Well… that increased my CV a lot, but that wasn't the case with the LB. I decided to add the external data one by one and in the end kept the 2019 part out of my model and only used 2018. This helped a little, but there was a big problem: <strong>overfitting</strong>. I tried some augmentations and regularization, but it wasn't enough in my opinion. I was really interested in Coarse Dropout from Chris, but it was hurting my training speed at the dropout levels I wanted. Then I found <a href="https://www.kaggle.com/benboren" target="_blank">@benboren</a>'s great sprinkles method and fine-tuned it for my model:</p>
<pre><code>def make_mask(num_holes, side_length, rows, cols, num_channels):
    """Builds the mask for all sprinkles."""
    row_range = tf.tile(tf.range(rows)[..., tf.newaxis], [1, num_holes])
    col_range = tf.tile(tf.range(cols)[..., tf.newaxis], [1, num_holes])
    r_idx = tf.random.uniform([num_holes], minval=0, maxval=rows-1,
                              dtype=tf.int32)
    c_idx = tf.random.uniform([num_holes], minval=0, maxval=cols-1,
                              dtype=tf.int32)
    r1 = tf.clip_by_value(r_idx - side_length // 2, 0, rows)
    r2 = tf.clip_by_value(r_idx + side_length // 2, 0, rows)
    c1 = tf.clip_by_value(c_idx - side_length // 2, 0, cols)
    c2 = tf.clip_by_value(c_idx + side_length // 2, 0, cols)
    row_mask = (row_range > r1) & (row_range < r2)
    col_mask = (col_range > c1) & (col_range < c2)
    # Combine masks into one layer and duplicate over channels.
    mask = row_mask[:, tf.newaxis] & col_mask
    mask = tf.reduce_any(mask, axis=-1)
    mask = mask[..., tf.newaxis]
    mask = tf.tile(mask, [1, 1, num_channels])
    return mask


def sprinkles(image, cfg=CFG):
    num_holes = cfg['num_holes']
    side_length = cfg['side_length']
    mode = cfg['sprinkles_mode']
    PROBABILITY = cfg['sprinkles_prob']
    RandProb = tf.cast(tf.random.uniform([], 0, 1) < PROBABILITY, tf.int32)
    if (RandProb == 0) | (num_holes == 0): return image
    img_shape = tf.shape(image)
    if mode == 'normal':
        rejected = tf.zeros_like(image)
    elif mode == 'salt_pepper':
        num_holes = num_holes // 2
        rejected_high = tf.ones_like(image)
        rejected_low = tf.zeros_like(image)
    elif mode == 'gaussian':
        rejected = tf.random.normal(img_shape, dtype=tf.float32)
    else:
        raise ValueError(f'Unknown mode "{mode}" given.')
    rows = img_shape[0]
    cols = img_shape[1]
    num_channels = img_shape[-1]
    if mode == 'salt_pepper':
        mask1 = make_mask(num_holes, side_length, rows, cols, num_channels)
        mask2 = make_mask(num_holes, side_length, rows, cols, num_channels)
        filtered_image = tf.where(mask1, rejected_high, image)
        filtered_image = tf.where(mask2, rejected_low, filtered_image)
    else:
        mask = make_mask(num_holes, side_length, rows, cols, num_channels)
        filtered_image = tf.where(mask, rejected, image)
    return filtered_image
</code></pre>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4616296%2F9f27283c72e3b99022acdff72643cce0%2F__results___13_0.png?generation=1597716024628082&alt=media" alt=""></p>
<p>After adding this to my regular augmentations, I noticed it decreased overfitting, but it also reduced my model's learning speed. To fix that I wanted to add attention to my model, thinking it might speed up training and also help the CV, so I ended up adding Scott Mader's attention model on top of my EfficientNet (attention explanation directly from his notebook <a href="https://www.kaggle.com/kmader/attention-on-pretrained-vgg16-for-bone-age#Show-Attention" target="_blank">here</a>). I played with it, edited it here and there, and meanwhile added another input to my model with metadata from the TFRecords, ending up with this:</p>
<pre><code>def get_model():
    with strategy.scope():
        inp1 = tf.keras.layers.Input(shape = (cfg['net_size'],cfg['net_size'], 3), name = 'inp1')
        inp2 = tf.keras.layers.Input(shape = (9), name = 'inp2')
        efnetb3 = efn.EfficientNetB3(weights = 'noisy-student', include_top = False)
        pt_depth = efnetb3.get_output_shape_at(0)[-1]
        pt_features = efnetb3(inp1)
        bn_features = tf.keras.layers.BatchNormalization()(pt_features)
        attn_layer = tf.keras.layers.Conv2D(64, kernel_size = (1, 1), padding = "same", activation = "relu")(tf.keras.layers.Dropout(0.5)(bn_features))
        attn_layer = tf.keras.layers.Conv2D(16, kernel_size = (1, 1), padding = "same", activation = "relu")(attn_layer)
        attn_layer = tf.keras.layers.Conv2D(8, kernel_size = (1,1), padding = 'same', activation = 'relu')(attn_layer)
        attn_layer = tf.keras.layers.Conv2D(1, kernel_size = (1, 1), padding = "valid", activation = "sigmoid")(attn_layer)
        up_c2_w = np.ones((1, 1, 1, pt_depth))
        up_c2 = tf.keras.layers.Conv2D(pt_depth, kernel_size = (1, 1), padding = "same", activation = "linear", use_bias = False, weights = [up_c2_w])
        up_c2.trainable = False
        attn_layer = up_c2(attn_layer)
        mask_features = tf.keras.layers.multiply([attn_layer, bn_features])
        gap_features = tf.keras.layers.GlobalAveragePooling2D()(mask_features)
        gap_mask = tf.keras.layers.GlobalAveragePooling2D()(attn_layer)
        # To account for missing values from the attention model
        gap = tf.keras.layers.Lambda(lambda x: x[0] / x[1], name = "RescaleGAP")([gap_features, gap_mask])
        gap_dr = tf.keras.layers.Dropout(0.5)(gap)
        dr_steps = tf.keras.layers.Dropout(0.25)(tf.keras.layers.Dense(128, activation = "relu")(gap_dr))
        x1 = tf.keras.layers.Dense(16)(inp2)
        x1 = tf.keras.layers.Activation('relu')(x1)
        x1 = tf.keras.layers.Dropout(0.2)(x1)
        x1 = tf.keras.layers.BatchNormalization()(x1)
        x1 = tf.keras.layers.Dense(8)(inp2)
        x1 = tf.keras.layers.Activation('relu')(x1)
        x1 = tf.keras.layers.Dropout(0.2)(x1)
        x1 = tf.keras.layers.BatchNormalization()(x1)
        concat = tf.keras.layers.concatenate([dr_steps, x1])
        concat = tf.keras.layers.Dense(512, activation = 'relu')(concat)
        concat = tf.keras.layers.BatchNormalization()(concat)
        concat = tf.keras.layers.Dropout(0.15)(concat)
        output = tf.keras.layers.Dense(1, activation = 'sigmoid', dtype='float32')(concat)
        model = tf.keras.models.Model(inputs = [inp1, inp2], outputs = [output])
        opt = tf.keras.optimizers.Adam(learning_rate = LR)
        model.compile(
            optimizer = opt,
            loss = [tfa.losses.SigmoidFocalCrossEntropy(gamma = 2.0, alpha = 0.90)],
            metrics = [tf.keras.metrics.BinaryAccuracy(), tf.keras.metrics.AUC()]
        )
    return model
</code></pre>
<p>I got predictions for different image sizes and different efficientnets and different data ratios (external etc.).</p>
<p>At the end looking for ensembling I choose pretty basic way of averaging different models. I got CV's and LB's for many models and ensembled them basically depending on correlations between them, the heatmap looked like this, sorry for the mess :)</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4616296%2F85de95f55a0d3fb77a21b5b81e30aae1%2FScreenshot_2020-08-18%20corr%20-%20Jupyter%20Notebook.png?generation=1597716422214100&alt=media" alt=""></p>
<p>Simply put, I ensembled the high-CV predictions with lower correlations between them, and got the final results.</p>
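<p>A sketch of how such a correlation map over submissions can be built before picking the less correlated ones to average (file names are placeholders):</p>
<pre><code>import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

files = {"b3_384": "sub_b3_384.csv", "b5_512": "sub_b5_512.csv", "meta": "sub_meta.csv"}
preds = pd.DataFrame({name: pd.read_csv(path)["target"] for name, path in files.items()})

corr = preds.corr(method="spearman")      # rank correlation suits a ranking metric like AUC
sns.heatmap(corr, annot=True, cmap="RdBu_r")
plt.show()

# Keep the high-CV predictions whose pairwise correlation is low, then simply average them.
selected = ["b3_384", "meta"]
ensemble = preds[selected].mean(axis=1)
</code></pre>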
<p>This was my first proper competition and I wasn't expecting to write something like this, so it might not look like your usual writeups. I learnt a lot in this competition and wanted to share some of it back! Maybe I'll release the notebook in a more proper way later, but that's it for now. Thank you all!</p>
<p><a href="https://www.kaggle.com/epocxy" target="_blank">@epocxy</a> details are here in my notebook, you can also see the original work there</p>
<p><a href="https://www.kaggle.com/datafan07/final-melanoma-model-16th-place-solution-light-v" target="_blank">https://www.kaggle.com/datafan07/final-melanoma-model-16th-place-solution-light-v</a></p> |
SIIM-ISIC Melanoma Classification | Our best private solution (0.9498) [not selected] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>As most participants, we have not chosen our best private solution. But we would like to share it with you. Maybe it will be helpful for someone!</p>
<p><a href="https://www.kaggle.com/vladimirsydor/solution-private-score-0-9498-not-selected" target="_blank">https://www.kaggle.com/vladimirsydor/solution-private-score-0-9498-not-selected</a></p>
<p>Thanks to Kaggle team and all participants and good Kaggling! </p> |
SIIM-ISIC Melanoma Classification | [2nd place] Solution Overview | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I used all the training data for ISIC 2019 in addition to the data provided for this competition. </p>
<p>I have a <code>Trainer</code> class that uses this for gradient accumulation:</p>
<pre><code>self.optimizer.zero_grad()
for i in range(int(self.gradient_accumulation)):
    output = self.model(accum_batch[i])
    loss = self.criterion(output, accum_labels[i])
    (loss / self.gradient_accumulation).backward()
self.optimizer.step()
</code></pre>
<p>I had gotten a grant from HOSTKEY for 4x 1080 Ti, so to maximize usage of all my GPUs, I also ran additional experiments on those GPUs. </p>
<p>I mainly used it to test different backbones. I used small image size 256x256 and just experimented with different backbones to see which one performed best. This doesn't guarantee that I will pick the best backbone for the final configuration, but it led me to conclude that EfficientNet was the best family of backbones, without much benefit going to B7/B8. </p>
<p>Usually I don't have the luxury of having additional GPUs for extra experimentation. I didn't use the 1080s for the final models because I wanted to leverage maximum batch size. </p>
<p>Thank you! I did not use label smoothing. </p>
<p>Thank you! I was using metadata in the beginning, and during the last week I decided to train some models without metadata so I could be eligible for without context prizes. Turns out these were my best models in the end. </p>
<p>I used embedding layers to map the metadata to 32-dimensional vectors, which I then concatenated to the image feature vector before inputting the concatenated vector into the final linear classification head. </p>
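<p>A hedged PyTorch sketch of that metadata path (the category counts and the 2304-d image feature size are illustrative, not taken from the actual code):</p>
<pre><code>import torch
import torch.nn as nn

class MetaHead(nn.Module):
    """Embed sex / age bucket / site to 32-d each and concatenate with the image features."""
    def __init__(self, img_dim=2304, n_classes=3):
        super().__init__()
        self.sex_emb = nn.Embedding(3, 32)     # e.g. male / female / missing (illustrative)
        self.age_emb = nn.Embedding(10, 32)    # age bucket (illustrative)
        self.site_emb = nn.Embedding(8, 32)    # anatomic site (illustrative)
        self.fc = nn.Linear(img_dim + 3 * 32, n_classes)

    def forward(self, img_feat, sex, age, site):
        x = torch.cat([img_feat, self.sex_emb(sex), self.age_emb(age), self.site_emb(site)], dim=1)
        return self.fc(x)

head = MetaHead()
logits = head(torch.randn(2, 2304), torch.tensor([0, 1]), torch.tensor([4, 7]), torch.tensor([2, 5]))
</code></pre>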
<p>Thank you! Now that I've started residency, it's definitely tough to keep up with Kaggle. I got lucky with this competition…</p>
<p>Thanks! I just used the melanoma probability as the predicted score for submission.</p>
<p>For my EfficientNet-B6, each fold took about 2 hours. For B7, about 4 hours. </p>
<p>Thank you, Chris! I really appreciate all your discussion posts and notebooks, in this competition and others. </p>
<p>I updated my post to detail the pseudolabeling more. I used soft pseudolabels (predicted probabilities) and a modified cross entropy loss. </p>
<p>Haha, provisional since I might get DQ'd! I don't think I broke any rules, but after the deepfake fiasco, you can never be sure…</p>
<p>Code available here: <a href="https://github.com/i-pan/kaggle-melanoma" target="_blank">https://github.com/i-pan/kaggle-melanoma</a></p>
<p>Wow. Did not expect this. But super excited to finally get my solo gold medal and complete the journey to Grandmaster. I'm grateful to benefit from the shakeup this time after suffering in PANDA. Thank you to all the organizers, and congratulations to all the other winners and participants. </p>
<p>This year, I accomplished 2 milestones: 1) I graduated medical school and became a doctor; and 2) I became a Kaggle Competitions Grandmaster. I actually just started my intern year as a doctor working 70-80 hours/week so did not have too much time to dedicate to this competition. The key was having a pipeline that allowed me to quickly iterate on experiments so I could start them in the morning, go to work, then analyze the results when I came back. Long story short, to all those who are starting out, keep going. Put in the time and effort. Compete, read winners' solutions, and learn, over and over again. </p>
<p>I have to thank my teammates in previous competitions <a href="https://www.kaggle.com/felipekitamura" target="_blank">@felipekitamura</a>, <a href="https://www.kaggle.com/alexandrecc" target="_blank">@alexandrecc</a>, and <a href="https://www.kaggle.com/jamesphoward" target="_blank">@jamesphoward</a>. And I also have to thank the Kaggle community as I have learned so much from participating.</p>
<p>Also huge thanks to <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> for all the insights he shared. I used his triple stratified dataset to split my data for CV. </p>
<p><strong>Environment</strong><br>
PyTorch 1.6 with automatic mixed precision<br>
4x NVIDIA GTX 1080 Ti 11GB provided by HOSTKEY (<a href="http://www.hostkey.com" target="_blank">www.hostkey.com</a>) as part of their Kaggle competitions grant<br>
4x Quadro RTX 6000 24GB </p>
<p><strong>Batch Size</strong><br>
In the beginning, I was training models on GTX 1080 Ti GPUs provided by HOSTKEY. I wasn’t able to get multi-GPU training to work, so I was just experimenting with batch sizes of 8-16 on a single 11GB 1080 Ti. My CV scores were not that high, usually a 5-fold average of around 0.92, with similar results on LB. The TPU kernels were doing so well (on public LB, at least), that I thought it was in part due to increased batch size. At that point, I switched over to my 4x Quadro RTX6000 24GB setup to leverage more GPU memory and multi-GPU training. I aimed for a batch size of 64 while maximizing image resolution for a particular backbone (and if that would not fit, I settled for BS 32 with gradient accumulation 2).</p>
<p><strong>Backbones & Image Resolution</strong><br>
I tried several backbones in the EfficientNet, SE-ResNeXt, and ResNeSt families. I also tried BiT-ResNet (recently released by Google). EfficientNet performed better on CV so I decided to stick with EfficientNets for the remainder of the competition. I used the 1080 Ti GPUs to experiment with different backbones, all other hyperparameters held constant. Some of you asked whether this necessarily transfers when I switch over to a larger GPU and increase the batch size/change the image resolution. I did briefly compare backbones on the larger GPU and the results seemed to be consistent (EfficientNet > all others). </p>
<p>I experimented with backbones of different sizes from pruned EfficientNet-B3 to EfficientNet-B8, using the implementation from <a href="https://github.com/rwightman/pytorch-image-models" target="_blank">https://github.com/rwightman/pytorch-image-models</a>. For smaller backbones, I tried larger resolutions (up to 1024 x 1024) and for EfficientNet-B8 I went down to 384 x 384. The image size for each backbone was selected so that I could use batch size of 64 (16/GPU) during training. I found that smaller EfficientNets at higher resolutions were not as good. The best models for me were EfficientNet-B6 (initialized with noisy student weights) and EfficientNet-B7, at image resolutions of 512 x 512 and 640 x 640, respectively, so I only used these moving forward.</p>
<p><strong>Base Model</strong><br>
My base model was your standard convolutional neural network backbone with linear classification head. I used generalized mean pooling with trainable parameter p (not sure if this was any better than average or max, as I just stuck with GeM from the beginning). I also used multisample dropout following <a href="https://www.kaggle.com/qishenha" target="_blank">@qishenha</a>’s implementation in one of this winning NLP solutions. </p>
<p><strong>From 2 Class to 3 Class</strong><br>
I felt that more granular classes would result in better feature representations that could help improve performance. The majority of melanomas are dark (exception being rare amelanotic melanomas), so differentiating them from benign nevi is probably the most challenging task. The 2019 data all had auxiliary diagnoses, including nevi, whereas a large fraction of the 2020 data was unknown. </p>
<p>I trained a model on 2019 data only using the diagnosis as the target. Then, I applied this model to the 2020 data. My main focus was on labeling the unknowns as nevus or not nevus, as I know they are not melanoma. To find the threshold at which I would label an image as nevus, I used the 5th percentile of the 2019 model’s predictions on the 2020 data which had a known label of nevus.</p>
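<p>A tiny sketch of that threshold choice (the prediction arrays here are random placeholders):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
# Placeholder for the 2019-only model's nevus probabilities on 2020 images *known* to be nevi.
known_nevus_preds = rng.uniform(0.3, 1.0, size=1000)
threshold = np.percentile(known_nevus_preds, 5)    # 5th percentile, as described above

# Placeholder predictions on the unknown-diagnosis 2020 images.
unknown_preds = rng.uniform(0.0, 1.0, size=5000)
relabel_as_nevus = unknown_preds > threshold       # these get the "benign nevus" label
</code></pre>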
<p>Now, all of the 2019 and 2020 data has a label of other, benign nevus, or melanoma, and I trained my model on these 3 classes using vanilla cross-entropy loss. I did not try label smoothing.</p>
<p><strong>Upsampling</strong><br>
There was a lot of discussion over whether or not to upsample malignant images or not. I did upsample malignant images for 2020 data. Because I used 2019 data and the percentage of melanomas in that dataset was much higher, I wanted to make sure that the 2019 melanomas did not overwhelm the 2020 melanomas. To that end, I upsampled the 2020 melanomas 7 times so that there was about an equal number of melanomas from both datasets.</p>
<p><strong>Training</strong><br>
I used AdamW optimizer and cosine annealing with warm restarts scheduler, initial learning rate 3.0E-4. I did 3 snapshots, 2 epochs each for EfficientNet-B6 and 3 epochs each for EfficientNet-B7. I noticed that it did not take long for the models to start overfitting. I found that this gave better results than using one cycle, so I stuck with it. Out of the 3 snapshots, I just took the one that did the best on that validation fold. Every experiment I ran, I did 5-fold CV using Chris Deotte’s triple stratified data splits. Single validation folds were not stable for me, so in order to really understand my model performance and the effects of my adjustments I had to look at 5-fold CV average. I only validated on 2020 data. </p>
<p>In the beginning, I was using metadata by using embeddings for age, sex, and location. Each embedding was mapped to a 32-D vector and concatenated to the final feature vector before input into the linear classification layer. I did not want to spend too much time tuning this because I was afraid I would overfit to the distribution of the training set. I just used mean/mode imputation for missing values. </p>
<p>It wasn’t until the last several days of the competition that I decided to train models without metadata, so I could be eligible for the without context special prize. It turns out that these models were actually my highest scoring private LB solutions!</p>
<p><strong>Augmentations</strong><br>
I knew that augmentation would be important given the small percentage of melanomas in the 2020 data. I used the RandAugment strategy, implemented here: <a href="https://github.com/ildoonet/pytorch-randaugment" target="_blank">https://github.com/ildoonet/pytorch-randaugment</a>. I used N=3 augmentations with magnitude M/30 where M was sampled from a Poisson distribution with mean 12 for extra stochasticity. For those unfamiliar with RandAugment, M is essentially the “hardness” of the augmentation (angle for rotation, % zoom, gamma for contrast adjustment, etc.). For augmentations like flips, M is not relevant. I tried other augmentations such as mixup, cutmix, and grid mask, but those did not help.</p>
<p>I also used square cropping during training and inference. During training, a square was randomly cropped from the image if it was rectangular (otherwise, the entire image was used), where the length of the square image was the size of the shortest side (i.e., 768x512 would be cropped to 512x512). During inference, I spaced out 10 square crops as TTA and took the average as the final prediction (again, unless the image was already square - then no TTA was applied). I found that this gave me better results than rectangular crops or using the whole image. </p>
<p><strong>Pseudolabeling</strong><br>
Pseudolabeling was key to my solution. Given the limited number of 2020 melanomas, I felt that pseudolabeling would help increase performance. 2019 melanomas were helpful but still different from 2020 melanomas. I took my 5-fold EfficientNet-B6 model, trained without metadata, and obtained soft pseudolabels (3 classes) for the test set. When combining the test data with the training data (2019+2020), I upsampled images with melanoma prediction > 0.5 7 times (same factor as I did for 2020 training data). I used <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a>'s implementation (<a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/173733" target="_blank">https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/173733</a>) of cross entropy in PyTorch (without label smoothing) so I could use soft pseudolabels. </p>
<p><strong>CV vs LB</strong><br>
I knew early on that it would be easy to fit public LB, given the small number of melanomas that would be in the public test. At the same time, the CV for my different experiments was much tighter than the LB, so I was nervous to fully trust CV as there may have been differences between training and test data. With that in mind, I favored solutions that had reasonably high CV and LB. </p>
<p>There is a fair amount of luck that goes into picking the right solution, but you should be able to justify to yourself why you are picking a certain solution over another (going by CV score, LB score, some combination of CV/LB, or some hypotheses about the private test set that would favor one solution over another). </p>
<p>My 2nd place solution was an ensemble of 3 5-fold models:</p>
<ul>
<li>EfficientNet-B6, 512x512, BS64, no metadata (CV 0.9336 / public 0.9534)</li>
<li>EfficientNet-B7, 640x640, BS32, gradient accumulation 2, no metadata (CV 0.9389 / public 0.9525)</li>
<li>Model 1, trained on combined training and pseudolabeled test data (CV 0.9438 / public 0.9493)</li>
</ul>
<p>Note that the CV score does not account for the 5-fold blend effect. </p>
<p>My highest scoring private LB solution was actually model 3 alone, which I did not select.</p>
<p>I trained other models with metadata, but the best private LB score was 0.945 (public LB 0.959), with similar CV. </p> |
SIIM-ISIC Melanoma Classification | [Summary] Public 11 Private 23 and Congrats to a new Master | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to our new master <a href="https://www.kaggle.com/meliao" target="_blank">@meliao</a> !🙌🙌🙌</p>
<p>First of all, congratulations to everyone who insists on CV and finally got its deserved place ! 😁</p>
<p>A lot of thanks to my teammates <a href="https://www.kaggle.com/jielu0728" target="_blank">@jielu0728</a> <a href="https://www.kaggle.com/meliao" target="_blank">@meliao</a> <a href="https://www.kaggle.com/dandingclam" target="_blank">@dandingclam</a> <a href="https://www.kaggle.com/captain0602" target="_blank">@captain0602</a> 😏 Even though we are a little frustrated with the final result because we had hoped to win a gold, we were among the least shaken of the top teams. </p>
<p>Our 0.9451 solution ranked 6th among all our 300+ submissions, and our 2 highest submissions were 0.9462. But I don’t regret it personally, because those submissions looked so inconspicuous: they were neither the highest CV nor the highest LB. I would never have thought of choosing them even with 10 extra days.</p>
<h3>[Final submissions]</h3>
<p>Ensemble 1 (trust lb) : Our best public LB 0.9723 (private 9268)<br>
Ensemble 2 (trust model number) : Simple Average Rank on 18 submissions between LB 0.9680 and 0.9700 (private 9397)<br>
Ensemble 3 (trust cv) : Simple Average Rank of 12 best models gives CV 0.9517 (private 9451)</p>
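<p>For reference, a simple average-rank ensemble over submission files can be sketched as follows (assuming the competition's standard <code>target</code> column; this is an illustration, not the team's exact blending code):</p>
<pre><code>import pandas as pd

def rank_average(submission_dfs):
    """Average-rank blend over several submission DataFrames with a 'target' column."""
    ranks = [df["target"].rank(pct=True) for df in submission_dfs]
    out = submission_dfs[0].copy()
    out["target"] = sum(ranks) / len(ranks)
    return out
</code></pre>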
<h3>[Single models]</h3>
<p>In the image section, we made improvements based on <a href="https://www.kaggle.com/cdeotte/triple-stratified-kfold-with-tfrecords" target="_blank">chris's notebook</a>.<br>
We tried training with B0-7 on image sizes of 256, 384, 512, 600, 768. </p>
<p>ResNets gave a smaller CV-LB gap, but we didn’t use them due to their low public scores.</p>
<p>Our best single model CV : 0.942-943, best LB : 9578</p>
<p>In the metadata section, we tried ridge, xgb, and lgb. But nothing was better than a weighted blend with <a href="https://www.kaggle.com/titericz/simple-baseline" target="_blank">Giba's baseline</a></p>
<h3>[What worked]</h3>
<p>BCE + Focal loss (FL) => details at the end<br>
Using EfficientNets at their best resolutions, e.g. 384 with B4, 512 with B5<br>
Using upsampling and the 2018 external data<br>
GridMask: we found GridMask to be better than coarse dropout => details at the end<br>
Noisy-student weights, training 18 epochs with early stopping, and training 15 folds helped for some models</p>
<h3>[What didn't work]</h3>
<p>Adding hair augmentation<br>
Saving the model with the lowest loss<br>
Changing the seed<br>
Changing the class weights in FL<br>
Extreme upsampling, like 25X upsampling of the 2020 malignant samples</p>
<h3>[How we achieved public LB 9723]</h3>
<p>9603 : unweighted gmean on 3 single models</p>
<p>9643 : 9603 * 0.4+giba's baseline * 0.6</p>
<p>9694 : 9643+<a href="https://www.kaggle.com/datafan07/eda-modelling-of-the-external-data-inc-ensemble" target="_blank">9577</a> then <a href="https://www.kaggle.com/khoongweihao/post-processing-technique-c-f-1st-place-jigsaw" target="_blank">post-processing</a></p>
<p>9723 : minmax post-processing</p>
<p>When I reached 9694, I knew I was overfitting, because few winning solutions in Kaggle competitions are this complicated; but like most people, I could not restrain myself from seeking a higher LB score. Facts once again prove that simple solutions are better 😑</p>
<h3>[BCE+FL]</h3>
<pre><code>from tensorflow.keras import backend as K

def Focal_Loss(y_true, y_pred, alpha=0.25, gamma=2, weight=5):
    y_true = K.flatten(y_true)
    y_pred = K.flatten(y_pred)
    BCE = K.binary_crossentropy(y_true, y_pred)
    BCE_EXP = K.exp(-BCE)                                # exp(-BCE) = predicted prob of the true class
    alpha = alpha * y_true + (1 - alpha) * (1 - y_true)  # class-balancing factor
    focal_loss = K.mean(alpha * K.pow((1 - BCE_EXP), gamma) * BCE)
    # element-wise BCE plus a weighted (scalar) focal term; Keras reduces this to a mean
    return BCE + weight * focal_loss
</code></pre>
</code></pre>
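<p>For context, a loss with this <code>(y_true, y_pred)</code> signature can be passed directly to Keras, e.g. <code>model.compile(optimizer='adam', loss=Focal_Loss, metrics=[tf.keras.metrics.AUC()])</code> (the optimizer and metric here are placeholders of mine, not necessarily what the team used).</p>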
<h3>[Grid Mask]</h3>
<pre><code>import tensorflow as tf
# GridMask and transform() are helper utilities taken from public TF notebooks
# for this competition (grid-mask generation and random rotation); not defined here.

def add_mask(img, dim):
    num_grid = 3
    gm = GridMask(mode=0, num_grid=num_grid)
    gm.init_masks(dim, dim)
    init_masks = tf.cast(gm.masks[0], dtype='float32')
    init_masks = tf.stack([init_masks] * 3, axis=2)                 # replicate mask over RGB channels
    rotated_masks = transform(init_masks, DIM=init_masks.shape[0])  # randomly rotate the mask
    mask_single = tf.image.random_crop(rotated_masks, [dim, dim, 3])
    img = img * mask_single                                         # zero out the masked grid cells
    return img
</code></pre>
</code></pre>
<p>Thank you !</p> |
Santa 2020 - The Candy Cane Contest | 14th Place Solution (Looking for teammates) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2020 - The Candy Cane Contest
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Interesting, thanks for checking. Your team's agent seems very well-rounded. I remember you getting a similar win rate against <a href="https://www.kaggle.com/c/santa-2020/discussion/217537#1190549" target="_blank">mehdi_pedram</a> and <a href="https://www.kaggle.com/c/santa-2020/discussion/218453#1196994" target="_blank">nagiss</a>. When I tested our team's best agent against mehdi_pedram, we had a 10% win rate 🤕</p>
<p>We'd like to thank the Kaggle team for hosting the Santa 2020 competition during a tough year. In this competition, I had the pleasure of teaming up with <a href="https://www.kaggle.com/lihuajing" target="_blank">Alex</a>, <a href="https://www.kaggle.com/nischaydnk" target="_blank">Nischay Dhankhar</a>, <a href="https://www.kaggle.com/chaudharypriyanshu" target="_blank">Priyanshu Chaudhary</a>, and <a href="https://www.kaggle.com/soosten" target="_blank">Per von Soosten
(PS)</a>. This is a summary of our team's solution which placed 14th on the leaderboard. Our <a href="https://github.com/soosten/candy-canes/blob/main/agent.py" target="_blank">best agent</a> is relatively simple, with only about 100 lines of code (<a href="https://github.com/soosten/candy-canes" target="_blank">github repo</a>).</p>
<p>TL;DR: We relied on a Bayesian Bandit approach with a few heuristics.</p>
<h3>Local Simulations</h3>
<p>We used a pool of opponents to evaluate our agents. Running a large number of matches against these opponents gave us a rough idea of agent performance, but improvements in local win rate did not always translate well to the leaderboard. Our pool of opponents included some of our own agents and a few agents from public notebooks:</p>
<ul>
<li>opponent_agent.py and submission.py from <a href="https://www.kaggle.com/a763337092/pull-vegas-slot-machines-add-weaken-rate-continue5" target="_blank">pull_vegas_slot_machines add weaken rate continue5</a></li>
<li>thompson.py from <a href="https://www.kaggle.com/xhlulu/santa-2020-thompson-sampling" target="_blank">Santa 2020: Thompson Sampling</a></li>
<li>bayesian_ucb.py from <a href="https://www.kaggle.com/xhlulu/santa-2020-ucb-and-bayesian-ucb-starter" target="_blank">Santa 2020: UCB and Bayesian UCB Starter</a></li>
<li>submission.py from <a href="https://www.kaggle.com/ilialar/simple-multi-armed-bandit" target="_blank">Simple multi-armed bandit</a></li>
</ul>
<p>Thank you to <a href="https://www.kaggle.com/a763337092" target="_blank">Lindada焱焱焱</a>, <a href="https://www.kaggle.com/xhlulu" target="_blank">xhlulu</a> and <a href="https://www.kaggle.com/ilialar" target="_blank">Ilia Larchenko</a> for making these notebooks public. </p>
<h3>Bayesian approach</h3>
<p>The basic logic of our final submission is to maintain a set of distributions reflecting what we know about the current reward probability of each arm. At the outset, these distributions are initialized to a uniform prior. Every time we get the result of our previous pull, we perform a Bayesian update on that arm's distribution. This is a classical idea for dealing with multi-armed bandits.</p>
<h3>Choosing the next arm</h3>
<p>We decide on which new arm to pull based on a simplified variant of the Bayesian UCB algorithm. We get an optimistic estimate of the reward probability of each arm by adding the mean and the standard deviation of the corresponding distribution and then choose an arm with a maximal estimate. An important part of the game is to understand when the opponent has discovered an arm with a high reward probability before the opponent has exploited that arm enough to dramatically decrease the probability. We tried to incorporate this information by applying some temporary Bayesian updates to the distribution before computing the mean and standard deviation comprising the estimate. More concretely:</p>
<ul>
<li>If the opponent pulled an arm more than twice in the last 10 turns, we assumed all of those pulls resulted in a reward.</li>
<li>If the opponent pulled an arm exactly once in the last 100 turns, we assumed that pull did not result in a reward.</li>
</ul>
<p>Our local simulations indicated that the optimal strategy never pulls on an arm with a reward probability less than 0.22. Therefore, we stopped applying temporary updates to arms for which the true estimate (without any temporary updates) was less than 0.25. This ensured that we did not follow the opponent to arms which we already knew were bad.</p>
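<p>A rough numpy sketch of this selection rule, using Beta posteriors for illustration (the windows and thresholds come from the text above; decay handling and the exact distributions the team used are omitted here):</p>
<pre><code>import numpy as np

N_ARMS = 100
wins = np.zeros(N_ARMS)      # our rewarded pulls per arm
losses = np.zeros(N_ARMS)    # our unrewarded pulls per arm

def choose_arm(opp_counts_last10, opp_counts_last100):
    """Mean + std of a Beta posterior, after applying the temporary updates described above."""
    true_mean = (1 + wins) / (2 + wins + losses)
    # temporary updates, only for arms that do not already look bad (estimate below 0.25)
    extra_wins = np.where((opp_counts_last10 > 2) & (true_mean >= 0.25), opp_counts_last10, 0)
    extra_losses = np.where((opp_counts_last100 == 1) & (true_mean >= 0.25), 1, 0)
    a = 1 + wins + extra_wins
    b = 1 + losses + extra_losses
    mean = a / (a + b)
    std = np.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))   # std of a Beta(a, b) distribution
    return int(np.argmax(mean + std))
</code></pre>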
<h3>Decaying thresholds</h3>
<p>In principle, the distributions should also reflect the decay in the reward probability after each arm is pulled. However, we noticed that our agents that did not use the opponent's actions to decay their estimates performed surprisingly well. One explanation for this is that not decaying encourages the agent to exploit good machines that the opponent has found; this is especially important early in the game. On the other hand, having correct estimates of the thresholds is important near the end of the game. We attempted to get the best of both worlds by averaging these two estimates (with/without opponent decay) with a weight that favors non-decayed estimates early in the game and decayed estimates later in the game.</p>
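<p>The exact weighting schedule is not given in the write-up, but the blending idea can be sketched as:</p>
<pre><code>def blended_estimate(est_no_decay, est_with_decay, step, total_steps=2000):
    # Sketch only: a linear schedule; the team's actual weighting is not specified.
    w = step / total_steps          # 0 early in the game, 1 at the end
    return (1 - w) * est_no_decay + w * est_with_decay
</code></pre>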
<h3>Learning from opponent actions</h3>
<p>Finally, some opponent actions are so indicative of the result of their pulls that we conservatively applied some permanent Bayesian updates to the distribution based on them. We hardcoded the logic needed to learn from these opponent actions. This can work well against agents who are greedy with respect to their threshold estimates, but these hardcoded rules are easy to counter by being deceptive. Here are the rules that we used:</p>
<ul>
<li>If the opponent repeats a first-time action, assume the first time is a success.</li>
<li>If the opponent doesn't pull a lever for a long time after pulling it for the first time, it is probably because the first time was a failure.</li>
</ul> |
Santa 2020 - The Candy Cane Contest | 409th Place Solution - Copy Opponent Move Unless Win | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2020 - The Candy Cane Contest
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you!</p>
<p>I didn't really have time to properly explore this competition to the same depth I explored Rock Paper Scissors, so in this competition, I cheated!</p>
<p>My highest scoring original agent was: </p>
<p><strong>Rock Paper Candy - Copy Opponent Move Unless Win (744.8)</strong></p>
<ul>
<li><a href="https://www.kaggle.com/jamesmcguigan/rock-paper-candy-copy-opponent-move-unless-win" target="_blank">https://www.kaggle.com/jamesmcguigan/rock-paper-candy-copy-opponent-move-unless-win</a></li>
</ul>
<p>Does exactly what it says on the tin. Rather than implement my own logic for finding the best handle to pull, I implemented the Little Sister strategy. Little Sister doesn't know how to play the game, she simply runs after her Big Brother and plays whatever move he played last. Unless of course she wins, in which case she keeps pulling the same handle until she loses and then runs after Big Brother again.</p>
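<p>A sketch of this agent in the kaggle-environments interface; the observation fields (<code>step</code>, <code>reward</code>, <code>lastActions</code>, <code>agentIndex</code>) follow the Santa 2020 environment as used in the public notebooks mentioned here, so treat the exact interface as an assumption:</p>
<pre><code># "Copy opponent move unless win": keep pulling the same arm while it pays out,
# otherwise copy whatever the opponent pulled last turn.
last_reward = 0
my_last_action = None

def agent(observation, configuration):
    global last_reward, my_last_action
    if observation.step == 0:
        my_last_action = 0                      # arbitrary first pull
        return my_last_action
    got_reward = observation.reward > last_reward
    last_reward = observation.reward
    if got_reward:
        action = my_last_action                 # repeat while winning
    else:
        opponent_index = 1 - observation.agentIndex
        action = observation.lastActions[opponent_index]   # run after Big Brother
    my_last_action = action
    return action
</code></pre>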
<p>I tried a few variations to modify the logic of when to repeat and when to copy, but the original and simplest logic worked best. The result of this strategy was 1.3% below the exact midpoint of the leaderboard!</p>
<p>This approach outperformed all my failed attempts to implement UCB and Multi-Armed-Bandit correctly.</p>
<p>My other approach was to compile a big list of public notebooks, play them against each other, and repeatedly submit the best ones to the competition as my own work, which really feels like cheating now that I have done it!</p>
<ul>
<li><a href="https://www.kaggle.com/jamesmcguigan/santa-2020-agents-comparison" target="_blank">https://www.kaggle.com/jamesmcguigan/santa-2020-agents-comparison</a></li>
</ul>
<p>Shoutouts to:</p>
<p><a href="https://www.kaggle.com/felipefonte99" target="_blank">@felipefonte99</a> </p>
<ul>
<li>Your ensemble model scored 859.7 - 793.0 over 14 submissions, earning me 335th place!</li>
<li><a href="https://www.kaggle.com/felipefonte99/ensemble-pull-vegas-decision-tree-model" target="_blank">https://www.kaggle.com/felipefonte99/ensemble-pull-vegas-decision-tree-model</a></li>
</ul>
<p><a href="https://www.kaggle.com/iehnrtnc" target="_blank">@iehnrtnc</a></p>
<ul>
<li>Your agent3.py notebook scored 823.6 - 781.4 over 6 submissions.</li>
<li><a href="https://www.kaggle.com/iehnrtnc/santa2020" target="_blank">https://www.kaggle.com/iehnrtnc/santa2020</a></li>
</ul>
<p><a href="https://www.kaggle.com/sirishks" target="_blank">@sirishks</a></p>
<ul>
<li>Your advice to "Keep pulling the same bandit as long as reward keeps coming!" helped inspire the Copy Opponent Move Unless Win strategy</li>
<li>Your notebook scored 761.0 with a single submission yet still outperformed my own notebook idea<br>
= <a href="https://www.kaggle.com/sirishks/pull-vegas-slot-machines" target="_blank">https://www.kaggle.com/sirishks/pull-vegas-slot-machines</a></li>
</ul> |
Santa 2020 - The Candy Cane Contest | 67th place solution - and good result in my own competition and challenge | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2020 - The Candy Cane Contest
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks, i'll try! ;)</p>
<p>Congrats to the top solutions and a great competition.<br>
One learns new things in every competition; it is a perfect way to follow the journey of the AI world and also to contribute to important problems. Thanks to Kaggle and to the fellow Kagglers contributing valuable insights, ideas, and notebooks.</p>
<p>For my 67th place solution (not at the top, but it shows that even a basic approach can reach a good medal place), I used 80M steps from agents with scores of 900 and above to train a LightGBM regressor with custom parameters, used together with the public greedy algorithm. I tried many different approaches, but this one seemed to do best when I updated it with the latest games played in the final days. I also submitted 2-3 extra copies of the best model to reduce the risk of landing in an outlier situation after the deadline.</p>
<p>I also set myself a personal competition and challenge this time: to take part in 5 different Kaggle competitions in parallel, trying to get good results in every one of them with an equal distribution of time, just for fun and for the learning. <br>
The 1st and 2nd finished the previous week, where I got a bronze medal and a silver; now a bronze in this one and a silver in the 4th, which finalized at the same time yesterday. Much better than I expected, and I am grateful and glad to see that lots of reading and testing pay off.<br>
I entered my first Kaggle competition only 1.5 years ago, so I am glad that I managed my own personal challenge quite well after all. <br>
Altogether I think this can move me into the overall top 50 of the Kaggle rankings, still as a Kaggle Expert. I must reach that gold some year 😊 but that is even harder of course; I am humble about that challenge, but everything is possible.</p>
<p>Happy Kaggleing!</p> |
Santa 2020 - The Candy Cane Contest | 9th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2020 - The Candy Cane Contest
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for hosting this exciting competition ! I'm happy to become a Competition Grandmaster.</p>
<h1>Code</h1>
<p>Source code is available here.<br>
<a href="https://github.com/toshi-k/kaggle-santa-2020" target="_blank">https://github.com/toshi-k/kaggle-santa-2020</a></p>
<h1>Basic Strategy</h1>
<p>In this competition, the reward was decided by comparing the threshold with a randomly generated number. It would be easy to calculate the probability of getting a reward if we knew the thresholds, but since the agents can't see the thresholds during the game, we had to estimate them.</p>
<p>Like other teams, I downloaded the match history through the Kaggle API and created a dataset for supervised learning. We can see the true value of <code>threshold</code> at each round in the API response, so I used it as the target variable.</p>
<p>In the middle of the competition, I found out that quantile regression is much better than conventional L2 regression. I think it can adjust the balance between <strong>Explore</strong> and <strong>Exploit</strong> through the percentile parameter.</p>
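<p>A minimal LightGBM sketch of this setup (the data below is a random placeholder; in practice X holds the features listed in the next section and y the true thresholds taken from the replays):</p>
<pre><code>import lightgbm as lgb
import numpy as np

X, y = np.random.rand(1000, 32), np.random.rand(1000) * 100   # placeholder replay dataset

params = {
    "objective": "quantile",
    "alpha": 0.65,            # the percentile that worked best on the leaderboard
    "learning_rate": 0.05,
    "verbose": -1,
}
model = lgb.train(params, lgb.Dataset(X, label=y), num_boost_round=4000)

# at play time: predict the (quantile of the) threshold for every machine and pull the argmax
preds = model.predict(X[:100])
best_machine = int(np.argmax(preds))
</code></pre>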
<h1>Features</h1>
<p>I made 32 features for training and prediction. I tried some other features, but they didn't work well; since adding unnecessary features sometimes caused timeout errors, I didn't employ them.</p>
<table>
<thead>
<tr>
<th> # </th>
<th>Name</th>
<th>Explanation</th>
</tr>
</thead>
<tbody>
<tr>
<td>#1</td>
<td>round</td>
<td>number of round in the game (0-1999)</td>
</tr>
<tr>
<td>#2</td>
<td>last_opponent_chosen</td>
<td>whether the opponent agent chose this machine in the last step or not</td>
</tr>
<tr>
<td>#3</td>
<td>second_last_opponent_chosen</td>
<td>whether the opponent agent chose this machine in the second last step or not</td>
</tr>
<tr>
<td>#4</td>
<td>third_last_opponent_chosen</td>
<td>whether the opponent agent chose this machine in the third last step or not</td>
</tr>
<tr>
<td>#5</td>
<td>opponent_repeat_twice</td>
<td>whether the opponent agent continued to choose this machine in the last two rounds (#2 x #3)</td>
</tr>
<tr>
<td>#6</td>
<td>opponent_repeat_three_times</td>
<td>whether the opponent agent continued to choose this machine in the last three rounds (#2 x #3 x #4)</td>
</tr>
<tr>
<td>#7</td>
<td>num_chosen</td>
<td>how many times the opponent and my agent chose this machine</td>
</tr>
<tr>
<td>#8</td>
<td>num_chosen_mine</td>
<td>how many times my agent chose this machine</td>
</tr>
<tr>
<td>#9</td>
<td>num_chosen_opponent</td>
<td>how many times the opponent agent chose this machine (#7 - #8)</td>
</tr>
<tr>
<td>#10</td>
<td>num_get_reward</td>
<td>how many times my agent got a reward from this machine</td>
</tr>
<tr>
<td>#11</td>
<td>num_non_reward</td>
<td>how many times my agent didn't get a reward from this machine</td>
</tr>
<tr>
<td>#12</td>
<td>rate_mine</td>
<td>ratio of my choices against the total number of choices (#8 / #7)</td>
</tr>
<tr>
<td>#13</td>
<td>rate_opponent</td>
<td>ratio of opponent choices against the total number of choices (#9 / #7)</td>
</tr>
<tr>
<td>#14</td>
<td>rate_get_reward</td>
<td>ratio of my rewarded choices against the total number of choices (#10 / #7)</td>
</tr>
<tr>
<td>#15</td>
<td>empirical_win_rate</td>
<td>posterior expectation of threshold value based on my choices and rewards</td>
</tr>
<tr>
<td>#16</td>
<td>quantile_10</td>
<td>10% point of posterior distribution of threshold based on my choices and rewards</td>
</tr>
<tr>
<td>#17</td>
<td>quantile_20</td>
<td>20% point of posterior distribution of threshold based on my choices and rewards</td>
</tr>
<tr>
<td>#18</td>
<td>quantile_30</td>
<td>30% point of posterior distribution of threshold based on my choices and rewards</td>
</tr>
<tr>
<td>#19</td>
<td>quantile_40</td>
<td>40% point of posterior distribution of threshold based on my choices and rewards</td>
</tr>
<tr>
<td>#20</td>
<td>quantile_50</td>
<td>50% point of posterior distribution of threshold based on my choices and rewards</td>
</tr>
<tr>
<td>#21</td>
<td>quantile_60</td>
<td>60% point of posterior distribution of threshold based on my choices and rewards</td>
</tr>
<tr>
<td>#22</td>
<td>quantile_70</td>
<td>70% point of posterior distribution of threshold based on my choices and rewards</td>
</tr>
<tr>
<td>#23</td>
<td>quantile_80</td>
<td>80% point of posterior distribution of threshold based on my choices and rewards</td>
</tr>
<tr>
<td>#24</td>
<td>quantile_90</td>
<td>90% point of posterior distribution of threshold based on my choices and rewards</td>
</tr>
<tr>
<td>#25</td>
<td>repeat_head</td>
<td>how many times my agent chose this machine before the opponent agent chose this machine for the first time</td>
</tr>
<tr>
<td>#26</td>
<td>repeat_tail</td>
<td>how many times my agent chose this machine after the opponent agent chose this machine for the last time</td>
</tr>
<tr>
<td>#27</td>
<td>repeat_get_reward_head</td>
<td>how many times my agent got a reward from this machine before my agent's first non-rewarded pull or the opponent agent's first choice of this machine</td>
</tr>
<tr>
<td>#28</td>
<td>repeat_get_reward_tail</td>
<td>how many times my agent got a reward from this machine after my agent's last non-rewarded pull or the opponent agent's last choice of this machine</td>
</tr>
<tr>
<td>#29</td>
<td>repeat_non_reward_head</td>
<td>how many times my agent didn't get a reward from this machine before my agent's first rewarded pull or the opponent agent's first choice of this machine</td>
</tr>
<tr>
<td>#30</td>
<td>repeat_non_reward_tail</td>
<td>how many times my agent didn't get a reward from this machine after my agent's last rewarded pull or the opponent agent's last choice of this machine</td>
</tr>
<tr>
<td>#31</td>
<td>opponent_repeat_head</td>
<td>how many times the opponent agent chose this machine before my agent chose this machine for the first time</td>
</tr>
<tr>
<td>#32</td>
<td>opponent_repeat_tail</td>
<td>how many times the opponent agent chose this machine after my agent chose this machine last time</td>
</tr>
</tbody>
</table>
<p>The feature importance of my best agent is as below.</p>
<p><img src="https://raw.githubusercontent.com/toshi-k/kaggle-santa-2020/master/src/02_lgbm/importance.png" alt="importance"></p>
<p>It may be interesting that the most important feature was the <em>round</em> in the game. I think the agent can change its strategy based on the round, and that is something essential it learned from the history.</p>
<p>Some other features related to the opponent agent were also important (<em>num_chosen_opponent</em>, <em>rate_opponent</em>). An ML-based agent can utilize the behavior of the opponent agent, which can't be handled by conventional approaches.</p>
<h1>Top Agents</h1>
<p>I tried several packages and parameters. It seemed that LightGBM was the best for my settings.</p>
<p>Although some CatBoost agents achieved performance similar to LightGBM, CatBoost didn't reach LightGBM's best score. Since all features were numeric variables, it could not take advantage of its special handling of categorical variables.</p>
<p>I also tried XGBoost with a pairwise loss, which compared the thresholds of machines in the same round. Although I thought it was a natural training setting, the performance of those agents was worse than with quantile regression.</p>
<p>NumRound=4000 was close to the maximum that avoided timeout errors. I set NumRound=3000 or NumRound=3500 for some agents as safer settings. I mainly tuned the learning rate to control how closely the model fit the dataset.</p>
<table>
<thead>
<tr>
<th>Regressor</th>
<th>Loss</th>
<th>NumRound</th>
<th>LearningRate</th>
<th>LB Score</th>
<th>SubmissionID</th>
</tr>
</thead>
<tbody>
<tr>
<td>LightGBM</td>
<td>Quantile (0.65)</td>
<td>4000</td>
<td>0.05</td>
<td>1449.4</td>
<td><a href="https://www.kaggle.com/c/santa-2020/submissions?dialog=episodes-submission-19318812" target="_blank">19318812</a></td>
</tr>
<tr>
<td>LightGBM</td>
<td>Quantile (0.65)</td>
<td>4000</td>
<td>0.10</td>
<td>1442.1</td>
<td><a href="https://www.kaggle.com/c/santa-2020/submissions?dialog=episodes-submission-19182047" target="_blank">19182047</a></td>
</tr>
<tr>
<td>LightGBM</td>
<td>Quantile (0.65)</td>
<td>3000</td>
<td>0.03</td>
<td>1438.8</td>
<td><a href="https://www.kaggle.com/c/santa-2020/submissions?dialog=episodes-submission-19042049" target="_blank">19042049</a></td>
</tr>
<tr>
<td>LightGBM</td>
<td>Quantile (0.66)</td>
<td>3500</td>
<td>0.04</td>
<td>1433.9</td>
<td><a href="https://www.kaggle.com/c/santa-2020/submissions?dialog=episodes-submission-19137024" target="_blank">19137024</a></td>
</tr>
<tr>
<td>CatBoost</td>
<td>Quantile (0.65)</td>
<td>4000</td>
<td>0.05</td>
<td>1417.6</td>
<td><a href="https://www.kaggle.com/c/santa-2020/submissions?dialog=episodes-submission-19153745" target="_blank">19153745</a></td>
</tr>
<tr>
<td>CatBoost</td>
<td>Quantile (0.67)</td>
<td>3000</td>
<td>0.10</td>
<td>1344.5</td>
<td><a href="https://www.kaggle.com/c/santa-2020/submissions?dialog=episodes-submission-19170829" target="_blank">19170829</a></td>
</tr>
<tr>
<td>LightGBM</td>
<td>MSE</td>
<td>4000</td>
<td>0.03</td>
<td>1313.3</td>
<td><a href="https://www.kaggle.com/c/santa-2020/submissions?dialog=episodes-submission-19093039" target="_blank">19093039</a></td>
</tr>
<tr>
<td>XGBoost</td>
<td>Pairwised</td>
<td>1500</td>
<td>0.10</td>
<td>1173.5</td>
<td><a href="https://www.kaggle.com/c/santa-2020/submissions?dialog=episodes-submission-19269952" target="_blank">19269952</a></td>
</tr>
</tbody>
</table>
<h1>Tips</h1>
<h2>Logarithm transformation for probabilistic calculation</h2>
<p>A long chain of multiplications underflows to zero because of floating-point precision. For the calculation of the posterior distribution of the threshold, a logarithm transformation followed by renormalization solved this problem.</p>
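<p>A generic sketch of the idea (not the exact code):</p>
<pre><code>import numpy as np
from scipy.special import logsumexp

def normalized_posterior(log_likelihoods):
    """Accumulate log-likelihoods instead of multiplying probabilities, then
    renormalize with logsumexp so the result never underflows to zero."""
    log_post = np.sum(log_likelihoods, axis=0)   # sum of logs == product of probabilities
    log_post -= logsumexp(log_post)              # renormalize in log space
    return np.exp(log_post)
</code></pre>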
<h2>LRU Cache</h2>
<p>Precise calculation of the posterior distribution is time consuming and can cause timeout errors. I employed an LRU (Least Recently Used) cache and reused the feature values for identical patterns of actions and rewards.</p>
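<p>A small illustration with <code>functools.lru_cache</code>; the cached function here is a simplified posterior that ignores decay and opponent pulls, just to show the caching pattern:</p>
<pre><code>from functools import lru_cache
import numpy as np

@lru_cache(maxsize=100_000)
def threshold_posterior(wins, losses):
    """Cache the posterior over candidate reward probabilities for a given
    pattern of rewarded/unrewarded pulls; repeated patterns are computed once."""
    t = np.arange(101) / 101.0
    log_post = wins * np.log(t + 1e-12) + losses * np.log(1 - t + 1e-12)
    log_post -= log_post.max()
    post = np.exp(log_post)
    return tuple(post / post.sum())     # tuple so the cached value is immutable
</code></pre>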
<h1>What I should have done (to beat more agents)</h1>
<p>When I calculated the winning rate of the agent for features #15-24, I set the probability approximately as <code>p = threshold ** decay / 101</code>. But I missed one important point here: the random number generated in each round was an integer, so I should have rounded this part, like <code>p = round(threshold ** decay) / 101</code>.</p> |
Santa 2021 - The Merry Movie Montage | 2428 by the numbers - no VRP, no TSP, no MIP | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2021 - The Merry Movie Montage
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks and congrats to you also. I'll be reading through your MILP Journey series with great interest!</p>
<p>Thanks to everyone for a fun competition, and congratulations to all who stuck with it. Extra congrats to those who got to 2428 a month before anyone else.</p>
<p>Along the way to 2428 I discovered that one solution method was fairly simple - not to be confused with "easy". It was only after using LKH3 to get to 2440 and spending a <em>whole lot of time</em> thinking about wildcards that I saw the underlying structure. I suspect the top finishers and perhaps others got to a similar place and went straight there without the VRP model. </p>
<p>The picture below from superpermutations.com shows the start of the palindromic, aka classic version, of the superperm for n=7. A great paper by <a href="https://www.kaggle.com/vincevatter" target="_blank">@vincevatter</a> described the structure of these first 42 permutations as a "trajectory". A trajectory has 6 cycles of 7 permutations each. Guess how many trajectories there are? 5! (meaning 120, which makes sense: 120 trajectories of 42 permutations each cover all 7! = 5040 permutations for n=7). And a nice property of the trajectories is that the last permutation always begins with 76. So I chose that sequence as my Mr & Mrs Santa.</p>
<p>Once I saw this feature, I noticed my superperms from LKH followed the same pattern. You can take the last 76xxxxx perm in the trajectory and move it to the front of the line, and then trace the classic route to the end. Except you don't need to worry about the last cycle. All of those permutations can be made by combinations of 76xxxxx perms. Assuming you break the trajectories into 3 groups of 40, you then have to add in the complementary set of 80 mandatory 76ers to each group. So the game then becomes how to sequence the 76ers and the longer trajectory substrings to cover all of the orphaned permutations. </p>
<p>The math works out quite nicely. Each reworked trajectory for this puzzle is 47 characters long (36 perms + 5 wasted characters + 6 characters at the start). There are 40 of these per route for a subtotal of 1880. Added in are the 560 characters from the rest of the required perms (80*7). And that's the 2440.</p>
<p>Others can tell the wildcard story better than me. If you look at the end of the trajectory you see the two highlighted permutations that form a repeating sequence of 7 characters.</p>
<p><img src="https://i.imgur.com/4rWiweM.png" alt="superperms"></p> |
Santa 2021 - The Merry Movie Montage | 2430 Solution - Using My 2500 Public Notebook | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2021 - The Merry Movie Montage
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks Kaggle for an exciting and fun 2021 Santa optimization challenge.</p>
<h1>My LKH Public Notebook - LB 2500</h1>
<p>In this competition, I ran my LKH public notebook <a href="https://www.kaggle.com/cdeotte/santa-2021-tsp-baseline-2500" target="_blank">here</a> over and over. Each time I started with my best solution, randomly swapped 1-cycles (groups of 7 permutations) between strings, removed repeated non-mandatory permutations among strings, and randomly added wildcards (to mandatory permutations). After thousands of runs, LKH finally found a 2431 solution! My procedure wasn't improving the score anymore, so I loaded 2431 into LKH MinMax CTSP <a href="https://www.kaggle.com/kostyaatarik/colored-traveling-salesman-problem" target="_blank">here</a>, added support for wildcards, and found 2430! </p>
<h1>Step 1 - Begin with the 5913 Standard Superpermutation</h1>
<p>I begin with the standard superpermutation. When you break it into 3 equal-sized substrings, you keep all the 1,2,3,4-cycles together; the splits occur at 5- and 6-cycles (so only distances of 5 and 6 are broken apart). <strong>Then run LKH.</strong><br>
<img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Jan-2022/santa1b.png" alt=""></p>
<h1>Step 2 - Randomly Swap Cycles and Run LKH</h1>
<p>Starting with my best solution (from previous LKH runs), I would find 1-cycles (groups of 7 permutations) that were followed by a distance of 4 or more. Among all these, I would swap nine to fifteen 1-cycles between adjacent strings. <strong>Then run LKH.</strong><br>
<img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Jan-2022/santa2b.png" alt=""></p>
<h1>Step 3 - Permutation Balancing</h1>
<p>Early on, I observed that some non-mandatory permutations were contained in multiple strings. So I would take my best solution and remove non-mandatory permutations from a string when they were already present in another string. <strong>Then run LKH.</strong></p>
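<p>A small sketch of how such duplicates can be found (plain 1-7 strings are assumed here, ignoring wildcards):</p>
<pre><code>def perms_in(s):
    """All 7-character windows of s that are permutations of 1234567."""
    return {s[i:i + 7] for i in range(len(s) - 6) if set(s[i:i + 7]) == set("1234567")}

def duplicated_non_mandatory(strings):
    """Non-mandatory permutations covered by more than one of the three strings."""
    counts = {}
    for s in strings:
        for p in perms_in(s):
            counts[p] = counts.get(p, 0) + 1
    return {p for p, c in counts.items() if c > 1 and not p.startswith("12")}
</code></pre>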
<h1>Step 4 - Randomly Add Wildcards and Run LKH</h1>
<p>Starting with my best solution, I would find mandatory permutations that did <strong>not</strong> have their other 6 rotations (1-cycle) included in the string. Among these, I would pick two and add a wildcard replacing their first number 1. Then update the distance matrix. <strong>Then run LKH.</strong><br>
<img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Jan-2022/santa3b.png" alt=""></p>
<h1>Step 5 - LKH CTSP with Wildcards boost 2431 to 2430</h1>
<p>My procedure worked well but it stopped improving the score around 2431. I obtained 2430 by loading my 2431 solution into <strong>LKH MinMax CTSP</strong> <a href="https://www.kaggle.com/kostyaatarik/colored-traveling-salesman-problem" target="_blank">here</a>, adding wildcard support, and running it overnight!</p>
<p>😄Thanks. Congratulations on the Silver medal. I was watching on the last day and was happy to see you didn't get bumped to Bronze. Good job!</p>
<p>Great question <a href="https://www.kaggle.com/crained" target="_blank">@crained</a>. This competition was all new to me. I had never worked on TSP or LKH before. When I approach a new problem, I start by looking at examples and playing with code.</p> |
Santa 2021 - The Merry Movie Montage | 3rd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2021 - The Merry Movie Montage
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank the organizer for another year of interesting problems, and all the other participants for making the competition very fun. I would like to thank my teammates <a href="https://www.kaggle.com/nagiss" target="_blank">@nagiss</a>, <a href="https://www.kaggle.com/kibuna" target="_blank">@kibuna</a> and <a href="https://www.kaggle.com/hiroakikitahara" target="_blank">@hiroakikitahara</a> for the great collaboration. </p>
<p>In this post, rather than explaining the solution of 2428 directly, I would like to explain what we thought step by step. It may be a bit long, but I hope it will be helpful.</p>
<h2>First step</h2>
<p>Since dealing with 🌟 is complicated and we can only use at most two 🌟, we first worked on the solution without 🌟. If we fix which of the three strings each permutation appears in, we can solve the problem as an Asymmetric Travelling Salesperson Problem (ATSP). Since there are all 7! permutations and 2 * 5! additional mandatory permutations, if the permutations could be equally distributed among the three strings, the number of vertices per string would be approximately (7! + 5! * 2) / 3 = 1760. This 1760-vertex ATSP can be optimally solved in a realistic time using LKH and Concorde*. Thus, we only need to think about how to assign the permutations to the three strings.</p>
<p>*This means that it was true for this problem, not that it is true in general. For example, the superpermutation for n = 6 is an ATSP with 720 vertices, but it is considered difficult to find the optimal solution with Concorde in a straightforward way (<a href="https://arxiv.org/abs/1408.5108" target="_blank">Houston 2014</a>).</p>
<h2>Assignment based on 2-cycle (Score: 2480)</h2>
<p>First, we considered assigning the 5040 permutations in such a way that each string contains the same number of mandatory permutations. Considering what the other permutations should look like, it seems that permutations that can be reached at cost 1 (e.g. <code>7123456</code> -> <code>1234567</code>) should be in the same string as much as possible. Also, we want a group consisting of permutations connected by cost-1 moves (e.g. <code>2345671</code> -> <code>3456712</code> -> … -> <code>1234567</code>), and a group that can be reached from it at cost 2 (e.g. <code>3456721</code> -> <code>4567213</code> -> … -> <code>1345672</code>; the cost from <code>1234567</code> to <code>3456721</code> is 2), to be in the same string. Repeating such a cost 2 group-to-group move n (= 7) times brings it back to the original group and forms a cycle. This cycle is called a <a href="http://www.gregegan.net/SCIENCE/Superpermutations/Superpermutations.html#:~:text=method%3A%20Supermutate.js-,2%2Dcycle%20graphs,-A%20third%20way" target="_blank">2-cycle</a> in the context of superpermutations (see the link for a more precise definition). We tried to use these 2-cycles to do the assignment.</p>
<p>There are 120 2-cycles, so 40 of them are assigned to each of the three strings. Each 2-cycle contains exactly one mandatory permutation, so at this point, every string will contain 40 mandatory permutations. By adding the 80 mandatory permutations that have not yet been added, we can satisfy the constraint in question.</p>
<p>The pseudo-Python code looks like this. After several iterations of solving the problem as an ATSP using LKH or Concorde with random 2-cycle assignment, a score of 2480 was achieved. We tried various methods of 2-cycle assignment, but could not find a solution with a score lower than 2480 without the 2-cycle splitting described in the following section.</p>
<pre><code>import itertools
import numpy as np

twocycle_nodes = []
for twocycle in ["12" + "".join(s) for s in itertools.permutations("34567")]:
    v = twocycle
    nodes = []
    for y in range(6):
        nodes.append(v)
        v = v[2:] + v[1] + v[0]   # cost-2 edge to the start of the next 1-cycle
        for x in range(6):
            nodes.append(v)
            v = v[1:] + v[0]      # cost-1 edge (rotation within the 1-cycle)
    assert v == twocycle          # the 42 permutations close back on the mandatory one
    twocycle_nodes.append(nodes)

assignment = np.arange(120) % 3
np.random.shuffle(assignment)
groups = [[], [], []]
for idx_groups, nodes in zip(assignment, twocycle_nodes):
    # (the splitting of the 2-cycle is added here in the next section)
    groups[idx_groups].extend(nodes)
    mandatory = nodes[0]
    assert mandatory.startswith("12")
    # the two strings that were not assigned this 2-cycle still need its mandatory permutation
    groups[(idx_groups + 1) % 3].append(mandatory)
    groups[(idx_groups + 2) % 3].append(mandatory)

# solve_as_tsp builds the ATSP over the given permutations and calls LKH / Concorde
solution = (solve_as_tsp(groups[0]), solve_as_tsp(groups[1]), solve_as_tsp(groups[2]))
</code></pre>
<h2>Splitting 2-cycle (Score: 2440 w/o 🌟, 2430 w/ 🌟)</h2>
<p>When we checked the 2480 solution, we found that many cost-7 edges were used to make mandatory permutations appear that are not in the 40 2-cycles assigned to that string. For example, all permutations of the 2-cycle including <code>1234567</code> were assigned to only one string, so the two strings that were not assigned this 2-cycle were likely to use a cost-7 edge to reach <code>1234567</code>. To alleviate this problem, we considered giving some of the permutations in the 2-cycle to the other strings. Specifically, we gave away the permutations that can reach <code>12xxxxx</code> using only cost-1 edges. The splitting part of the pseudo code looks like this.</p>
<pre><code>groups = [[], [], []]
for idx_groups, nodes in zip(assignment, twocycle_nodes):
    mandatory = nodes[0]    # e.g. "1234567"
    neighbors = nodes[-6:]  # e.g. ['2345671', '3456712', '4567123', '5671234', '6712345', '7123456']
    nodes = nodes[1:-6]
    groups[idx_groups].extend([mandatory] + nodes)
    groups[(idx_groups + 1) % 3].extend([mandatory] + neighbors[:3])
    groups[(idx_groups + 2) % 3].extend([mandatory] + neighbors[3:])
</code></pre>
<p>We tried several assignments in this way and got a 2440 solution without 🌟. Furthermore, by adding the following two ideas, we reached 2430 with 🌟.</p>
<ul>
<li>For 2-cycle assignment, stop assigning randomly and create n = 7 2-cycles starting from n = 5 2-cycles and assign them in order.</li>
<li>Assuming that one 🌟 only shrinks one edge, randomly choose two mandatory permutations and replace the first 1 with 8 (🌟)</li>
</ul>
<h2>Non-TSP Solution by eijirou-san (Score: 2440 w/o 🌟, 2435 w/ 🌟)</h2>
<p>Here, I would like to share the solution by <a href="https://www.kaggle.com/eijirou-san" target="_blank">@eijirou-san</a>, who held first place on the leaderboard in the early stage of the competition. In this solution, the procedure for splitting 2-cycles is thought through in depth, and the 2435 solution can be built without TSP solvers.</p>
<p>Consider a block constructed by deleting the permutations after the last 2-edge of a 2-cycle and connecting it with a 3-edge to a group brought from another 2-cycle (see the figure below). The total cost of this block is 56, but since it ends in <code>12</code>, it can be considered 54 due to the two-letter overlap with the next group. Furthermore, since one block contains two mandatory permutations, we can construct a solution with a score of 54 x 40 + 7 x 40 = 2440 by adding the remaining 40 mandatory permutations at cost 7 each.</p>
<p><img src="https://user-images.githubusercontent.com/8179588/149190973-a0ad6925-e501-45bf-8e18-4a0ce6d93e66.png" alt="image"></p>
<p>We can construct the 2435 solution by inserting two 12xxxxx while utilizing two 🌟 (e.g. splitting 1672345 -> 7234561 into 1682345 -> 8234576 and 1278345 -> 7834561). However, we could not find a better solution than 2435 by this splitting method and 🌟.</p>
<h2>The 2428 solution</h2>
<p>We verified the lower bound in a similar way to the <a target="_blank">one using TSP posted in Discussion</a>, and confirmed that this 2440 solution is optimal without 🌟, so we moved on to consider the construction of a solution using 🌟.</p>
<ul>
<li>As the 2440 w/o 🌟 method, Divide the 2-cycle into four blocks: S (1 mandatory permutation required), A (35 permutations), B (4 permutations), and C (2 permutations).<br>
<img src="https://user-images.githubusercontent.com/8179588/149297004-3e0e2971-637d-4c5e-93df-3e452d1bff33.png" alt="image"></li>
<li>In this case, one string should contain all 120 types of S and 40 of each of A, B, and C to achieve an even distribution.</li>
<li>Due to the position of the "1" and "2", the minimum cost of the transition between each block is as follows (all transitions are required to be the minimum cost to achieve 2428)<ul>
<li>S -> S … 7 (Can move to a different 2-cycle)</li>
<li>S -> A … 2 (Moves only within the same 2-cycle)</li>
<li>S -> B … 1 (Moves only within the same 2-cycle)</li>
<li>S -> C … 5 (Can move to a different 2-cycle)</li>
<li>A -> S … 7 (Can move to a different 2-cycle)</li>
<li>A -> A … 3 (Moves only within the same 2-cycle. Moving to the same block in the same 2-cycle does not make any sense.)</li>
<li>A -> B … 2 (Moves only within the same 2-cycle)</li>
<li>A -> C … 5 (Can move to a different 2-cycle)</li>
<li>B -> S … 3 (Can move to a different 2-cycle)</li>
<li>B -> A … 5 (More efficient to go through S)</li>
<li>B -> B … 4 (More efficient to go through S)</li>
<li>B -> C … 1 (Moves only within the same 2-cycle)</li>
<li>C -> S … 1 (Moves only within the same 2-cycle)</li>
<li>C -> A … (More efficient to go through S)</li>
<li>C -> B … (More efficient to go through S)</li>
<li>C -> C … (More efficient to go through S)</li></ul></li>
<li>The transition cost in the block is 38 for A, 3 for B, and 1 for C, for a total of 42</li>
<li>We tried to make a unit of permutations (containing three S's and one A, B, and C, which can transition to another unit with minimum cost) using these. For example, if we make a unit such that A -> S -> C -> S -> B -> S -> (next A), the cost is 42 + 7 + 5 + 1 + 1 + 3 + 2 = 61, and if we connect 40 units of these, we get 61 x 40 = 2440. The structure of our 2440/2430 solution looked like this.</li>
<li>Analyzing the 2430 solutions using 🌟 in a way that replaces two Ss with 82xxxxx, we find that while the cost of one A -> S is reduced from 7 to 1, the result is an improvement of -5 per 🌟, since we can no longer enter A at cost 2 elsewhere.</li>
<li>The key to improving this and getting to -6 per 🌟 was to use 🌟 as more than two letters. We use 🌟 at the end of the A Block to reduce the cost of A -> S from 7 to 1 as shown in the figure below. A single 🌟 is used as three characters: 7, 6, and 1.<br>
<img src="https://user-images.githubusercontent.com/8179588/149606773-bd730ca0-8356-461a-bae2-aca888fe8b54.png" alt="image"></li>
<li>In this usage, the last permutation of the A Block using 🌟 belongs to the 2-cycle of the transition destination S, so the last permutation of the original A Block has to be used somewhere. We note that if the A Block with 🌟 is from 1|2xyz<strong>ab</strong> (in the notation in <a href="http://www.gregegan.net/SCIENCE/Superpermutations/Superpermutations.html#TCG" target="_blank">the Greg Egan's page</a>), then S is from 1|2xyz<strong>ba</strong>, and the two-cycles are related by a swap of the last two letters. Therefore, if we use 🌟 in the same way for the A Block of 1|2xyz<strong>ba</strong>, the last permutation will be swapped among the A Blocks using 🌟, and there will be no remaining permutations. We want A -> S to be a transition between different 2-cycles, so when we use 🌟 for this pairing, we make sure that each is assigned to a different string. In summary, the procedure is as follows.<ul>
<li>Assign A Block of 1|2xyzab and 1|2xyzba to different strings</li>
<li>Swap the last permutation of the A Blocks between the two strings</li>
<li>Use 🌟 in the form 2xyza182xyzba and 2xyzb182xyzab respectively</li></ul></li>
<li>Using this, we can achieve -6 improvement per single 🌟, and reach the 2428 solution.</li>
</ul>
<p>Honestly, we haven't proved the optimality of 2428. I tried to use Concorde on some relaxation problems, but I could not come up with a formulation for unconstrained use of 🌟.</p>
<p><a href="https://www.kaggle.com/atamazian" target="_blank">@atamazian</a> Thanks!</p>
<p><a href="https://www.kaggle.com/niekvanderzwaag" target="_blank">@niekvanderzwaag</a> Thanks!</p>
<p><a href="https://www.kaggle.com/aayushpoddar" target="_blank">@aayushpoddar</a> Thanks!</p>
<p><a href="https://www.kaggle.com/atmguille" target="_blank">@atmguille</a> Thanks! Sorry for the delay, but I've added it. It's a bit long, but I hope you'll read it.</p>
<p><a href="https://www.kaggle.com/solverworld" target="_blank">@solverworld</a> I'm sorry I left out the important part. I've added it now!</p>
<p><a href="https://www.kaggle.com/zyu2017" target="_blank">@zyu2017</a> Thank you for explaining 😄. I think our solution is almost the same as yours.</p>
<p><a href="https://www.kaggle.com/robertking0" target="_blank">@robertking0</a> Thanks! I hope the solution to 2428 will be interesting for you too.</p>
<p>I'm sorry for the delay 🙏, but I've also added the 2428 solution.</p> |
Santa 2021 - The Merry Movie Montage | 4th place solution by hand | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2021 - The Merry Movie Montage
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I have finished editing the code. <br>
The final version is <a href="https://www.kaggle.com/cashfeg/4th-place-solution-code" target="_blank">here</a>. </p>
<p>Thanks for the fun competition. The Santa Competition for new-year holidays is my pleasure and I am pleased that this tradition has continued. </p>
<p>My solution is made by hand. I used a computer to simply enumerate. </p>
<h2>2440 without wildcard</h2>
<p>In this problem, we need to construct 3 strings such that </p>
<ul>
<li>Every permutation (5040 in total) of [1, 2, 3, 4, 5, 6, 7] is contained in at least 1 string. </li>
<li>Every permutation in the form of 12xxxxx (120 in total) is contained in all strings. <br>
(1: Mr. Santa, 2: Ms. Santa)</li>
</ul>
<p>The second condition seems to break the symmetry, but in reality, because this break is so large, there is room to build a solution by symmetry. </p>
<p>The key idea for me is that the 120 permutations of the form 12xxxxx are not wasted. <br>
For example, <br>
<code>1273456 1237456 1234756 1234576 1234567</code> <br>
is not only 5 permutations but also part of a 2-cycle: <br>
<code>7 2345612 7 3456123 7 4561234 7 5612345 7 6123456 7 1234561 (7 2345612)</code></p>
<p>By looking for a good string (part of a 2-cycle) that does not overlap with these, I found strings like these:<br>
<code>1 2345672 1 3456723 1 4567234 1 5672345 1 6723456 1 7234567 (1 2345672)</code><br>
By removing the duplicates with 12xxxxx, we obtain <br>
<code>345672 1 3456723 1 4567234 1 5672345 1 6723456 1 723456</code> (length 45) <br>
If we fix 1 and 2, there are 5! = 120 sequences like this, so a simple assignment will create 3 strings, each consisting of 40 such sequences and all 120 12xxxxx. <br>
The length is 45 * 40 + 7 * 120 = 2640. Clearly, we can combine 1234567 and 3456721…, and so on, so the length is now 2440. <br>
More precisely, as a 12xxxxx can be part of an important sequence, we need to be careful when combining them. But that's not a critical problem because all 3 strings contain all 12xxxxx. </p>
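<p>A small Python sketch (my own illustration) that builds this 45-character block for one ordering of 34567 and counts the permutations it contains:</p>
<pre><code>def block(tail):                      # tail is a permutation of "34567", e.g. "34567"
    d = "2" + tail                    # the cyclic word "234567..." that the block walks through
    dd = d + d
    full = "".join("1" + dd[i:i + 7] for i in range(6))   # 48 characters
    return full[2:-1]                 # drop the leading "12" and the trailing character -> 45

def perms_in(s):
    return {s[i:i + 7] for i in range(len(s) - 6) if set(s[i:i + 7]) == set("1234567")}

b = block("34567")
assert len(b) == 45
print(len(perms_in(b)))   # 35; the 2-cycle's remaining permutations (1234567 and its
                          # rotations) come from the 12xxxxx parts, as described above
</code></pre>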
<p>This solution by <a href="https://www.kaggle.com/jpmiller" target="_blank">@jpmiller</a> seems similar. <br>
<a href="https://www.kaggle.com/c/santa-2021/discussion/300519" target="_blank">https://www.kaggle.com/c/santa-2021/discussion/300519</a></p>
<h2>2440-> 2428 with wildcard</h2>
<p>If we understand the structure of the 2440 solution, it's just a puzzle. <br>
When the leaderboard is updated, it's time to start a new race. Think without sleeping. I did this twice (2430 -> 2429 -> 2428).</p>
<h3>2430</h3>
<p>We start from the 2440 solution, which consists of 40 sequences of the following form: <br>
<code>12 345672 1 3456723 1 4567234 1 5672345 1 6723456 1 723456</code> (length 47) <br>
and 80 12xxxxx. <br>
We should use the almighty to produce a 12xxxxx, because producing it otherwise is inefficient; so where to put the almighty (denoted by 8) should be: <br>
<code>12 345672 1 3456723 1 4567234 1 5672345 1 6723456 1 823456 7</code> (length 48) <br>
We added 1 character and one 1234567 is newly included, so seemingly this reduces length by 6, <br>
but no, because we have 1234567 twice. So we add one more character like this: <br>
<code>127 345672 1 3456723 1 4567234 1 5672345 1 6723456 1 823456 7</code> (length 49) <br>
thus we could reduce 5 characters by 1 almighty. </p>
<h3>2429</h3>
<p>Take 2 similar sequences of length 48 with duplicates, such as the ones for 1234567 and 1234576, <br>
<code>12 345672 1 3456723 1 4567234 1 5672345 1 6723456 1 823456 7</code> (length 48) <br>
<code>12 345762 1 3457623 1 4576234 1 5762345 1 7623457 1 823457 6</code> (length 48) <br>
and swap the last two characters.<br>
<code>12 345672 1 3456723 1 4567234 1 5672345 1 6723456 1 823457 6</code> (length 48) <br>
<code>12 345762 1 3457623 1 4576234 1 5762345 1 7623457 1 823456 7</code> (length 48) </p>
<p>If we join these two sequences and insert one character, we obtain <br>
<code>127 345672 1 3456723 1 4567234 1 5672345 1 6723456 1 823457 6</code> (length 49) <br>
<code>2 1 3457623 1 4576234 1 5762345 1 7623457 1 823456 7</code> (length 41) <br>
This reduces 11 characters by 2 almighty. </p>
<h3>2428</h3>
<p>We can put these two sequences in different strings. <br>
<code>12 345672 1 3456723 1 4567234 1 5672345 1 6723456 1 823457 6</code> (length 48) <br>
<code>12 345762 1 3457623 1 4576234 1 5762345 1 7623457 1 823456 7</code> (length 48) </p> |
Santa 2021 - The Merry Movie Montage | A MILP Journey, part 2: 2440 solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2021 - The Merry Movie Montage
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Following up on <a href="https://www.kaggle.com/c/santa-2021/discussion/300507" target="_blank">https://www.kaggle.com/c/santa-2021/discussion/300507</a> I will now describe how a MILP model was used to find a 2440 solution.</p>
<h1>A first model</h1>
<p>The first evolution of the model was to explicitly represent the duplicated mandatory permutations. I also removed all the 7-edges and replaced them by 2 binary variables per node.</p>
<p>The variables are:</p>
<ul>
<li>next(p1, p2) is an integer variable whose value is the number of times p1 is immediately followed by p2 in the strings of the solution to the problem.<ul>
<li>start(p) is one if p is preceded by a 7-edge, whatever that 7-edge is</li>
<li>end(p) is one if p is followed by a 7-edge, whatever that 7-edge is</li></ul></li>
</ul>
<p>In order to make the problem easier to solve I enforced that nodes in my new graph were visited exactly once. As a result I can no longer remove arcs as I did for the lower bound model, because a path that goes from p1 to p2 will not necessarily visit an extra node p3 with cost(p1, p3) + cost(p3, p2) = cost(p1, p2) when p3 is already visited by another path.</p>
<p>Another restriction was to remove 4-arcs, 5-arcs, and 6-arcs, as none of them appeared in the lower bound model solution. This drastically reduces the size of the model.</p>
<p>The constraints are simple:</p>
<ul>
<li>flow conservation: the sum over p1 of next(p1, p) + start(p) equals the sum over p2 of next(p, p2) + end(p), for every node p</li>
<li>subtour elimination: I reused the same constraints as in the lower bound model. I later tried the standard subtour elimination constraints, but to my surprise that version of the MILP took longer to solve.</li>
</ul>
<p>The objective is the same as before: the sum of the costs times the next(p1, p2) variables, where the cost of 7-edges is set to 7 minus epsilon (epsilon = 0.01).</p>
<p>Solving this model yields again a solution close to 7320. I then wrote a little cycle extraction routine that starts from a nonzero next(p1, p2) variable, then looks for which variable next(p2, p3) is nonzero, etc., until it comes back. The hope was to get 3 cycles of equal length, but this did not happen: many more cycles were found. My subtour elimination constraints were not strong enough.</p>
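<p>A minimal sketch of such a cycle extraction routine, assuming the solved values are available in a dictionary mapping (p1, p2) pairs to the value of next(p1, p2) (the names here are illustrative, not the original code):</p>
<pre><code>def extract_cycles(next_values):
    # next_values: dict mapping (p1, p2) -> solved value of next(p1, p2)
    succ = {p1: p2 for (p1, p2), v in next_values.items() if v > 0.5}
    cycles, seen = [], set()
    for start in succ:
        if start in seen:
            continue
        cycle, node = [], start
        while node is not None and node not in seen:
            seen.add(node)
            cycle.append(node)
            node = succ.get(node)  # None when the chain leaves via an end variable
        cycles.append(cycle)
    return cycles
</code></pre>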
<h1>A better model</h1>
<p>I then added a variant of Miller–Tucker–Zemlin (MTZ) constraints to eliminate all subtours. The difference is that I have implicit arcs via the start and end variables. Another difference is that we look for 3 paths, not one. For this I add one variable u(p) for every node p. It represents the rank of p in the path that contains p. The constraints are:</p>
<p>u[i] - u[j] + N3 * next(i,j) <= N3 - 1 for all i,j</p>
<p>u[i] <= (N3 - 1) * (1 - start_perm[i]) for all i</p>
<p>u[i] >= (1 - start_perm[i]) for all i</p>
<p>where N3 is the number of permutations divided by 3.</p>
<p>The first one is MTZ, the other 2 are saying that u[p] is zero if and only if start(p) is one.</p>
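<p>To make the constraint structure concrete, here is a small sketch of these constraints in PuLP on a toy instance (6 fake nodes forming 3 paths of N3 = 2 nodes; PuLP is just one possible modeling library and the variable names are illustrative, not the author's original code):</p>
<pre><code>import pulp

# Toy illustration of the constraint syntax only.
perms = ["p1", "p2", "p3", "p4", "p5", "p6"]
N3 = len(perms) // 3  # nodes per path

prob = pulp.LpProblem("santa_mtz_sketch", pulp.LpMinimize)
arcs = [(i, j) for i in perms for j in perms if i != j]
next_var = pulp.LpVariable.dicts("next", arcs, cat="Binary")
start_perm = pulp.LpVariable.dicts("start", perms, cat="Binary")
u = pulp.LpVariable.dicts("u", perms, lowBound=0, upBound=N3 - 1)

# MTZ-style ranks: if next(i, j) is selected, then u[j] >= u[i] + 1.
for (i, j) in arcs:
    prob += u[i] - u[j] + N3 * next_var[(i, j)] <= N3 - 1

# u[i] is zero if and only if start(i) is one.
for i in perms:
    prob += u[i] <= (N3 - 1) * (1 - start_perm[i])
    prob += u[i] >= 1 - start_perm[i]
</code></pre>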
<p>A last twist is to use the known upper bound of 7320 as a cutoff. It helps the solver during its branch and bound: any node of cost higher than the cutoff is pruned.</p>
<p>This model was better, with fewer short cycles. But it produced a lot of cycles of a similar nature. Here is one:</p>
<p>123456712345672134567231456723415672345167234561723456</p>
<p>The issue is that the mandatory 1234567 is duplicated, which is not what we want. We want duplicates of the same mandatory permutations to appear in 3 different strings. For this I had to add a graph coloring piece to the model.</p>
<p>Other cycles were of the form</p>
<p>12345672134567231456723415672345167234561723456</p>
<p>These are 2-cycles for a mandatory minus 6 permutations. The missing 6 permutations are the rotations of the mandatory.</p>
<h1>Graph coloring</h1>
<p>I added 3 binary variables per node, class(p, k) with k=1,2,3 to represent which of the final 3 strings p belongs to.</p>
<p>The constraints state that a node can only have one class, and that the number of nodes having a given class is N3. They also state that if next(p1, p2) is one, then class(p1,k) == class(p2,k) for all k.</p>
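<p>Continuing the toy PuLP sketch above, the coloring piece could be linearized roughly like this (again only an illustration of the constraint structure, not the original code; the per-class count equals N3 as in the real model):</p>
<pre><code># Three classes, one per final string.
K = [1, 2, 3]
class_var = pulp.LpVariable.dicts("class", [(p, k) for p in perms for k in K], cat="Binary")

for p in perms:
    prob += pulp.lpSum(class_var[(p, k)] for k in K) == 1        # exactly one class per node
for k in K:
    prob += pulp.lpSum(class_var[(p, k)] for p in perms) == N3   # N3 nodes per class

# If arc (p1, p2) is used, both endpoints must share the same class.
for (p1, p2) in arcs:
    for k in K:
        prob += class_var[(p1, k)] - class_var[(p2, k)] <= 1 - next_var[(p1, p2)]
        prob += class_var[(p2, k)] - class_var[(p1, k)] <= 1 - next_var[(p1, p2)]
</code></pre>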
<p>Adding these constraints made the problem very hard to solve. No solution was found after a day of computation.</p>
<p>I studied the previous model solution, and found few things:</p>
<ul>
<li>All nodes with a start were mandatory permutations. This was easy to enforce with an additional constraint.</li>
<li>All nodes with an end were mandatory permutations or were of the form 1x2xxxx. Same, easy to enforce via constraints.</li>
<li>At least one copy of each mandatory had both a start and an end. I decided to drop one duplicate for each mandatory. This reduced the size of the problem further.</li>
<li>The truncated 2-cycles for all mandatory cover all permutations that are not a rotation of a mandatory. I decided to enforce this by setting to 1 all the arcs appearing in these truncated 2-cycles.</li>
</ul>
<p>The resulting model solves in about 5 minutes, yielding a number of cycles. When the cycles are grouped by class and concatenated we get 3 strings of length 2160. Each of these strings contains 80 mandatory permutations. By appending the 40 missing mandatory permutations we get 3 strings of length 2440 that meet all the original problem constraints.</p>
<p>Code is now available at <a href="https://github.com/jfpuget/kaggle_santa_21" target="_blank">https://github.com/jfpuget/kaggle_santa_21</a></p>
Santa 2021 - The Merry Movie Montage | A journey to get to 2428 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2021 - The Merry Movie Montage
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I meant: if you have already decided what should be the neighborhood of the wild card like ABCDEF*PQRSTU and fix these values, then you can treat this length 13 part as just one node. <br>
It is hopefully better explained in the notebook (which is now linked above).</p>
<p>This was a fun competition. I am sure most of the teams observed similar behavior about the shortest solutions.<br>
Just in case it is helpful, I have also posted the code <a href="https://www.kaggle.com/szabo7zoltan/santa-2021-getting-to-2428-in-four-steps" target="_blank">https://www.kaggle.com/szabo7zoltan/santa-2021-getting-to-2428-in-four-steps</a> that broke down my approach into four parts.</p>
<p>A few of the steps that helped me along the way:</p>
<p>1, Starting with the 5913 superpermutation to get to the 2440 (no wildcard solution) with LKH.</p>
<p>2, Investigating the 2440 no wildcard solution to understand how it works.<br>
This also helps in guessing where the wildcards would have maximal benefits.</p>
<p>3, Freezing the neighborhood of the wildcard and using LKH again to get to 2428. (This last part can probably be done by more elementary methods (simple mathematics), but LKH also works.)</p>
<p>Thanks! It was a nice competition</p> |
Santa 2021 - The Merry Movie Montage | Get to 2430 with the Power of Randomization! | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2021 - The Merry Movie Montage
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks Kaggle for hosting such an enjoyable Christmas competition - despite not achieving the perfect score, <a href="https://www.kaggle.com/product-feedback/299988" target="_blank">being banned twice</a> (causing multiple heart attacks), and the tension between the participants that exploded in the last week, I can overall say that it was a pleasure to sit there with pen and paper, use a state-of-the-art TSP solver, harness the power of randomization, and try really hard to get to 2428 until the end.<br>
In this competition I also won my first silver medal, becoming a competition expert: this gives me a lot of motivation to compete even more this year!</p>
<p>My journey in this competition can basically be broken down into four phases:</p>
<ol>
<li><p>For the first week or two, I basically did nothing but run public notebooks for <a href="https://www.kaggle.com/cdeotte/santa-2021-tsp-baseline-2500" target="_blank">TSP conversion</a>, <a href="https://www.kaggle.com/yosshi999/wildcard-postprocessing-using-dynamic-programming" target="_blank">wildcard placement</a>, and <a href="https://www.kaggle.com/kostyaatarik/permutations-rebalancing" target="_blank">permutation rebalancing</a> (big thanks to the authors!). By making small changes and doing a lot of experiments, I was able to get a LB of around 2490. </p></li>
<li><p>The next step was trying different starting superpermutations in Chris's notebook after reading <a href="https://www.gregegan.net/SCIENCE/Superpermutations/Superpermutations.html" target="_blank">Greg Egan's page</a> and understanding the reported strings were very different in nature. One morning, I found out that the permutation of length 5913 worked really well and following basically the same procedure of the previous point, I managed to get to LB 2481. And as soon as I did that, a <a href="https://www.kaggle.com/ks2019/santa-baseline-2481" target="_blank">crucial notebook</a> for the competition with the same score was released. </p></li>
<li><p>This stage was really a race against time. As soon as the previously mentioned notebook was released, I immediately knew it was a bomb and would move the LB a lot. Handcrafting a baseline with a solid mathematical structure was the key to breaking the 2480 barrier and going straight below 2450, a jump that was very visible at the beginning of the competition since very few people were in this range. Things rapidly escalated and in a few days I was able to get a perfect 2440 non-wildcard solution. This was done by going back and forth between <a href="https://www.kaggle.com/kostyaatarik/minmax-ctsp" target="_blank">MinMax CTSP</a> and <a href="https://www.kaggle.com/kostyaatarik/permutations-rebalancing" target="_blank">permutation rebalancing</a>. Starting from the 2481 solution, I noticed that when CTSP was stuck in a local minimum, it was a good idea to run the permutation rebalancing kernel, increasing randomization by setting the INITIAL_TOUR_FRACTION parameter and also changing the way permutations were rebalanced (the function <a href="https://www.kaggle.com/kostyaatarik/permutations-rebalancing?scriptVersionId=80839642&cellId=8" target="_blank">rebalance_perms</a> assumes a certain order in popping permutations; it is not the only one), then go back to CTSP for another run of joint optimization. With standard wildcard positioning code, however, the 2440 solution improved only to LB 2438.</p></li>
<li><p>The last phase was the most fun and creative: I spent some days with pen and paper analyzing the solutions and thinking about a nice way to use wildcards. I really thought I had made it after I wrote a <a href="https://www.kaggle.com/mviola/santa-2021-randomized-wildcard-postprocessing" target="_blank">Randomized Wildcard Postprocessing</a> script, but it came up a little short. The procedure is described in the notebook, so feel free to check it in more detail there, but the idea is just to take an optimal path and its associated distance matrix and, many times over:</p></li>
</ol>
<ul>
<li>Randomly select two edges representing the transition from permutations of the form 1a2bcde to mandatory ones 12bcdea and set their cost to a negative value such as -1 (a minimal sketch of this step is given after this list).<br>
<a href="https://postimg.cc/hQ1CGkF7" target="_blank"><img src="https://i.postimg.cc/4yrjWGBw/distance-matrix.jpg" alt="distance-matrix.jpg"></a></li>
<li>Reoptimize with LKH and find a new optimal tour that is very likely to include these transitions and that can later be optimized with a wildcard for an improvement of 6 characters each.<br>
<a href="https://postimg.cc/JDD0M8kP" target="_blank"><img src="https://i.postimg.cc/k4wbNXX3/path.jpg" alt="path.jpg"></a><br>
While I had no guarantee that this randomized algorithm would work well in general, I noticed that many runs could improve the score a lot, and I could transform my set of 2440 strings into a set of 2442 strings where each allows a gain of 6 for each wildcard, for a final LB of 2430.<br>
After making sure I couldn't do better by brute-forcing all possible choices, I spent the last three weeks brainstorming and trying to improve upon this idea with other wildcard bridges and joint optimization on the three strings with CTSP, but I was not able to perform well enough to get to 2428 or 2429.</li>
</ul>
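<p>A minimal, self-contained sketch of the edge-cost trick from the list above (toy permutation list and cost matrix; in practice dist would be the full permutation-to-permutation cost matrix fed to LKH, and the names here are illustrative, not the original code):</p>
<pre><code>import random
import numpy as np

perms = ["1324567", "1245673", "1234567"]    # toy permutation list
dist = np.full((len(perms), len(perms)), 7)  # toy cost matrix

def is_candidate(a, b):
    # a of the form 1a2bcde, b the corresponding mandatory 12bcdea
    return a[0] == "1" and a[2] == "2" and b == "1" + a[2:] + a[1]

candidates = [(i, j) for i, a in enumerate(perms) for j, b in enumerate(perms)
              if i != j and is_candidate(a, b)]
for i, j in random.sample(candidates, k=min(2, len(candidates))):
    dist[i, j] = -1   # make the transition (almost) free so LKH is pushed to use it
# ...then rewrite the LKH problem file with this matrix and re-optimize.
</code></pre>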
<p>Thank you for spending time reading my solution summary, I can’t wait to now understand what I missed and other approaches: 2428 has been my new recurring thought hahaha…<br>
Happy (late) new year to everyone and happy Kaggling!</p> |
Santa 2021 - The Merry Movie Montage | LKH and Wildcards gets to 2430 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2021 - The Merry Movie Montage
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I hope I <code>triple</code> quoted all the <code>*</code> so that things didn't come out in italics</p>
<p>I haven't seen anyone describe using the TSP solver LKH with wildcards in the way we did, so I will give a short description of our approach.</p>
<p>Thanks to Kaggle for running a great competition and for the other competitors who provided much understanding along the way. Thanks to my partner <a href="https://www.kaggle.com/vkonstantakos" target="_blank">@vkonstantakos</a> for providing the 2440 solution I couldn't get to. </p>
<p>Spoiler alert: we didn't get to 2428</p>
<p>Consider each string of the solution as a group of permutations (a list in python) in the standard way, and compute the cost of each permutation to another in the usual way using distance(a,b).</p>
<pre><code>def distance(a, b):
    # number of characters to append to a to get a string containing b
    # Assume both a, b are length N (N = 7, assumed to be defined globally)
    for n in range(N + 1):
        if a[n:] == b[:N - n]:
            return n
</code></pre>
<p>Running the LKH solver on this group using this distance function will produce the shortest string for this given group of permutations. Note that this assumes that you have already partitioned the permutations into the 3 strings properly.</p>
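<p>For illustration, the full asymmetric cost matrix for a group could be built like this (a small sketch that repeats the distance function above for self-containment; the group shown is a toy one):</p>
<pre><code>import numpy as np

N = 7

def distance(a, b):
    # number of characters to append to a to get a string containing b
    for n in range(N + 1):
        if a[n:] == b[:N - n]:
            return n

group = ["1234567", "2345671", "7123456"]   # toy group of permutations
cost = np.array([[distance(a, b) for b in group] for a in group])
print(cost)   # cost[i][j] = characters needed to follow group[i] with group[j]
</code></pre>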
<p>Now, suppose we want to add wildcards into our string. We did this as follows:</p>
<ol>
<li>Pick a wildcard pattern, such as <code>*234567</code> of any length, although as we will see, the wildcard should be not more than 7 characters from the beginning or end of the string.</li>
<li>Generate all the permutations that this string represents. For the example in 1, it would be a single string <code>1234567</code>. For a longer string, say, <code>s=1*234567</code>, it would be 1723456 and 1234567. Note that there can be at most len(s)-6 permutations, but you might not have that many because not all length-7 windows are a permutation (see the short expansion sketch after this list).</li>
<li>Remove from your group all these permutations from step 2.</li>
<li>Add to your group the wildcard pattern from step 1.</li>
<li>Use a modified distance function wdistance (see way below), which calculates the distance from a permutation to another, considering the presence of wildcards. Example: <code>wdistance('1723456','*234567') is 1</code>.</li>
<li>Now you can run LKH with this modified group and distance function. The answer (assuming LKH returns the optimal answer) is the shortest path that can be obtained when the wildcard takes on one value for the node immediately prior to the wildcard perm and a (possibly) different value for the node immediately following it, but not beyond those. Put another way, if the solution obtained is:<br>
<code>p1->p2->p3->p_with_wildcard->p4->p5</code><br>
then p3 and p4 can use any value for the wildcard, but p1, p2, and p3 must all use the same value, and p4 and p5 must use the same value. So you can see that this is not guaranteed to be the best possible use of the wildcard; but if you use enough digits around the wildcard you will eventually be guaranteed to get the best answer. For example, searching over all <code><6-digits>*<6-digits></code> strings will eventually get the best answer (for a single wildcard).</li>
</ol>
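<p>A small sketch of step 2, expanding a wildcard pattern into the set of permutations it represents (assuming the 7 symbols '1'-'7'; illustrative code, not the original):</p>
<pre><code>N = 7
SYMBOLS = "1234567"

def expand_wildcard(s):
    # return the set of permutations covered by a string containing '*' wildcards
    perms = set()
    for i in range(len(s) - N + 1):
        window = s[i:i + N]
        candidates = [""]
        for ch in window:
            fills = SYMBOLS if ch == "*" else ch
            candidates = [c + f for c in candidates for f in fills]
        perms.update(c for c in candidates if sorted(c) == sorted(SYMBOLS))
    return perms

print(expand_wildcard("1*234567"))   # {'1723456', '1234567'}
</code></pre>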
<p>This is our basic method of determining wildcards; as you can see it does not involve any real understanding of the underlying patterns others have described in detail. It takes about 3-4 minutes to run LKH on a group and get what is a reasonably reliable optimal answer. We checked this by running for longer times (hours) over multiple runs and not seeing any improvement. You can also speed up the search by using an initial tour close to the original optimal (non-wildcard) string.</p>
<p>Here are the steps we took using this method</p>
<ol>
<li>We realized that the mandatory perms (beginning with '12' for us) seemed to be the most costly, so we started putting wildcards in those. For example, <code>*234567, 12*4567</code>. After a few attempts and a little thought, it became apparent that the wildcard only makes sense replacing the 1 or the 2.</li>
<li>Since we knew that we should be saving 5 or (hopefully) 6 per wildcard, the wildcard perm should follow another perm with a cost of 1. We called this the target for that perm. There is only 1 target for a perm once we pick the wildcard position. For example, target('1234567',wc=0) is '1723456'. target('1234567',wc=1) is '2173456', where wc is the index of the wildcard placement.</li>
<li>A few experiments showed that wc=0 fared much better than wc=1.</li>
<li>Some staring at solutions revealed a pattern of 36 permutations that started with a cost of 7 and ended at a target at a high frequency. This is the famous 42-perm cycle with the mandatories and their rotations removed. The targets for wc=0 generally seemed to be at the end of a 36 perm section and just before a cost of 7 perm. This appeared to be a good place to put the <code>*234567</code> perm following its target, because the following cost of 7 would not be impacted. It looked like it would save 6 points. However, as <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> pointed out in his solution, that perm was ALWAYS at the start of the section. When we used that wildcard in our LKH solution, the moving of the perm to the end broke the pattern, losing 1 point, so the most any wildcard could save (and we tried them all) was 5 points.</li>
<li>After finding a large number of wildcards that produced 2435 when used singly, we started combining them. While not every pair of 2435 solutions together produced 2430, enough did that we quickly found a pair for each string that produced 2430.</li>
<li>After finding 2430, we tried lots and lots of things to improve our score. First we tried to get to 2434 with a single wildcard: longer and longer wildcard strings <code>abcdef*ghijk</code> etc. We began to realize that maybe 2435 is the best you can do with 1 wildcard, but that 2 could do 2428 somehow. And we ran out of time….</li>
</ol>
<pre><code>def match_wc(x1, x2):
    # do x1 and x2 match, using * as a wildcard in either of them?
    assert len(x1) == len(x2)
    for a, b in zip(x1, x2):
        if a == b or a == '*' or b == '*':
            continue
        return False
    return True

def wdistance(string, b):
    # count the number of characters needed to have string followed by perm b,
    # where string or b may contain the wildcard *.
    # This is like distance but with wildcards, and string can be longer than 7.
    # The distance can be 0 if they already match.
    # N = 7 is assumed to be defined globally, as for distance() above.
    a = string[-7:]  # take the last 7 characters
    for n in range(N + 1):
        if match_wc(a[n:], b[:N - n]):
            return len(b) - 7 + n
</code></pre>
<p>I think I just slept a little less…</p> |
Santa 2021 - The Merry Movie Montage | My journey + Constrained TSP to get from 2440 to 2430 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2021 - The Merry Movie Montage
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First thanks to all organizers and participants. I really enjoyed this optimization competition. Hoping to see these more often.</p>
<p>My goal during this competition was to get a medal. Here's a quick account of my journey and maybe a somewhat novel way to approach the wildcard problem. I didn't want to put too much effort into the theory of superpermutations, though, and was hoping that my optimization experience would be enough.</p>
<h1>Getting to 2440</h1>
<p>Having some OR background but no commercial solver available, I formulated a preliminary MILP model using GAMS + Gurobi/CPLEX and sent it to the NEOS server, but unfortunately ran into memory issues.</p>
<p>Luckily <a href="https://www.kaggle.com/kostyaatarik" target="_blank">@kostyaatarik</a> posted a nice <a href="https://www.kaggle.com/kostyaatarik/minmax-ctsp" target="_blank">solution with CTSP</a>, which proved very useful since it had the potential to eventually reach 2440. Alternating between CTSP and rebalancing got me down to around 2445. From there I just let the model run for several days to get down to 2440.</p>
<p>Adding the <a href="https://www.kaggle.com/yosshi999/wildcard-postprocessing-using-dynamic-programming" target="_blank">wildcard dynamic programming </a> by <a href="https://www.kaggle.com/yosshi999" target="_blank">@yosshi999</a> got me down to 2438, and I was satisfied.</p>
<p>Until a few days ago, when I saw that even bronze was in danger and I needed to optimize further.</p>
<h1>Using constrained traveling salesman to get lower score with wildcards</h1>
<p>The solution rests on the following ideas/assumptions:</p>
<ul>
<li>Optimize individual strings;</li>
<li>Have wildcard options that can be used with the constraint of using only two wildcards;</li>
<li>The distance from a wildcard to its non-wildcard permutation would be 0, to force the solver to replace the permutation with its wildcard;</li>
<li>The wildcard would be used only in the first position, e.g. *234567, to simplify calculations;</li>
<li>Only mandatory permutations would be used for wildcards, as these seemed to be adding the most length to the superpermutation;</li>
</ul>
<p>I decided that the <a href="https://www.kaggle.com/chaudharypriyanshu/st-21-a-perm-rebalancing-wildcard-optimization" target="_blank">permutation rebalancing notebook</a> by <a href="https://www.kaggle.com/chaudharypriyanshu" target="_blank">@chaudharypriyanshu</a> would be a good place to start, altering it to incorporate wildcards with the constraint of using only two. Looking at the <a href="http://webhotel4.ruc.dk/~keld/research/LKH-3/" target="_blank">possible models of LKH-3</a>, I decided to go with ACVRP (Asymmetric Capacitated Vehicle Routing Problem), using a single vehicle, reducing it to ACTSP. Setting non-wildcard permutations to have a weight of 0, wildcards to have a weight of 1, and the vehicle to have capacity = 2, I ran the solution for several hours to obtain a 2430 solution just a couple of hours before the deadline, securing a medal spot. A lower score can probably be obtained by broadening some of the assumptions mentioned above.</p>
<p>The code is published in <a href="https://www.kaggle.com/taanieluleksin/wilcard-optimization-with-acvrp-up-to-2430" target="_blank">this notebook</a>.</p> |
Santa 2021 - The Merry Movie Montage | My solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2021 - The Merry Movie Montage
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank the organizer for another year of interesting problems, and all the other participants for making the competition very fun! </p>
<p>Since many participants have already shown their solutions and described their ideas in detail, with very beautiful pictures, I will briefly talk about what I did. The first week I watched as top1 rapidly improved their solutions. Since I do not know how to use libraries for optimization, I thought about a constructive solution. When I saw a score of 2440 or less on the leaderboard, I figured that the distance between the subsequences should be almost zero. So I began to greedily build a solution from a sequence and understood the idea of cycles, which was described many times on the forum. When I wrote the code to generate the 2440 solution, top1 had a result of 2429 (their solution: <a href="https://www.kaggle.com/skril31/generating-a-2428-solution-in-100-lines-of-code" target="_blank">https://www.kaggle.com/skril31/generating-a-2428-solution-in-100-lines-of-code</a>). I figured that each star should give plus 6 to the score and pretty quickly came up with the idea described in the solution <a href="https://www.kaggle.com/c/santa-2021/discussion/300509" target="_blank">https://www.kaggle.com/c/santa-2021/discussion/300509</a>. However, I had terrible code that even I did not understand well, and I had to manually rebuild the sequences all day to get 2428. All that day I was afraid that I would not be in time and someone would get 2428 faster than me; it made me a little nervous)</p>
<p>Thanks! <br>
Clear eyes, full hearts…</p> |
Santa 2021 - The Merry Movie Montage | Santa 5th Place Solution (2428) & Insights | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2021 - The Merry Movie Montage
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yes, I think that (understanding of the structure) made the difference between those who got to 2428 quickly vs those who took some time going from 2430 to 2428. About the notebook, I realized that when I saw many teams in the 2440s range (mostly 2446) within 1-2 days of the notebook's release. I went from 2481 to 2440 with a significantly different approach, so I didn't expect that while releasing the notebook. Congratulations on the silver, btw.</p>
<p>First of all, congratulations to everyone who achieved a decent score in this competition. I am happy to get a solo gold. I solved most of it on paper and then coded it. Here is my solution in detail:</p>
<h2>Name Conventions:</h2>
<p>There are 3 types of sequences:</p>
<ol>
<li>Mandatory (M) - sequences starting with '12' - (120)</li>
<li>Mandatory Relatives (R) - sequences containing '12' but not starting with it - (120*5)</li>
<li>Non Mandatory Non Relatives (N) - All other sequences - (4320)</li>
</ol>
<h2>Solution:</h2>
<h3>1. Proving lower bound = 2400 (without wildcard, Mathematically)</h3>
<p>Due to the additional constraint introduced in the problem, it was easy. The best length/sequence ratio we can have in a cyclic superpermutation with more than 8 sequences is 7/6 (six 1-edges followed by a 2-edge - 2-cycles). There will be 120*3 M in the solution. This additionally forces the solution to contain 120 * 5 * 3 R. So, our superpermutation must have these 120 * 5 * 2 additional duplicates. </p>
<h5>Let's calculate the lower bound</h5>
<ul>
<li>N - 4320 * 7/6 = 5040</li>
<li>R - 3 * 120 * 5 = 1800</li>
<li>M - 120 * 3 = 360</li>
<li>Total length = 7200</li>
<li>If we can split this cycle into 3 strings of equal length at 7-edges (if any), we get 2400 (the short snippet after this list just recomputes this arithmetic). But we have relaxed some constraints for the best case. Maybe adding those constraints back can get us to 2440 mathematically. </li>
</ul>
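<pre><code># Recomputing the lower-bound arithmetic from the list above.
n_contrib = 4320 * 7 // 6   # N sequences at the ideal 7/6 ratio -> 5040
r_contrib = 3 * 120 * 5     # three copies of each of the 120*5 relatives -> 1800
m_contrib = 120 * 3         # three copies of each mandatory -> 360

total = n_contrib + r_contrib + m_contrib
print(total, total // 3)    # 7200 2400
</code></pre>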
<h3>2. Getting to 2440</h3>
<ul>
<li>This is similar to what other participants did. I had an approximate count of N, R, M in mind and I didn't want to break N-N connections in 2-cycles at the start. This is maybe where I got lucky: if I remove all R from a 2-cycle and keep only one M, I get a permutation of length 47 for each cycle (let's call it B, the base permutation). Simply concatenating all these 120 cycles, I have a permutation of length 120 * 47 = 5640 which contains all N sequences. </li>
</ul>
<h5>Sample base permutation (B) of 47 length: 12345672134567231456723415672345167234561723456</h5>
<ul>
<li>Now, for M & R -> we can combine pairs of M sequences with swapped chars at the end positions: '12345671234576'. We can easily make these connections by appending the corresponding sequence at the start of each of the 120 base perms, which makes the length of each base perm 47+7 = 54. Total length = 120 * 54 = 6480.</li>
</ul>
<h5>Sample base permutation (B) of 54 length: 123457612345672134567231456723415672345167234561723456</h5>
<ul>
<li>The only sequence left to be added in each 2-cycle is #12#### (7123456). Now, let's add one M of length 7 with the only constraint that it ends with a '7'. Base perm length = 54 + 7 = 61. Total length 120*61 = 7320. </li>
</ul>
<h5>Sample base permutation (B) of 61 length: 1236547123457612345672134567231456723415672345167234561723456</h5>
<ul>
<li>Use 40 base permutations of length 61 in each substring to get 2440 (a small sanity check follows this list). I could prove it visually, but after <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> & others' confirmation, I was sure this is the lower bound. </li>
</ul>
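<p>A quick sanity check of this construction, using the sample strings above (verification code only, not part of the original solution):</p>
<pre><code>base47 = "12345672134567231456723415672345167234561723456"
base61 = "1236547123457612345672134567231456723415672345167234561723456"

def perms_in(s, n=7):
    # distinct length-n windows of s that are permutations of 1..n
    return {s[i:i + n] for i in range(len(s) - n + 1)
            if sorted(s[i:i + n]) == sorted("1234567")}

print(len(base47), len(base61))            # 47 61
print(len(perms_in(base47)), len(perms_in(base61)))
print(120 * 61, 120 * 61 // 3)             # 7320 2440
</code></pre>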
<h3>3. Getting to 2428</h3>
<ul>
<li>Once you know exactly how your solution is created, getting to 2428 is easy. To get to 2428, we can modify our permutation and add the corresponding sequence at the end instead of the start of the 47-length base permutation. Pairs (swapped at the last position) of such wildcard permutations (length 48) will cover all sequences contained in the base permutations of length 54. We miss R, but we have extra Ms to make up for it. Remember, all we need is to stack 2 M to save all Rs (12345671234576). We can stack 3 M for these special 6 wildcard cases (124356712345671234765).</li>
</ul>
<h5>Sample wildcard permutation (B) of 48 length: 12345672134567231456723415672345167234561*234576</h5>
<ul>
<li>Our solution size -> gets reduced from 54 to 48 (a saving of 6 in length). 6 wildcards - 3 pairs - a 2428 solution. </li>
</ul>
<h3>4. Why I think 2428 is optimal and we can save only 6 per wildcard in best solution</h3>
<p>Assuming we can only use wildcard by breaking and making connections</p>
<ul>
<li>We can't save R. R is duplicated on average 3 times. If we use wildcard, to save R at one place, it will still show up at other place. I mean technically we can save R. But, when we save R, we actually remove a redundant edge. </li>
<li>Touching base permutation of length 47 (except at the start or end) - Base permutation is saturated. There are only edges of length 1 or 2. (6 ones followed by 2)</li>
<li>Case 1 - Break and make 2 ones - No benefit - Best you will have 2 ones with new wildcard in place</li>
<li>Case 2 - Break and make 1 one and 1 two - Best you will have 2 ones with new wildcard in place. After 2 you will be followed by 6 ones in 2 cycle - Again you'll be able to save only 1 distance</li>
<li>Case 3 - Break and make 3 twos - Best you will have 2 ones with new wildcard in place - Same logic - you'll be able to save only 1 distance</li>
<li>At the start or end - here you're not followed by saturated connections - so here we have the best chance of getting benefits. We convert a 7-edge -> 1-edge with a wildcard. 7-edges can only exist for M (because of redundant R). </li>
<li>A saving of more than 6 is only possible if there are multiple consecutive edges >= 2, which is not possible here. If we observe the base permutation, the distance between the 7-edge and the 2-edge is 7, which makes it impossible to use the same wildcard to effectively reduce both edges.</li>
<li>This may not be a proper mathematical proof, but I see it holding true as long as we don't have a 7-edge and a >=2 edge close to each other. I tried to bring them close, only to realise that I would be distorting my base permutation (adding an extra penalty), which may give better wildcard benefits but increases my 2440 base sequence length.</li>
</ul>
<p>Code here: <a href="https://www.kaggle.com/ks2019/getting-2440-in-simpler-way?scriptVersionId=80946172" target="_blank">https://www.kaggle.com/ks2019/getting-2440-in-simpler-way?scriptVersionId=80946172</a></p>
<p>P.S. If anyone has additional ideas for mathematically proving 2428 as the lower bound, contact me. I would love to work on this together. </p>
Santa 2021 - The Merry Movie Montage | Saving 10 places using Wildcards… by Hand (and Goodbye!) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2021 - The Merry Movie Montage
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thank you <a href="https://www.kaggle.com/ducktile" target="_blank">@ducktile</a> – wouldn’t have reached here without your inputs, and inspiration!</p>
<p>I tried many things but was stuck at 2442; wildcards were saving me 2 points, so I was at 2440. Not even bronze medal level… And early on, I thought I had a chance for silver! </p>
<p>I tried many, many things… and was feeling like Edison: hundreds of ways that did not work, yet no light bulb moment… I decided to focus my energy and time on wildcards that could save me 10 more places (max saving of 12 and I was saving only 2) rather than optimising to 2440.</p>
<p>I went through my notes for the competition. Ducktile suggested [ <a href="https://www.kaggle.com/c/santa-2021/discussion/291608#1603089" target="_blank">LINK</a> ]</p>
<blockquote>
<p>I'm trying to compete using a handwritten solution</p>
</blockquote>
<p>I was in awe of her approach, never imagining that it was possible by hand; I assumed I would need a solid understanding of numbers and patterns which I did not have. As the coding methods left me high and dry, I finally realised I could work “by hand” too!!</p>
<p>As Ducktile advised [ <a href="https://www.kaggle.com/c/santa-2021/discussion/293787#1611934" target="_blank">LINK</a> ] </p>
<blockquote>
<p>“ I know where there are "bad" places with an expensive transition from permutation to permutation (cost 7). And I try to improve them by using the wild card.”</p>
</blockquote>
<p>Many many nights and no success yet. Then last Sunday (9th Jan), i had my light bulb moment</p>
<p>I broke each of the 3 strings into sub-parts at the expensive transitions (cost 6 or 7). I also included cost 6 as I could fix it later by making sure it came after the required final number/substring.</p>
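<p>Such expensive transitions can also be located automatically; a small illustrative sketch (not the method actually used here, which was done by hand):</p>
<pre><code>N = 7

def step_cost(a, b):
    # characters needed to append to permutation a so the result ends with permutation b
    for n in range(N + 1):
        if a[n:] == b[:N - n]:
            return n

def expensive_transitions(perm_list, threshold=6):
    # indices i where going from perm_list[i] to perm_list[i+1] costs >= threshold
    return [i for i in range(len(perm_list) - 1)
            if step_cost(perm_list[i], perm_list[i + 1]) >= threshold]

# toy example: the second transition costs 7
print(expensive_transitions(["1234567", "2345671", "7654321"]))   # [1]
</code></pre>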
<p>Then find matching patterns of 4 digits in the strings, such that one occurrence is in the last 4 digits of one sub-string and another near the first half. Like 5367 here:</p>
<p><a href="https://postimg.cc/PPSMnx5d" target="_blank"><img src="https://i.postimg.cc/t4jvPn0n/image.png" alt="image.png"></a></p>
<p>Result: a saving of 5. Do this 2 times in each string to save 10 places in each string.</p>
<p>As a result, I saved 10 places from 2442 to reach 2432… and a 60-something ranking.</p>
<p>I thought I might spend more effort to save two more places without using wildcards and 2 more using wildcards… but let it be…!</p>
<p><strong>… Good bye!!</strong></p>
<p>As I celebrated my discussion <a href="https://www.kaggle.com/c/petfinder-pawpularity-score/discussion/298758" target="_blank">GM achievement </a> some days back with my family, my wife joked that Kaggle was like a mistress, keeping me awake at night and making me glued to the screen for hours and hours. And unlike a human mistress, she couldn’t even take up the issue with Kaggle!</p>
<p>It was the dead of night (2-3 AM when I was finishing this) and a <a href="https://www.kaggle.com/c/santa-2021/discussion/298745#1638125" target="_blank">similar note from</a> <a href="https://www.kaggle.com/blankaf" target="_blank">@blankaf</a> hit me…</p>
<blockquote>
<p>And then somebody is wondering that "geeks" are considered the worst partners for life…</p>
</blockquote>
<p>Why was I Kaggle-ing so late at night? Kaggle is wonderful and I learnt more here than from many books and tutorials. That said, for me it is essential to spend more time with family … they have been more than supportive and I need to repay their patience and understanding! </p>
<p>Thanks for all the help, support and learning throughout my 3rd stint here at Kaggle!! 🙏🙌</p> |
Santa 2021 - The Merry Movie Montage | Towards a 2428 (pencil free) lower bound | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2021 - The Merry Movie Montage
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Correction: you don't need a 179521-node TSP, just a 44001-node one, because you need the full 7480 encoding only for 2 of the 24 segments of 220 (present in the 5280 single cycle); for the remaining 22 of them you just need a 6x220=1320 encoding, because you can only have 2 wildcards.</p>
<p>Well, you could use permutations from the set {*,1,2,3,4,5,6,7}, or you can simplify and allow repetitions using the set {*,1,2,8,8,8,8,8}: for example, instead of "1234567" you get "1288888", where on the road every 8 can match every 8 when placing the permutations. This model has only 42 classes of permutations {12xxxxx, 1x2xxxx, 1xx2xxx, … , xxxxx21}. While it looks much easier, the model is "all you need" to find the 2428 solution: it can find the optimal placement of a wildcard to get a 61-length road down to a 55-length one, for a saving of 6 per wildcard, in a pencil/thinking/observation-free manner (I used dynamic programming in C++; it took 1 second). Then, after you find the {*,1,2,8,8,8,8,8} 55-long 1-wildcard road, you can use simple code to replace the many 8s with 3,4,5,6,7; that netted me 256 (if I remember correctly) 55-long 1-wildcard roads with {*,1,2,3,4,5,6,7} that started like 1234567… (one of them is in your solution)<br>
To be precise, you start with a 5280-permutation cycle (of length 3x2440=7320) and attempt to insert 2 wildcards in order to get down to length 7308 (or below).<br>
A 7308 TSP lower bound directly translates to a 2428 lower bound for the full problem.<br>
With this type of simplification you still get 5280 permutations, but now in just 42 classes, so there are 120 repeating copies of each of 44 classes (we need to take 42+2, with additional 12xxxxx, to get a solution liftable to the full 5280 cycle). Now, instead of running a full cycle, you can take smaller parts, but large enough to have space for 2 wildcards to sit wherever they like.<br>
For the first example with 42 classes I selected 3 roads of permutations (wildcard-free length of 3x61=183), so a model length of 3x44 permutations (allowing for a distance between wildcards in the range 1-182), which translated into a 2025-node symmetric TSP model. This failed because it "didn't preserve the 2440".<br>
But now it gets interesting: I took the next step of using permutations from the set {*,1,2,8,9,9,9,9}, so instead of "1288888" one would get {"1289999","1298999","1299899","1299989","1299998"}. Now there are 42x5=210 classes, so the full 5280 cycle can be split into blocks of 220 (210 + the additional 2x5 for the repeated 12xxxxx needed for an easy lift to 5280).<br>
So my next model attempt is to fit 2 unrestricted wildcards into 220 permutations of {*,1,2,8,9,9,9,9}. <br>
For this I need 34x220+1=7481 nodes in the TSP. I do the standard 2x encoding for the ATSP to TSP conversion, and the 17 comes from 17=1+7+1+7+1: each 1 represents a level of normal wildcard-free permutations multiplied into 3 levels (before the first wildcard, in between, or after both), and sandwiched between those 3 levels are levels of 7 permutations containing wildcards (I'm allowing even distance 1 between wildcards, with the necessary level jump, and of course no double-wildcard permutation is counted by the model).<br>
Of course, while the 220 length translates to a 7481-node TSP, the full 5280 single cycle translates to a 179521-node TSP.</p>
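<p>A small sketch of this class reduction, mapping a permutation over 1..7 to its reduced class by keeping only '1', '2' and, optionally, one distinguished symbol (illustrative code matching the examples above):</p>
<pre><code>def reduce_class(perm, distinguished=None):
    # keep '1' and '2'; map the distinguished symbol (if any) to '8',
    # and collapse everything else to '8' (42-class model) or '9' (210-class model)
    filler = "9" if distinguished else "8"
    out = []
    for ch in perm:
        if ch in "12":
            out.append(ch)
        elif ch == distinguished:
            out.append("8")
        else:
            out.append(filler)
    return "".join(out)

print(reduce_class("1234567"))        # 1288888  (42-class model)
print(reduce_class("1234567", "3"))   # 1289999  (210-class model)
</code></pre>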
<p>I have a 2426 lower bound (2425.21), which is improving every hour in Concorde for unrestricted wildcards (it runs on 26 cores for ~8 hours, totaling ~800k seconds),<br>
in a symmetric TSP model with ~8k nodes.<br>
Yesterday I made the ~2k-node (simpler) symmetric TSP model, for which the best solution was gaining "only 12" for a pair of unrestricted wildcards; but because this model was unable to preserve the 2440 non-wildcard solution it can't be taken into account, since the "only 12" saving is on top of a more densely packed (more constrained) solution shorter than 2440.<br>
It looks like the ~8k model has a chance to prove a 2428 lower bound, because it's already at 2425.21, so the 2440 wildcard-free solution does not collapse in this case and any wildcard placement will be optimal.<br>
In case it gets a solution lower than 2428, it might not translate into a lower score on the late submission leaderboard, but it will be very interesting.<br>
Should it fail to reach the 2428 bound, the next model has ~30k nodes, but this will take 3-4 days on Concorde just to start the master node (I don't have a CPLEX academic licence; I am using QSopt).<br>
Concorde is not optimized for this type of TSP problem with symmetries; I think it would be useful to add a subroutine which runs nauty / bliss or Traces to find the automorphism group AutG of the initial weighted graph and then, when adding a cut, also add the cut's orbit over AutG.<br>
For the 2428 solution I had at some point used the many subgroups of the symmetric group S5 <br>
over road segments of length 61, 122, 244, 305 …, and the remarkable fact is that the "projection triplets of sets of permutations" which built a whole solution starting from a small road were not always unions of cosets of the (154 nontrivial) subgroups of S5. (I used a strong vertex cover program to find the necessary independent sets.) This suggests that there is a rich theory in the action of a group on the power set: there are interesting subsets that behave like cosets but are not part of factorisations of the whole group. I could not find much in the literature except papers mentioning the action on the power set, but this is of huge size (2^120 for S5) so they are weak results.</p>
<p>Update: I have reached a lower bound of around 2425.44, but there is some bad news: Concorde fails branching for a task on a slave node (the rest of the computation continues) with the message:<br>
FAILURE: ILLlib_strongbranch<br>
in function ILLlib_strongbranch in file lib.c line 643. <br>
FAILURE: QSopt_strongbranch <br>
in function QSopt_strongbranch in file qsopt.c line 399. <br>
QSopt_strongbranch failed<br>
CClp_strongbranch failed<br>
find_strongbranch_edges failed<br>
CCtsp_find_branch failed<br>
do_task failed<br>
CCtsp_grunt failed<br>
From what I see in the logs, the LP has 8683 rows, 24997 columns and 630929 nonzeros.<br>
This is reproducible on a different machine; I restarted a run with 19 slaves (grunts).<br>
So in order to also branch on this node I suspect I might need to find a way to run with CPLEX or an "exact" version of QSopt. Or maybe I will just modify the Concorde code to hardcode an edge to branch on for this task number.</p>
Santa 2021 - The Merry Movie Montage | [21st place] Reflections on Santa 2021 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2021 - The Merry Movie Montage
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>The inverted commas there are a stylistic device indicating my imagined verbatim quotation of something that might hypothetically have been said by a person self-deprecatingly deflecting credit to his or her teammates. No other interpretation was intended. </p>
<p>Since you ask directly, please don't mistake giving due credit to one's teammates for saying that "I was a complete passenger." Everything up to the end of "Level 3" on that write-up was before the team-up, so I was on 2439 solo about a month before the end of the competition. The first team-up was me (with a 2441 length non-w/c submission), combining with Daniel, who didn't have so short a non-w/c answer but had coded up the wildcard optimisation process. Between us that resulted in a score of 2432. When Rob joined, he had a 2440 non-w/c submission that reduced to 2430 under Daniel's wildcard scheme. It was Xuxu who worked out how to get down to 2428 (which was a significant achievement back before the "Complete How To Guide" was posted in the discussion).</p>
<p>This was a really interesting problem which was helpfully amenable to solution either through mathematical insights into its symmetry, or by the brute force of computer power, or even better by a combination of the two.</p>
<p>Like many Kaggle competitions, time spent reading the forum, working through the notebooks, and experimenting with one’s own adaptations of the code, was rewarded with nuggets of knowledge and understanding that helped unlock the higher level of this game. </p>
<p><strong>Level 1:</strong> 2507-2486</p>
<p><a href="https://www.kaggle.com/cdeotte/santa-2021-tsp-baseline-2500" target="_blank">The Santa 2021 TSP Baseline - [2500]</a> by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a></p>
<p>This was a really helpful starter to get into the problem, but, beneficially for the competition, its approach had a ceiling somewhere in the 2480s. It is based on defining the three substrings using the order in which the permutations occur in the 5906 superpermutation, and then using the LKH TSP solver to minimise each substring separately. With this approach we have very limited flexibility to move permutations between substrings.</p>
<p><strong>Level 2:</strong> 2484-2481</p>
<p><a href="https://www.kaggle.com/kostyaatarik/minmax-ctsp" target="_blank">MinMax CTSP</a> by <a href="https://www.kaggle.com/kostyaatarik" target="_blank">@kostyaatarik</a> </p>
<p>This notebook helped me make a key step forward, as it represented the problem as a Coloured TSP in LKH and added some code to change the objective function so that it now minimised the maximum length of any substring. This was evidently the correct minimisation to be performing, but the coding required to patch the objective would have been beyond my capabilities.</p>
<p>However, at this point I was still splitting into three sets of permutations based on the ordering of the 5906 superpermutation. Both my own scores and a look at the division between sub-2440 and 2480+ scores on the LB at that point suggested that there was some ‘magic’ required to get from 2480 to 2440.</p>
<p>This seemed likely to involve the way in which permutations were assigned to substrings, and this somewhat cryptic <a href="https://www.kaggle.com/c/santa-2021/discussion/292998" target="_blank">thread</a> in the Discussion further suggested that this might be related to the 5913 recursively generated superpermutation, probably a better candidate for symmetry-related properties than the shorter 5906, 5907 or 5908 ones. However, my own efforts had not managed to split up any of these superpermutations in the magic way.</p>
<p><strong>Level 3:</strong> 2457-2439</p>
<p>The answer to this conundrum appeared in the notebooks</p>
<p><a href="https://www.kaggle.com/w3579628328/mathematic-laws-about-super-permutation" target="_blank">Mathematic laws about super permutation</a> by <a href="https://www.kaggle.com/3579628328" target="_blank">@3579628328</a></p>
<p>which suggested how to split the 5913 in a 3-fold symmetric way and</p>
<p><a href="https://www.kaggle.com/ks2019/santa-baseline-2481" target="_blank">Santa Baseline - 2481</a> by <a href="https://www.kaggle.com/ks2019" target="_blank">@ks2019</a></p>
<p>which did this in practice, giving substrings of length 2483 without wildcards (2481 with) that unlocked the potential of the MinMax CTSP. My very first minimisation starting there gave a then-surprisingly good (to me) score of 2457, and this method continued with roughly 7 nine-hour minimisations twice a day for about a week before getting stuck at non-wildcard lengths of 2441_2440_2440. Two subsequent repeats with slightly different minimisation protocols later stuck at 2442_2442_2441 and then 2441_2440_2440. It would be interesting to know if anyone managed to reach 2440_2440_2440 with the MinMax CTSP in LKH.</p>
<p>Up to this point, I was using</p>
<p><a href="https://www.kaggle.com/yosshi999/wildcard-postprocessing-using-dynamic-programming" target="_blank">Wildcard Postprocessing Using Dynamic Programming</a> by <a href="https://www.kaggle.com/yosshi999" target="_blank">@yosshi999</a> </p>
<p>to add wildcards. This got the score down by 2, but left me stuck on 2439. It was time to team up.</p>
<p><strong>Level 4:</strong> 2432-2428</p>
<p>The team-up with Rob <a href="https://www.kaggle.com/robikscube" target="_blank">@robikscube</a>, Daniel <a href="https://www.kaggle.com/danieldias" target="_blank">@danieldias</a> and Xuxu <a href="https://www.kaggle.com/xuxu1234" target="_blank">@xuxu1234</a> worked really well, in an extremely distributed manner. I will give only the briefest overview of my colleagues’ work.</p>
<p>Rob generated a 2440_2440_2440 non-wildcard solution. Daniel coded up the wildcard problem as an asymmetric TSP on Gurobi, using one substring at a time. Xuxu moved permutations around the substrings to optimise the benefits of wildcards, and placed our wildcards in the perfect places to reach 2428.</p>
<p>Congratulations to Daniel on becoming a Competitions Expert and to Xuxu on gaining the Competitions Master title.</p>
<p>Thanks to everyone in the community who analysed the problem in the discussion forum and posted helpful notebooks. Many of us benefitted from helpful hints and insights. For me, to finish in 21st place was a personal best in a medal-bearing competition, and I hope you all enjoyed the competition as much as I did.</p>
<p>That's smart, I never thought of trying it.</p> |
Santa 2021 - The Merry Movie Montage | [50th Place - Silver] Summary of our journey | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2021 - The Merry Movie Montage
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Post authored by: Guillermo García Cobo ( <a href="https://www.kaggle.com/atmguille" target="_blank">@atmguille</a> ) & Miguel González González</p>
<p>First of all, we would like to congratulate the competition winners and thank the Kaggle staff and community for this thrilling competition. We would also like to thank our other teammate <a href="https://www.kaggle.com/alejandrobravoserna" target="_blank">@alejandrobravoserna</a> for his hard work. Seeing so many inspiring posts from other teams narrating their experiences with the competition, we decided to summarize ours, even if we didn't end up reaching 2428, in case it becomes of interest to anyone.</p>
<p>We reached the non-wildcard 2440 only with CTSP solved by LKH3. Then we post-processed the strings with wildcards for 2430. </p>
<h2>Getting CTSP to work</h2>
<p>In order to solve CTSP, we wanted to use the solver LKH3 by Keld Helsgaun ( <a href="https://www.kaggle.com/keldhelsgaun" target="_blank">@keldhelsgaun</a> ). At first we thought solving the full CTSP would be too much (there are around 5.2k nodes which need to be duplicated internally for reasons we will shortly explain). Our first attempts used MTSP with the non-mandatory permutations to distribute them and then solved the final problem with individual TSPs, but this didn't get too far, as there are too many constraints to this approach.</p>
<p>Getting CTSP to work was challenging because of two aspects:</p>
<ol>
<li>LKH3 doesn't support asymmetric distance matrices for CTSP.</li>
<li>LKH3 doesn't support MINMAX objective for CTSP.</li>
</ol>
<p>The first problem could easily be solved by duplicating each node and setting specific distances with these new duplicated nodes, as explained in the paper <a href="http://home.eng.iastate.edu/~rkumar/PUBS/atsp.pdf" target="_blank">http://home.eng.iastate.edu/~rkumar/PUBS/atsp.pdf</a> . This had the disadvantage of setting the node count to around 10.5k, but since each node and its duplicate need to be visited consecutively, we could force the solver to fix those edges and save time. This is what (or something similar, I believe) the solver does internally for, say, MTSP, but for some reason CTSP is not supported.</p>
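<p>For reference, a common variant of that node-duplication transformation looks like the sketch below: each node i gets a ghost node i', the pair is tied together by a very cheap internal edge, and the asymmetric costs are carried by ghost-to-original edges (illustrative code, not the team's implementation):</p>
<pre><code>import numpy as np

def atsp_to_tsp(C, big=10**6, internal=-10**6):
    # Turn an n x n asymmetric cost matrix C into a 2n x 2n symmetric one.
    # Node i + n is the ghost of node i; the cheap internal edge forces the
    # tour to visit i and its ghost consecutively.
    n = len(C)
    S = np.full((2 * n, 2 * n), big)
    for i in range(n):
        S[i, i + n] = S[i + n, i] = internal          # i -- ghost(i) pairing
        for j in range(n):
            if i != j:
                S[i + n, j] = S[j, i + n] = C[i][j]   # ghost(i) -- j carries cost c(i, j)
    return S

C = [[0, 2, 9], [1, 0, 6], [15, 7, 0]]
print(atsp_to_tsp(C))
</code></pre>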
<p>For the second problem, we spent several days studying the C source code of LKH3 and then edited some functions to support MINMAX in CTSP. This was also published to kaggle by someone else several days after we had done it, with a very similar approach.</p>
<p>With CTSP working, we could start getting decent solutions. According to Kaggle user CPMP ( <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> ) the optimum without wildcards should be 2440. We started getting solutions below 2450 with CTSP, but got stuck rather quickly at 2443.</p>
<p>In order to overcome 2443 and finally reach 2440, we needed to use "<strong>tour augmentation</strong>", a technique I explained in <a href="https://www.kaggle.com/c/santa-2021/discussion/300618" target="_blank">this post</a>. </p>
<p>In parallel with our CTSP work, we started to get strong wildcard postprocessing that got us rather good scores. For example, one 2446 went down to 2432 with the postprocessing. This postprocessing is explained later in this post.</p>
<h2>Integer Progamming attempt</h2>
<p>Since some teams were getting good results with MILP, we also developed a Gurobi MILP model for CTSP following this paper: <a href="https://www.semanticscholar.org/paper/Colored-Traveling-Salesman-Problem-and-Solution-Li-Sun/a9fc6698b6be645315bbc831c6da1864f73630bd" target="_blank">https://www.semanticscholar.org/paper/Colored-Traveling-Salesman-Problem-and-Solution-Li-Sun/a9fc6698b6be645315bbc831c6da1864f73630bd</a> . Despite never having worked with Gurobi before, we managed to get a reasonable model with several optimizations that solved smaller cases such as n=5. However, we ran out of RAM for n=7 (we just had access to our personal computers and small azure machines from student accounts) and didn't manage to get any good solution from MILP.</p>
<h2>Wildcard postprocessing</h2>
<p>As the Problem description states, we are given the option to use a special character that works as a wildcard to reduce our score. In this section, we will briefly describe the steps we followed to achieve 2440->2430, which also managed to save up to 14 in the score (2446->2432) along the way. A more detailed description can be found in this <a href="https://www.kaggle.com/c/santa-2021/discussion/300506" target="_blank">Kaggle post</a>.</p>
<h3>Best type of permutation to take the wildcards: mandatory ones</h3>
<p>After inspecting the distance between permutations in non-wildcard strings, one can quickly realize that mandatory permutations were the ones adding the most distance. This is reasonable, based on the fact that mandatory permutations are repeated three times, so you quickly run out of permutations with the substring "12" that can overlap with them.</p>
<h3>Best symbol to take the wildcard: 1</h3>
<p>Following the argument of the last paragraph, we would be very interested in having more flexibility with the "12" substring. This means that the wildcard should be used to replace the "1" or the "2". After running some experiments, using the wildcard to replace the "1" definitely gave the best results.</p>
<h3>Best mandatory permutations to take the wildcard: heuristic</h3>
<p>Having clear that mandatory permutations should take the wildcard in the "1" symbol, we have to choose the best candidates among this subset. To address this challenge, we designed a heuristic that ordered candidates. Further details on the heuristic and its implementation can be found in <a href="https://www.kaggle.com/atmguille/santa-2021-best-place-for-wildcards" target="_blank">this notebook</a>.</p>
<h2>Other things we tried and did not work</h2>
<ul>
<li>Trying to insert wildcards with TSP in larger patterns (as shown <a href="https://www.kaggle.com/miguelgonzalez2/santa-2021-wildcard-longer-patterns-groups" target="_blank">here</a>)</li>
<li>Trying to use our methods (CTSP, postprocessing…) using the 2-cycle decomposition of the problem. We reached an almost optimal non-wildcard solution with this but ran out of time.</li>
<li>CTSP with wildcards to balance permutations at the same time that wildcards are inserted.</li>
<li>Wildcard positions other than the first symbol of a mandatory permutation.</li>
</ul> |
Santa 2021 - The Merry Movie Montage | 🎅 Generating a 2428 solution in 100 lines of code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2021 - The Merry Movie Montage
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Many thanks to the Kaggle team and community for this very interesting optimization competition!<br>
And congratulations to everyone, we hope that you enjoyed this puzzle as much as we do 🙂</p>
<p>We described our journey to a 2428 solution in this notebook: <a href="https://www.kaggle.com/skril31/generating-a-2428-solution-in-100-lines-of-code" target="_blank">https://www.kaggle.com/skril31/generating-a-2428-solution-in-100-lines-of-code</a>. </p>
<p>Hoping that you'll find it interesting 🙂</p>
<p>Happy new year to all!</p> |
Santa 2022 - The Christmas Card Conundrum | 1st place solution with visualized route | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2022 - The Christmas Card Conundrum
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We defined our cost as the sum of the optimal configuration cost and the color cost, defined as the get_cost function in the notebook below.</p>
<p><a href="https://www.kaggle.com/code/cnumber/lower-bound-using-minimum-spanning-tree" target="_blank">https://www.kaggle.com/code/cnumber/lower-bound-using-minimum-spanning-tree</a></p>
<p>2023/01/19 added a visualization of the optimal(?) route<br>
2023/01/23 made our <a href="https://github.com/chettub/santa2022" target="_blank">github repository</a> public</p>
<p>We ( <a href="https://www.kaggle.com/kibuna" target="_blank">@kibuna</a> and <a href="https://www.kaggle.com/cnumber" target="_blank">@cnumber</a>) would like to thank the organizers for this annual optimization challenge and very interesting problem.</p>
<h1>Overview</h1>
<p>This problem can be divided mainly into two steps. The first is to obtain the minimum cost path on the image with color cost and minimum reconfiguration cost as defined <a href="https://www.kaggle.com/competitions/santa-2022/discussion/370129" target="_blank">here</a>, and the second is to construct a valid arm configuration with minimal overhead costs. We worked on the former part by solving TSP with additional constraints using “genetic algorithm using edge assembly crossover (GA-EAX)”, and for the latter, beam search was used.<br>
The two important points are</p>
<ol>
<li>There exist impossible (or quite costly) routes due to the arm configuration, and these need to be removed when solving the TSP.</li>
<li>Once a reasonable path is obtained, it is relatively easy to obtain the arm configuration even if there are complicated moves, e.g. continuous long steps, because of the huge number of arm combinations which recover the same position.</li>
</ol>
<h1>TSP with path-dependent constraints:</h1>
<p>The basic approach to this problem is to search the minimum-cost path which starts from point (0, 0), goes around all pixels of the image, and returns to point (0, 0). This sounds like a standard TSP, but the arm constraint limits the possible routes. We incorporated this constraint with the normal TSP by adding a path-dependent cost; adding an additional cost when each of the constraints below are violated.</p>
<ol>
<li>Do not enter the \(x < 0\) region until moving more than or equal to 64 steps in the y-axis direction.<ul>
<li>from the need to rotate the 64-arm at least 64 times to go into the \(x < 0\) area</li></ul></li>
<li>\( 2 \times x < \) total y-axis direction moves<ul>
<li>from the need to rotate the 32, 16, …, 1-arms</li></ul></li>
<li>Do not connect any of the two of (-127, -127), (-127, 127), (127, -127), (127, 127) with a straight line<ul>
<li>It can be confirmed that there is no valid arm configuration, unless adding additional reconfiguration costs. </li></ul></li>
<li>When the arm moves from (1, -4) to (0, -4) or from (1, -5) to (0, -5), and then through the 4th-, 3rd-, 2nd-, and 1st-quadrant of the image, the arm must move more than 128 times in the y-plus direction before going into the 1st-quadrant.<ul>
<li>What we really wanted to do was to check if there were enough moves to rotate the 64-arm when the arm moves across the quadrants. However, this heavily increases the computational cost. We picked up the condition above after looking into typical paths generated with conditions 1, 2 and 3. In the end, these 4 conditions were enough to exclude undesired paths.</li></ul></li>
</ol>
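<p>As an illustration, here is a minimal Python sketch (illustrative names, not the team's C++ implementation inside GA-EAX) of how constraint 1 can be checked as a path-dependent penalty on a tour given as a list of (x, y) pixels starting at (0, 0):</p>
<pre><code>def constraint1_penalty(tour, penalty=1.0):
    """Penalize entering the x < 0 half-plane before at least 64 unit moves
    have been made along the y axis (needed to rotate the 64-link)."""
    y_moves = 0
    for (x0, y0), (x1, y1) in zip(tour, tour[1:]):
        y_moves += abs(y1 - y0)
        if x1 < 0 and y_moves < 64:
            return penalty       # violated: the tour is kept but pays a penalty
    return 0.0
</code></pre>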
<p>All of the constraints above are path dependent, so incorporating them into a cost function is time consuming for local search algorithms, because the constraints need to be checked every time a new candidate path is generated; this is one of the reasons why we chose a genetic algorithm (GA). In terms of implementation, we didn’t exclude all the paths that violate the constraints, but instead imposed a penalty cost when the conditions are violated, so that the number of violating paths in the population gradually decreases as the GA progresses.<br>
We have also utilized a visualizer (see Apendix 1) to consider the restrictions.</p>
<h2>GA-EAX-restart</h2>
<p><a href="https://pubsonline.informs.org/doi/10.1287/ijoc.1120.0506" target="_blank">GA-EAX</a> was used for searching the minimum-cost path. The implementation is based on <a href="https://github.com/senshineL/GA-EAX-restart" target="_blank">GA-EAX-restart</a> with some modifications. The codes are available <a href="https://github.com/chettub/santa2022" target="_blank">here</a> [chettub/santa2022].<br>
GA-EAX is a GA-based TSP approximation algorithm using a fast and efficient crossover operator called edge assembly crossover (EAX), which is known to have updated the best known solutions for numerous TSP instances, especially for large <a href="https://www.math.uwaterloo.ca/tsp/data/ml/monalisa.html" target="_blank">TSP instances of ~100,000 cities</a>. We adopted this algorithm believing it had advantages in this competition due to the problem’s large size (66049 pixels), simplicity of implementing path-dependent restrictions, and the fact that the cost function is computed a relatively small number of times compared to local search algorithms.<br>
We made several modifications to the original code. First, we added the constraints mentioned above. Second, we parallelized the algorithm to speed up the computation. Third, we made it possible to specify the initial population and also to autosave the population during the calculation. This allowed us to suspend and resume calculations, merge any populations we chose, calculate only the final stage of the optimization algorithm to improve solutions further, and utilize “restart” methods described in the <a href="https://pubsonline.informs.org/doi/10.1287/ijoc.1120.0506" target="_blank">paper</a>.<br>
For higher accuracy cost calculation, the internal data type of the cost was modified to 64 bit integer (and to 128 bit integer later on). We have also modified the implementation of the distance function to retain only the necessary elements of the distance matrix for reducing memory usage, since the full-matrix distance was memory consuming.</p>
<h1>Arm Configuration</h1>
<p>Once the minimum-cost tour was found, the corresponding arm configurations needed to be found. Here, we searched for the optimal arm movement without additional reconfiguration cost by using dynamic programming (DP) and beam search with pruning.</p>
<h2>DP</h2>
<p>The configuration was most restricted by the 64-arm, so the first step was to determine whether the 64 arm could move along the given path. The 64-arm has 64*8 possible states. By considering the positions of the other arms, the following condition is required at each step.</p>
<p>$$ L^{\infty}(Position - 64ArmPosition) \leq 64 $$<br>
where<br>
$$ L^{\infty} = \max(|x|,|y|) $$</p>
<p>The above condition must be satisfied at every step. Also when considering only reconfiguration-cost optimal paths, the 64-arm can move only in the same direction as the path.</p>
<p>In the actual algorithm, we considered not only the 64-arm but also the 32-arm. Taking the condition above into account, we calculated a 3-dimensional DP table ([step][32-arm configuration][64-arm configuration]) to check what 32-64-arm configuration was possible at each step. If none of the 32-64-arm configs were available at a step, the path was regarded as unrecoverable. The total size of the DP table was \( 64 \times 8 \times 32 \times 8 \times 66049 \sim 9 \times 10^9 \), and the number of state transitions from a 32-64-arm configuration was 9, therefore the entire DP table could be calculated in a few seconds when implemented with C++.</p>
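<p>The feasibility test behind this table can be sketched as follows (simplified, illustrative Python, not the C++ code): a link of length L always sits on the perimeter of its square (max(|x|, |y|) = L), and the links below the 32-link (16+8+4+2+1+1 = 32) can cover any offset with L-infinity norm at most 32, so a (64-link, 32-link) placement is compatible with a target pixel exactly when the leftover offsets fit.</p>
<pre><code>def perimeter(L):
    """All joint positions of a link of length L: max(|x|, |y|) == L."""
    return [(x, y) for x in range(-L, L + 1) for y in range(-L, L + 1)
            if max(abs(x), abs(y)) == L]

P64, P32 = perimeter(64), perimeter(32)   # 512 and 256 possible states

def feasible_32_64_states(target):
    """(64-link, 32-link) placements compatible with pointing at `target`."""
    tx, ty = target
    states = []
    for ax, ay in P64:
        if max(abs(tx - ax), abs(ty - ay)) > 64:     # rest of the arm covers L-inf <= 64
            continue
        for bx, by in P32:
            if max(abs(tx - ax - bx), abs(ty - ay - by)) <= 32:
                states.append(((ax, ay), (bx, by)))
    return states

print(len(feasible_32_64_states((0, 0))))      # many states away from the corners
print(len(feasible_32_64_states((128, 128))))  # exactly 1 at a corner
</code></pre>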
<h2>Beam search</h2>
<p>Beam search was used to construct the arm configuration. All possible arm configurations at each step were calculated from the configuration of the step before. The width of the beam search was 500. To speed up the search, we added some prioritization and pruning.</p>
<ul>
<li>Using the DP table above, we excluded configurations with invalid 64 and 32-arms.</li>
<li>Configurations with more arms in the corner were selected with higher priority, as they have a higher degree of freedom.</li>
<li>If no configurations were available, the search was rerun from several hundred steps before with doubled beam width.</li>
</ul>
<p>Implementing the algorithms in C++ and adding various speedups allowed us to find an arm configuration for a given path in a few minutes. Also in the scope of our experiments, arm configurations were found for all paths that satisfied constraint 1~4.</p>
<h1>Experiments</h1>
<p>The LB 1st place solution could sometimes be found in less than a day of computation with a 30 vCPU instance.<br>
We have experimented with many more instances and have not yet found a better solution.</p>
<h1>Thoughts</h1>
<ul>
<li>Perhaps GA is suited for path-dependent constraints because<ul>
<li>GA-EAX-single is not too strong in local search?</li>
<li>of the population's diversity?</li>
<li>the number of cost calculations is relatively small, so it can perform relatively fast even when adding an additional time-consuming cost function.</li>
<li>it is stronger than LKH with large instances or when looking for global improvements?<br>
In this problem, being able to save the population and apply GA again after adding additional constraints helped us very much.</li></ul></li>
</ul>
<h1>Appendix</h1>
<h2>Appendix1. Visualizer</h2>
<p>We used a visualizer to examine paths that the arm movement couldn’t be recovered, and discovered the 4th constraint.</p>
<h2>Appendix2. What did not work</h2>
<h3>Consider non-optimal arm movements.</h3>
<ul>
<li>The color cost improvement was not worth the reconfiguration cost. For example, if we move 3 arms when moving to an adjacent pixel, the reconfiguration cost increases from 1 to sqrt(3) (an increase of 0.7). However, even with a constraint violation penalty of 0.7, we have not found any route that surpasses the current best solution. Also, removing the 3rd constraint decreased the TSP cost by 0.19. This is not worth the additional reconfiguration cost.</li>
</ul>
<h3>LKH</h3>
<ul>
<li>In the beginning, we tried LKH for unconstrained TSP. However, adding constraints was difficult from implementation cost and calculation speed perspectives. We tried to make LKH output a route that satisfied conditions 1, 2, and 4 by determining the route around the origin, but the performance was poor and the score could only be reduced to near 74077. (We were wondering whether the difference between the 74075.* and 74077.* teams was whether or not they manually chose the route around the origin.)</li>
<li>One day of LKH calculation starting from the best GA-EAX solution found didn’t show any improvement.</li>
<li>We considered redefining the problem as “Traveling salesman problem with time windows (TSPTW)”, but gave up due to the small city number and long run time of LKH's TSPTW benchmark.</li>
</ul>
<h3>Concorde</h3>
<ul>
<li>It was impractical to complete the calculation as it takes several hours to several days for a subgraph with about 5000 vertices. On the other hand, parallelized GA-EAX solved the same 5000-vertex TSP within a few minutes.</li>
</ul>
<h3>GPX2</h3>
<ul>
<li>We tried applying GPX2 to the set of discovered solutions for further improvement, only to find that most of the solutions found using GPX2 were violating the constraints. It might be because while GPX2 can efficiently find a local minima, it cannot take into account route-dependent constraints (at least with our implementation).</li>
</ul>
<h2>Appendix3. What we didn’t have time to work on</h2>
<ul>
<li>Allow routes that pass a pixel more than twice<ul>
<li>Due to the convexity of the cost function, it is usually better to jump over a pixel than to go through it twice. Only places with long jumps where very large color costs are involved should be considered.</li></ul></li>
<li>Visualizing the difference between two paths and highlighting “circuits” that could be improved, like the idea used in GA-EAX.</li>
<li>Merging initial population with free energy minimization (ref. regularization of TDGA)</li>
<li>LKH is reported to be stronger than GA-EAX for lattice-like TSP. In the later stages of the optimization, there were multiple paths with approximately the same cost, as in the case of lattice-like TSP. We wondered if the path could be further improved using LKH.</li>
</ul>
<h1>One of our 74075.70654 Solutions</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6624777%2F27ad5fc587f5c5892400058d82a5b36b%2FScreenshot%20from%202023-01-19%2009-58-28.png?generation=1674089926226086&alt=media" alt=""><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6624777%2Fd7c7f944b0cf314d03a060b00750dc40%2FScreenshot%20from%202023-01-19%2010-00-19.png?generation=1674090036921043&alt=media" alt=""></p>
<p>Thank you very much!<br>
I really would like to thank my teammate <a href="https://www.kaggle.com/kibuna" target="_blank">@kibuna</a> for the visualizer.</p>
<p>I added an image of our route. You can see that the route near the origin is quite complicated.</p> |
Santa 2022 - The Christmas Card Conundrum | 2nd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2022 - The Christmas Card Conundrum
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi everyone !</p>
<p>First I would first like to thank the Kaggle team for this great challenge! I think it was very well calibrated and lots of fun to work on. I really enjoyed the arm twist which added a nice layer of complexity to the problem. Kudos to the creator(s)! </p>
<p>Congratulation to C-number and Kibuna for getting to the local(?) optimum first! I got there 48 hours late. I may possibly have found it earlier if I had started merging my solutions sooner but I did not expect at the time that anyone would hit an optimum so quickly. Well done Newtonians! Congratulation also to Rafbill for his impressive head-start and to all the other kagglers who made this competition fun and exciting! 😊</p>
<h3>Main steps</h3>
<p><strong>1.</strong> I first tried to acquire an intuitive grasp of the arm movements. It looks similar to the usual \( L^2 \) problem at first sight but it turns out to be much trickier to perform inverse kinematics because of the lack of rotational symmetry (here we only get the symmetry of the \( \mathbb{Z}^2 \) lattice). I found out that I like to think of the arms as some sort of nested <a href="https://en.wikipedia.org/wiki/Sliding_puzzle" target="_blank">sliding puzzles</a>. This way of looking at the arm movements really helped me figure out conditions required to make a tour "liftable".</p>
<p><strong>2.</strong> It was clear from the start that the configuration space is too big to work on directly. So, as I guess most competitors did, I chose to factorize the problem by<br>
<strong>(i)</strong> Finding a low cost tour on the image space (for the \( \sqrt{L^1} + \hbox{color} \) distance).<br>
<strong>(ii)</strong> Lifting this tour from the image space onto the configuration space with minimal additional cost. </p>
<p>At first, I thought that going from (i) to (ii) would incur some cost due to reconfiguration of the arms needed to get a valid solution but, ultimately, it turned out that it is more effective to directly add constraints on the image tour so that it can be lifted at no additional cost. </p>
<p><strong>3.</strong> By looking at the LB, I realized early on that the cost difference between a tour on the image space and configuration space should be very small. Indeed, just a few days after the competition started, there were scores on the LB which were only a few tens of points above the best TSP tours I could generate. So I began generating as many TSP tours on the image space as possible. I figured they might be useful later… During that time, I looked for a way to lift a tour from the image space to the configuration space.</p>
<p><strong>4.</strong> It took me some time to find an effective method to convert a tour from the image space to the configuration space. I thought about using a MIP solver for that task but I am not familiar with these tools and I prefer to code. Instead, I wrote a custom program in C++. On my 4-year-old Ryzen 7 desktop computer, it usually takes under 15 minutes to convert a tour on the image space into a valid solution (provided that the tour is really liftable of course). This could certainly be improved but it worked well enough for my needs. More details about the program/algorithm in paragraph <strong>A</strong> below. </p>
<p><strong>5.</strong> Once I had a reliable way to convert an image tour to a solution, I started to look at how to minimize the reconfiguration cost. Ultimately, I discovered that being "liftable" is a rather soft requirement. Most hard constraints are located around the origin i.e. at the beginning and end of the path because of the specified arm configuration at \( (0,0) \). In particular, the most important constraint concerns the duration of the first/last excursion of the path in the half space \( x\geq 0 \). By trial and error, I came up with a set of necessary (and mostly sufficient) conditions for a path to be liftable. More details about that in paragraph <strong>B</strong>. </p>
<p><strong>6.</strong> I incorporated these conditions as a "penalty" in a custom TSP solver which is a crossover between LKH3 and a pure C++ TSP solver that I wrote from scratch for the "Santa Prime Maths 2018 challenge". Running this TSP solver while slowly increasing the penalty, I obtained "liftable" tours with scores around 74076.</p>
<p><strong>7.</strong> The final push was obtained by using all the (mostly unconstrained) tours I had gathered in the first weeks of the challenge. I used merging with IPT and some limited genetic algorithm to reach the final score of 74075.706541… In fact once I started merging tours, the solution popped up very quickly: I think it took less than 30 minutes to go from 76 to 75.706541…</p>
<h3>A. Lifting of a tour from the image space to the configuration space.</h3>
<p>I will make the code of the "lifting" program available on <a href="https://github.com/vindar/" target="_blank">my github page</a> soon for anyone interested (but beware that the code is very dirty, sorry…).</p>
<p>Given a tour on the image space, we can decompose the path in 5 parts \( A,B,C,D,E \) such that:</p>
<p>$$ (0,0) \overset{A}{\longrightarrow} \hbox{corner 1} \overset{B}{\longrightarrow} \hbox{corner 2} \overset{C}{\longrightarrow} \hbox{corner 3} \overset{D}{\longrightarrow} \hbox{corner 4} \overset{E}{\longrightarrow} (0,0) $$</p>
<p>Noticing that the corner pixels have a unique arm configuration associated with them, it follows that the lifting of the path may be performed independently on each sub-path \( A,B,C,D,E \) and there will be no problem in joining them together (i.e. no reconfiguration needed). Also, it turns out that paths \( B,C,D \) between corners are always straight lines so lifting them is trivial. Therefore, it only remains to lift path \( A \) onto the configuration space (and the same procedure can be used to lift \( E \) by simply considering the reverse path \( \tilde{E} \) from \( (0,0) \) to \( \hbox{corner 4} \) ).</p>
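<p>A toy check of the "unique configuration at a corner" remark, using a scaled-down arm with links (4, 2, 1, 1) instead of the real eight links so that brute force is feasible (helper names are illustrative):</p>
<pre><code>from itertools import product

def perimeter(L):
    return [(x, y) for x in range(-L, L + 1) for y in range(-L, L + 1)
            if max(abs(x), abs(y)) == L]

links = [4, 2, 1, 1]
corner = (sum(links), sum(links))                 # analogue of (128, 128)

configs = [c for c in product(*(perimeter(L) for L in links))
           if (sum(x for x, _ in c), sum(y for _, y in c)) == corner]
print(configs)   # [((4, 4), (2, 2), (1, 1), (1, 1))] -- a single configuration
</code></pre>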
<p>The algorithm used to lift path \( A : (0,0) \to \hbox{corner 1} \) is a classical random search with a few additional tricks. Basically, having lifted the path onto the configuration space, starting from \( (0,0) \) and up to some given pixel, the algorithm tries to find an adjacent configuration located at the next pixel on the path. If there are several possibilities, it chooses one of them at random. Otherwise, the algorithm backtracks a random number of steps with geometric distribution with mean given by a temperature parameter T (just like in simulated annealing). However, this simple algorithm does not perform very well because the search space is too big and the lifted path must sometimes perform very unlikely moves… I added several improvements to the algorithm above but the three tricks which really made a difference are:</p>
<p><strong>1. Making the temperature fluctuate</strong>. When T is large the algorithm backtracks deeper so it helps to prevent getting stuck in narrow passages whereas for T small it explores the local neighborhood more thoroughly. </p>
<p><strong>2. Adding a fluctuating random drift for each arm in a given direction</strong>. This drift is used to bias the choice of the arm to move when more than one arm is admissible. Doing this biasing is especially important for the largest links (of sizes 64 and 32). Indeed, the largest link sometimes needs to move a lot during a short period. However, if we choose the moving arm uniformly at random, then the probability to select the largest link often enough becomes very small (of order \( e^{-c n} \) where \( n \) is the number of moves). On the other hand, whenever the drift aligns with the number of moves required, then this same event becomes a typical Gaussian fluctuation (with probability of order \( 1/\sqrt{n} \) which is much higher). This strategy is, in a way, similar to <a href="https://djalil.chafai.net/blog/2018/03/09/tutorial-on-large-deviation-principles/" target="_blank">the proof of Cramer's large deviation theorem</a> where exponentially small events are made "typical" by tilting the underlying probability distribution.</p>
<p><strong>3. "Un-knitting" the path</strong>. When we encounter a dead-end, we can compute the distance (i.e. reconfiguration cost) needed to go to the next pixel. Then, going backward, we can try to recursively perform local change to this configuration path in ways that decrease the reconfiguration cost at the end of the path (for example, it is sometimes possible to swap the arm that moves at a given step without affecting the rest of the configuration path). This "un-knitting" procedure is, I believe, closer to how an MIP solver might proceed to lift a path…</p>
<h3>B. Conditions for a tour to be "liftable": the penalty function.</h3>
<p>As I mentioned above, I like to think of the arm movement as sliding puzzles. Since a picture is better than a thousand words, in the case of a \(17 \times 17 \) picture with 4 links \( (1,1,2,4) \): </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2513098%2F2b3ff0c618e588277c002be23ca972c0%2Fim-arm-bis.svg?generation=1674004994270266&alt=media" alt=""></p>
<p>Here, there are 4 nested blue squares of decreasing side lengths \( 9, 5, 3, 1 \), each one centered at the position of partial arm sums and the smallest unit blue square pinpoints the final arm position. With this representation, the arm movement rule translates to: <em>"each blue square can slide inside its larger blue square but it must always stay in contact with its boundary (and the largest blue square must stay in contact with the image boundary)"</em>. </p>
<p>With this representation, the starting configuration at \((0,0) \) looks like: </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2513098%2F4678ed1a4c285ef0ca364fa959881600%2Fstartconf.svg?generation=1674005168899206&alt=media" alt=""></p>
<p>Looking at this picture, it is clear that a valid configuration path can only enter the half-space \(x < 0\) after moving the largest blue square at least 64 times vertically (in the up direction if the path enters \(x < 0\) at some position \( (x_0, y_0) \) with \( y_0 > 0 \) and in the down direction otherwise). By reversal, a similar condition is also required at the end of the path. I found that these conditions for the first and last excursions are the most important ones in order to get a "liftable" tour but they are not sufficient by themselves. I considered three more conditions:</p>
<p><strong>1. Subsequent excursions</strong>. After the end of the first excursion i.e. when the path enters \( x < 0 \) at some point \( (x_0,y_0) \), we can again split the space, but this time horizontally, and look at the first (later) time when the path exits \( y < 0 \) (if \( y_0 >0 \) ) or \( y > 0 \) (if \( y_0 < 0\) ). With the same reasoning as before, we can deduce a lower bound on the number of horizontal moves that the largest link must perform during this excursion. By induction, we can repeat the argument for all the subsequent excursions of the path (alternatively splitting the space vertically and horizontally)…</p>
<p><strong>2. Do not move to early to the right</strong>. The starting position at \( (0,0) \) depicted above also clearly shows that the path cannot go "too fast" to the right at the very beginning without first moving a little bit up and down in order to "unlock some of the blue squares". In particular, the first move must be vertical. This condition is only restrictive for the very first steps (say the 15 first steps). Of course, the same condition also holds at the end of the path by symmetry.</p>
<p><strong>3. Breaking the lines at distance 1 from the image border</strong>. This last condition is a bit surprising: a liftable path cannot travel all the way along both a vertical and a horizontal line which are exactly 1 pixel away from the image boundary (i.e. the lines with equation \( x = \pm 127 \) and \( y = \pm 127\) ) otherwise a reconfiguration with cost \( \sqrt{2} - 1 \) would be required. </p>
<p>I do not think that the set of conditions above is really sufficient to insure that a tour is liftable but, in my tests, most tours fulfilling them were indeed liftable… However, condition <strong>1.</strong> concerning the "subsequent excursions" of the path is very costly to compute (because we must inspect the whole path) so I only enforced it at the end of a search but not during local search for k-opt moves… </p>
<h2>Final remarks: computing TSP score with high precision</h2>
<p>I suspect other competitors encountered the same problem: in order to use LKH (or another TSP solver) one must convert the floating-point weights to integer values, for example by multiplying each value by 10000 and then rounding to the closest integer. However, significant precision is lost in doing so. My first solution to overcome this difficulty was to update my TSP solver to use 64-bit values instead of 32-bit ones so I could scale the values by \( 10^{10} \)… </p>
<p>However, there is a better solution that makes use of the fact that the original PNG image was encoded in 24-bit RGB colors (8 bits per color channel). And indeed, if we look at the .csv file containing the colors, we can check that the floating-point numbers inside only take the \( 256 \) different values \( \frac{i}{255.0} \) for \( i=0..255\). Therefore, in the TSP solver, we can instead set the color weights to be of the form \(k i\) for \(k \) large and then rescale the values \( \sqrt{j} \) appearing in the displacement cost accordingly. This improves precision significantly. However, when doing this, we are implicitly assuming that all the increments \( \frac{i+1}{255.0} - \frac{i}{255.0} \) are equal which, although mathematically trivial, is unfortunately not true in the .csv file because we are reading values previously computed with 64-bit double precision… In fact these increments usually differ at the 17th digit. This means that <em>two tours that would have the exact same score on the original PNG image will have different scores when computed using the values written in the csv file</em>. And indeed, I have found hundreds of tours at the current optimum which all have different costs w.r.t the CSV file but are all equivalent w.r.t. the PNG image.</p>
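<p>In code, the scaling idea looks roughly like this (a sketch with an illustrative scale factor, assuming the \( \sqrt{L^1} \) + 3·colour edge cost discussed above): only the irrational move term needs rounding, while the colour term becomes an exact integer.</p>
<pre><code>import math

K = 10**7          # illustrative scale factor, not the one actually used

def edge_weight(dx, dy, dr, dg, db):
    """dr, dg, db are integer differences of the 8-bit colour channels,
    so the colour cost 3 * (|dr| + |dg| + |db|) / 255 scales exactly."""
    move = round(255 * K * math.sqrt(abs(dx) + abs(dy)))   # only source of rounding
    color = 3 * K * (abs(dr) + abs(dg) + abs(db))          # exact integer
    return move + color
</code></pre>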
<p>Fortunately, because we are adding up 66000 numbers and because two tours which are equivalent on the PNG image usually differ by at most a few hundred edges, the difference in scores does not manifest itself before the 19th digit and therefore they all have the exact same score when rounded to double precision (which can only store up to 17 significant digits) as is done on the leaderboard. Because of this fact, I would bet that the last two digits of all 4 top scores are exactly equal (up to double precision) with value: 74075.706541690054. Could a Kaggle organizer possibly confirm if this is the case ? And thanks again for this great challenge 😃</p>
<p>Yes, in a sense…</p>
<p>But even then there might be an ambiguity: do you consider the csv floating values to be equivalent to their double representation in 64 bits or to be equal to the exact finite decimal values ? </p>
<ul>
<li>In the first case, the best tour I could find has cost: 74075.7065416900536563…</li>
<li>whereas In the second case, the best tour(which is different) has cost: 74075.7065416900537868…</li>
</ul>
<p>Honestly, I feel like going above double precision does not really make much sense because the statement of the challenge itself becomes ambiguous. And furthermore, all these tours have the exact same cost when computed on a PNG image (i.e. when error rounding from double precision is removed)… Since C-number and Kibuna were the first to find the "real" solution, they certainly deserve their first place 😄</p>
<p>I understand your point of view but it still feels wrong to try to optimize over a precision error… 😀</p>
<p>FYI, with your way of counting, I can get no better than 74075.7065416900537868</p>
<p>Yes. I understand your argument perfectly… </p>
<p>…but I think we should remember why we have been doing all this in the first place: <strong>to help the elf draw the Christmas cards !!!</strong></p>
<p>Therefore, it is the PNG color values that really matter in the end 😁 </p> |
Santa 2022 - The Christmas Card Conundrum | 4th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2022 - The Christmas Card Conundrum
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First, I want to thank kaggle for hosting this annual santa competition. I enjoy it every time.</p>
<p><a href="https://www.kaggle.com/nagiss/4th-place-solution" target="_blank">Here</a> is a notebook that runs my solution in a short amount of time, with 74083 score.</p>
<p>Note that in my explanation, the 0-indexed, y-th pixel from the top and x-th pixel from the left are represented as (y, x). The center pixel is (128, 128).</p>
<p>My solution is to solve the constrained TSP using Concorde and a genetic algorithm, then convert the resulting tour to configurations.<br>
Before I describe the solution method in detail, I will explain the considerations that were necessary to arrive at this solution method.</p>
<p>First, some experiments show the following.</p>
<ul>
<li>The degree of freedom of the arm is high enough. For example, given a random pixel and a random configuration pointing to that pixel, the probability that we can reach the pixel to its right by moving only one of the arms is quite high.</li>
</ul>
<p>From this, as already discussed, it is a valid strategy to first solve the TSP and then construct the configurations to follow that path.</p>
<p>However, it does not seem to work to convert the solution obtained by solving the pure TSP directly into configurations.</p>
<p>I thought that this was largely due to the restriction of the longest arm. For example, initially, the longest arm is straight to the right. Since the left side of the card cannot be printed when the longest arm is pointing to the right, the area with x < 128 (left half) must be painted after the longest arm has been moved up (or down) at least 64 times from the initial state.</p>
<p>In addition, there seemed to be three cases where constraints other than the longest arm prevented the construction of an efficient configuration:</p>
<ol>
<li>trying to paint x < 128 too early<ul>
<li>This is the same as the above example, but actually has a slightly stricter constraint than when only the longest arm is considered.</li></ul></li>
<li>going too far to the right too early<ul>
<li>For example, since the arms are all pointing left or right in the initial configuration, the first move is limited to up or down, and moving to the right is impossible. (The constraint of the longest arm already eliminates the option of left.)<br>
Moving up or down with the shortest arm on the first move allows two moves to the right, however, three moves, i.e., "down, right, right, right" from the initial state, still proves to be impossible.</li></ul></li>
<li>moving along the second pixel from the outer edge<ul>
<li>Perhaps due to an error in the algorithm that resizes the image, the outermost pixel is darker than the others, and the second outermost pixel is lighter than the others, so a pure TSP solution would contain a path along it. Of these, a path such as the second outer "冖" (e.g., (2, 1) → (1, 1) → (1, 2) → … → (1, 254) → (1, 255) → (2, 255)) cannot be used to construct an efficient configuration. (the cost of TSP and the cost of arm movements cannot be matched.)</li></ul></li>
</ol>
<p>Based on the above considerations, I was able to reach the score 74075.7065416901 using the method described below.</p>
<p>First, solve the TSP using a Genetic Algorithm (GA).<br>
At this time, the tours handled were always configurations that could be built. That is, each time a new tour was generated, the following check was performed, and if the result was False, the tour was immediately eliminated.</p>
<pre><code>def is_valid_tour(tour):
    if not simulate_longest_arm(tour):
        return False
    if left_side_pixel_in_beginning_or_end(tour):
        return False
    if far_right_pixel_in_first_or_last_paths(tour):
        return False
    if has_冖(tour):
        return False
    return True
</code></pre>
<p><code>simulate_longest_arm()</code> uses dynamic programming to calculate the longest arm as being either not moving or moving one unit downward when moving down on a tour. (The same applies for directions other than down.)</p>
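<p>A rough Python sketch of such a check (names are illustrative, not the shared code; pixels are taken as (x, y) relative to the image centre and consecutive tour points are assumed to be king moves): propagate the set of feasible 64-link positions along the tour, letting the link either stay or take one step in the direction of the move, while the remaining links (total length 64) must cover the leftover offset.</p>
<pre><code>def perimeter64():
    return {(x, y) for x in range(-64, 65) for y in range(-64, 65)
            if max(abs(x), abs(y)) == 64}

P64 = perimeter64()

def simulate_longest_arm_sketch(tour):
    """tour: list of (x, y) points relative to the image centre, starting at (0, 0)."""
    feasible = {(64, 0)}                  # 64-link position in the initial configuration
    for (px, py), (qx, qy) in zip(tour, tour[1:]):
        dx, dy = qx - px, qy - py
        nxt = set()
        for ax, ay in feasible:
            # the link either stays or takes one step in the direction of the move
            for mx, my in ((0, 0), (dx, 0), (0, dy), (dx, dy)):
                a = (ax + mx, ay + my)
                # the remaining links (32+16+8+4+2+1+1 = 64) must cover the leftover offset
                if a in P64 and max(abs(qx - a[0]), abs(qy - a[1])) <= 64:
                    nxt.add(a)
        if not nxt:
            return False                  # the 64-link cannot follow this tour
        feasible = nxt
    return True
</code></pre>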
<p>The tours forming the first generation of the GA were generated using the following procedure:</p>
<ol>
<li>Create a tour that satisfies the above conditions in some way.</li>
<li>Extract up to about 2400 vertices from the tour and replace them with the TSP solution by Concorde, over and over again. (If the replaced tour does not pass the is_valid_tour() check, it is not replaced.)</li>
</ol>
<p>In step 2, besides extracting the l-th through r-th vertices of the tour, I also used a method of extracting a rectangular region of the card.</p>
<p>After about 100 tours are generated by this procedure, GA is performed.<br>
The edge assembly crossover (EAX) was used as the crossover algorithm for GA. I implemented it based on <a href="https://www.jstage.jst.go.jp/article/tjsai/22/5/22_5_542/_article/" target="_blank">this paper</a>, which is available in the Japanese language, but it is rather old.</p>
<p>It would have been simpler if I could have modified LKH rather than implementing EAX on my own, but the LKH code was too complicated for me.</p>
<p>After a good tour is obtained by GA, it is converted to configurations.</p>
<p>The conversion from tours to configurations uses a beam search as follows:</p>
<pre><code>last_pixel = tour[]
last_configuration_and_cost_candidates = {(, )}
pixel tour[:]:
delta = (pixel.y - last_pixel.y, pixel.x - last_pixel.x)
configuration_and_cost_candidates = {}
last_configuration, last_cost last_configuration_and_cost_candidates:
configuration, cost compute_all_possible_next_configurations(last_configuration, delta):
configuration_and_cost_candidates.add((configuration, cost))
last_configuration_and_cost_candidates = narrow_down(configuration_and_cost_candidates)
last_pixel = pixel
</code></pre>
<p>The beam search is a bit tricky to get it to work in practice; it seems that narrow_down() needs to include as many states of the longest arm as possible to find a good solution, rather than just picking the one with the smallest cost.</p>
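<p>For instance, narrow_down() could keep the cheapest few candidates per position of the longest link rather than a plain top-k by cost. The sketch below is only one possible way to realise that idea (parameters and names are made up; the shared code does this differently, via hashing in a nested loop):</p>
<pre><code>from collections import defaultdict

def narrow_down(candidates, per_arm_state=4, total=200):
    """candidates: iterable of (configuration, cost), where configuration is a
    tuple of link positions and configuration[0] is the longest link.
    Keep the cheapest few candidates for every distinct 64-link position."""
    by_arm = defaultdict(list)
    for config, cost in candidates:
        by_arm[config[0]].append((cost, config))
    kept = []
    for bucket in by_arm.values():
        bucket.sort(key=lambda t: t[0])
        kept.extend(bucket[:per_arm_state])
    kept.sort(key=lambda t: t[0])
    return {(config, cost) for cost, config in kept[:total]}
</code></pre>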
<p>Oh, my explanation may have been somewhat inadequate. By solving TSP with the constraints I wrote above, we can generate tours that can be converted to arm movements without increasing the cost.</p>
<p>Seeing that no one could get a score lower than 74075.7065416901, I am about 85% sure that this is the optimal solution.</p>
<p>Thank you very much. And I apologize for not sending you an email declining the team merge and ignoring you. I only noticed your email after the deadline. I’m glad your team won the gold medal!</p>
<blockquote>
<p>When you started what was your expectation regarding how long it would take to run?</p>
</blockquote>
<p>I expected that if I didn't use C++, I wouldn't be able to get a solution in a reasonable amount of time.</p>
<blockquote>
<p>Your hashing function helps with pruning, but also increases the risk that an existing solution is not found. Can you elaborate on the process you followed to adjust it? How many times did you have to change it?</p>
</blockquote>
<p>Except for minor modifications like adjusting the constants, I think I only changed it once. It was a modification so that the states with different angles of the longest arm would always have different hashes.</p>
<blockquote>
<p>How good was the last version of your hashing function? Did it work for all the viable tours of points you tried it on?</p>
</blockquote>
<p>The shared code will probably work for more than 90% of the viable tours.</p>
<blockquote>
<p>The code you shared is a bit different from the pseudo code you included in your description. Is it because the code it's not the last version, or because the pseudo code just summarizes the key components of the algorithm?</p>
</blockquote>
<p>In the shared code, the equivalent of narrow_down() is done with hash in a nested for loop, but it is not an essential difference. Or perhaps I am not remembering the shared code well and have summarized it incorrectly. The code I shared is the latest version.</p> |
Santa 2022 - The Christmas Card Conundrum | 60th place solution (standard configuration + LKH) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2022 - The Christmas Card Conundrum
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Writing this here to show a contrast between top solutions and 60th solution. Also to encourage participation in these optimization challenges as I couldn't come up with very novel solutions and re-used things that have been posted.</p>
<p>I also published a <a href="https://www.kaggle.com/code/taanieluleksin/modified-standard-configuration-with-lkh/settings" target="_blank">notebook</a> which contains the necessary code to replicate the solution.</p>
<h1>Configuration space <-> Point space</h1>
<p>The main issue for me in getting some kind of better solution, as for everybody else, was to convert the very hard generalized TSP in configuration space (35,184,372,088,832 possible configurations covering 66,049 pixels) into the less hard TSP in point space (66,049 destinations).</p>
<p><a href="https://www.kaggle.com/ambrosm" target="_blank">@ambrosm</a> <a href="https://www.kaggle.com/competitions/santa-2022/discussion/376306" target="_blank">posted a good idea to limit arm's degrees of freedom</a>. Which I used to convert each point from configuration space to point space that could be fed to TSP solver as LKH.</p>
<p>But with great limitations comes a less optimal solution. 😃</p>
<ul>
<li>For instance there is a boundary between each quadrant so moving from quadrant to quadrant is optimal only from the edge of the image.</li>
<li>It doesn't contain the start and end configurations.</li>
</ul>
<p>Since I had limited time at that point, I just modified the standard configuration over small ranges for the start and return trips.</p>
<pre><code>if ((x == 0) and (y >= 0) and (y < 64)):
    # links sum to (0, y), i.e. a point on the positive y-axis
    config = [(64,y),(-32,0),(-16,0),(-8,0),(-4,0),(-2,0),(-1,0),(-1,0)]
    # links sum to (1, y); presumably taken from a separate branch in the full notebook
    config = [(64,y-1),(-32,0),(-16,0),(-8,0),(-4,0),(-2,0),(-1,0),(0,1)]
</code></pre>
<p>To calculate the adjacency matrix I used <code>get_path_to_configuration</code> and the calculation principles from <a href="https://www.kaggle.com/cnumber" target="_blank">@cnumber</a>'s <a href="https://www.kaggle.com/code/cnumber/lower-bound-using-minimum-spanning-tree" target="_blank">lower bound calculation idea</a>.</p>
<p><code>cost = total_cost(get_path_to_configuration(standard_config(x,y),standard_config(nx,ny)),image)</code></p>
<h1>Running LKH</h1>
<p>To run LKH I built the adjacency matrix with the upper half only. I used the simplest of parameters, which performed quite well. See the <a href="https://www.kaggle.com/code/taanieluleksin/modified-standard-configuration-with-lkh/#Write-input-file-for-LKH" target="_blank">implementation</a>. Indexing for the points was also done in a similar manner as in <a href="https://www.kaggle.com/cnumber" target="_blank">@cnumber</a>'s <a href="https://www.kaggle.com/code/cnumber/lower-bound-using-minimum-spanning-tree" target="_blank">lower bound calculation notebook</a>.</p>
<h1>Getting the result</h1>
<p>As the TSP solution is a unicyclic graph, it is possible to "roll" (<code>np.roll</code>) it to the desired start point (a minimal example follows the list below). Now the reverse operations can be done:</p>
<ul>
<li>TSP index to x,y coordinates;</li>
<li>x,y coordinates to configuration using standard configuration;</li>
<li>Add end configuration;</li>
<li>Remove duplicates just in case (I had 1 which improved result).</li>
</ul>
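<p>A tiny illustration of the rolling step (toy data; here the point with index 0 is assumed to encode (0, 0)):</p>
<pre><code>import numpy as np

tour = np.array([17, 42, 3, 0, 8, 25])           # cyclic tour of point indices from LKH
origin_index = int(np.where(tour == 0)[0][0])    # position of the (0, 0) point
tour = np.roll(tour, -origin_index)              # the tour now starts at the origin
print(tour)                                      # [ 0  8 25 17 42  3]
</code></pre>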
<h1>Final words</h1>
<p>Big thanks to the organizers! I really enjoyed this competition and would be interested in a write-up on how this problem was created.</p>
Santa 2022 - The Christmas Card Conundrum | 8th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2022 - The Christmas Card Conundrum
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We, <a href="https://www.kaggle.com/qiaoshiji" target="_blank">@qiaoshiji</a>, <a href="https://www.kaggle.com/vialactea" target="_blank">@vialactea</a>, <a href="https://www.kaggle.com/zyu2017" target="_blank">@zyu2017</a> and <a href="https://www.kaggle.com/runningz" target="_blank">@runningz</a>, thank the Kaggle team for this intriguing problem and all fellow competitors! Our solution is in a rather brutal force fashion and we briefly summarize our intuitions in this thread.</p>
<p>As many other teams did, we first found a good initial TSP tour and then made it configuration-friendly. Our solution can be summarized roughly into four stages:</p>
<ol>
<li>Finding a good initial TSP tour</li>
<li>Manually freeze a few edges</li>
<li>Optimizing TSP with our choice of edges</li>
<li>Convert TSP tour to solution</li>
</ol>
<h3>TSP without constraints</h3>
<p>Initially we were not quite sure whether it would be fruitful to start with a TSP tour without considering configurations. However, as <a href="https://www.kaggle.com/cnumber" target="_blank">@cnumber</a> pointed out in this <a href="https://www.kaggle.com/competitions/santa-2022/discussion/370129" target="_blank">thread</a>, the lower bound from the minimum spanning tree is approximately 72599, and it is relatively loose, as also mentioned by <a href="https://www.kaggle.com/elvenmonk" target="_blank">@elvenmonk</a> <a href="https://www.kaggle.com/competitions/santa-2022/discussion/376079#2086550" target="_blank">here</a>. We were vaguely speculating that the top scores on the LB started with a TSP tour.</p>
<p>We ran LKH for a few days and it gave us 74077 solutions. We also ran a few sub-problems with Cplex, hoping it would give us a better solution. Although Cplex wasn't able to solve the problem because of its size, it was able to move the lower bound very slowly, which ended up tighter than the one obtained by the spanning tree. We were convinced that we were probably on the right track given the gap between the lower and upper bounds.</p>
<p>We spent quite a lot of computation on LKH; the best solution was around 74075.3. Our secret sauce for a better TSP tour is <a href="https://pubsonline.informs.org/doi/10.1287/ijoc.1120.0506" target="_blank">GA-EAX</a>, which is also used in the <a href="https://www.kaggle.com/competitions/santa-2022/discussion/379167" target="_blank">1st</a> & <a href="https://www.kaggle.com/competitions/santa-2022/discussion/379080" target="_blank">4th</a> place solutions. GA-EAX eventually gave us a <strong>74074.95</strong> tour.</p>
<h3>Delve into the restrictions</h3>
<p>We suspected that one could convert a TSP tour to a solution for free once the first link had successfully reached <code>(64, ±64)</code>, because there is plenty of freedom. To validate our hypothesis, we began with a tour going straight up/down, i.e. starting with<br>
<code>(0, 0), (0, 1), (0, 2), ... (0, 64)</code><br>
and ending with <br>
<code>(0, -64), (0, -63), (0, -63) ... (0, 0)</code>.</p>
<p>We had success with this tour at a small extra cost. This proof-of-concept experiment helped us identify further restrictions. (Our method of converting a tour to a solution is deferred to the last section.)</p>
<ol>
<li>Do not connect any two of (-127, -127), (-127, 127), (127, -127), (127, 127) with a straight line. (quoted from <a href="https://www.kaggle.com/cnumber" target="_blank">@cnumber</a> and <a href="https://www.kaggle.com/kibuna" target="_blank">@kibuna</a>'s <a href="https://www.kaggle.com/competitions/santa-2022/discussion/379167" target="_blank">solution</a>)</li>
<li>Do not travel all four quadrants in a short span. (We later found this restriction but it was an easy fix)</li>
</ol>
<h3>Manual choice of starting and ending subtour</h3>
<p>Our best tour (blue tour in the background) turns left too early. We therefore manually perturbed a few edges (for example, one is circled in red). This results in two fixed sub-paths, the one in orange leaving the origin and the one in purple arriving at the origin. These fixed paths mostly follow our best solution, with a few perturbations, but end up in good regions which allow the first link to successfully reach <code>(64, ±64)</code>.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1423374%2Ff1c90121535974f821984e0c2893d6b7%2Fcenter.jpg?generation=1674165323873258&alt=media" alt=""></p>
<p>To get these two paths, we locally optimized what to perturb within a 65x65 sub-image in the center. We eventually froze these paths and re-ran LKH and GA-EAX to get our final tour.</p>
<h3>Tour to solution</h3>
<p>We formulated the tour-to-solution problem as an Integer Program (IP), to meet the arm-movement constraints, and solved it with Cplex. It is impossible to solve the entire tour at once due to the size of the configuration space, so we divided the tour into smaller pieces of length 200. Sometimes it is also infeasible to solve a 200-point piece, say the interval [K, K+200), because of a bad initial configuration at K. In that case we gradually increase the interval, i.e. [K-100, K+200), [K-200, K+200), …, until it is solved.</p>
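<p>Schematically, the piecewise conversion loop can be sketched as below, where solve_piece stands in for the Cplex model (assumed to return the configurations for tour[start+1:end] given the configuration at tour[start], or None if infeasible); names and window sizes are illustrative, not the team's code.</p>
<pre><code>def tour_to_solution(tour, solve_piece, initial_configuration, piece=200, back_step=100):
    configs = [initial_configuration]               # configuration for tour[0] == (0, 0)
    k = 0
    while k < len(tour) - 1:
        end = min(k + piece, len(tour))
        start = k
        piece_configs = solve_piece(tour, start, end, configs[start])
        while piece_configs is None and start > 0:
            start = max(0, start - back_step)       # widen the interval backwards
            piece_configs = solve_piece(tour, start, end, configs[start])
        configs[start + 1:] = piece_configs         # overwrite the re-solved part
        k = end - 1
    return configs
</code></pre>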
<h3>Other things we tried or planned to</h3>
<ul>
<li>We also considered the possibility of tours with repeated points rather than a Hamiltonian one. (didn't improve).</li>
</ul>
<p>Thank you for your kind words!! My vocabulary may limit my expression of gratitude, but I sincerely appreciate working with you guys. It is also my first teaming competition and must be my best Kaggle experience so far. I will see you guys around in future competitions!</p>
<p>Thank you, Vincent!</p>
<p>Thank you, Hassan!</p> |
Santa 2023 - The Polytope Permutation Puzzle | 12th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2023 - The Polytope Permutation Puzzle
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><blockquote>
<p>Why do this perform better than the vanilla, is it because it generates more variability among different beams?</p>
</blockquote>
<p>My understanding is that it achieves a similar effect to having a well-thought-out evaluation function.</p>
<p>Beam search can be seen as an extension of Dynamic Programming (DP). (See <a href="https://hakomof.hatenablog.com/entry/2018/12/06/000000" target="_blank">this article</a> for a detailed explanation, available in Japanese but understandable through machine translation.)</p>
<p>When viewed as a type of DP, the approach can be described as follows:</p>
<p><strong>standard beam search</strong></p>
<ul>
<li>Criterion for state equivalence: depth</li>
<li>Transition of dp[i]: only to dp[i+1]</li>
</ul>
<p><strong>our variant</strong></p>
<ul>
<li>Criterion for state equivalence: match count</li>
<li>Transition of dp[i]: unlimited (dp[i+1], dp[i+2], …)</li>
</ul>
<p>We preferred equating states by match count over incorporating both match count and move number into the standard beam search's evaluation function, aiming for simplicity and effectiveness.</p>
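<p>In pseudocode, the variant can be sketched as follows (schematic only; match_count, apply_moves and the candidate move sequences are placeholders, not the actual implementation): buckets are indexed by match count rather than depth, the stored cost is the number of moves used so far, and a single candidate sequence may jump several buckets ahead.</p>
<pre><code>from heapq import nsmallest

def beam_by_match_count(start_state, candidate_moves, apply_moves, match_count,
                        beam_width, target_matches):
    buckets = {match_count(start_state): [(0, start_state, [])]}   # cost = moves so far
    for m in range(target_matches + 1):
        for cost, state, path in nsmallest(beam_width, buckets.get(m, []),
                                           key=lambda t: t[0]):
            if m == target_matches:
                return path                          # cheapest path reaching full match
            for moves in candidate_moves:
                nstate = apply_moves(state, moves)
                nm = match_count(nstate)
                if nm > m:                           # only transitions that gain matches
                    buckets.setdefault(nm, []).append(
                        (cost + len(moves), nstate, path + list(moves)))
    return None
</code></pre>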
<blockquote>
<p>Where can I read more about it?</p>
</blockquote>
<p>This beam search variant is common in heuristic contests (like AtCoder's AHC or TopCoder's Marathon Match), but I am not aware of any dedicated literature on the subject. Reviewing past contest write-ups may be helpful.</p>
<p>result: <a href="https://gist.github.com/sash2104/fc2628f2ce6770e92c827e799a9726d1#file-95484-csv" target="_blank">95484.csv</a></p>
<p>First of all, thanks for organizing the competition. This is my third consecutive year participating, and I've enjoyed this problem the most.<br>
I'll mainly discuss the large-size Cube problem, as it took up most of our time.</p>
<h2>Overview of the Approach</h2>
<p>For all wreaths and small-size cubes, solutions were obtained either through bidirectional BFS or simple beam search.</p>
<p>For all globes and large-size cubes, the following steps were taken:</p>
<ol>
<li>Solve the special parts. <ul>
<li>The main objective is to resolve parity.</li></ul></li>
<li>Beam search with moves that preserve parity as candidate moves.<ul>
<li>The foundational strategy involved using 3-rot commutators as candidate moves, and for further improvement, the pool of candidate moves was expanded.</li></ul></li>
</ol>
<p>The solutions obtained were further optimized using public notebooks such as <a href="https://www.kaggle.com/code/shitovvladimir/optimize-any-solution-with-group-theory-approach" target="_blank">here</a> and <a href="https://www.kaggle.com/code/glazed/humble-hillclimber" target="_blank">here</a>, and when applicable, the best solutions from public notebooks were adopted.<br>
We are grateful for the valuable insights gained from utilizing these public notebooks.</p>
<h2>Scores</h2>
<p>(Per-puzzle scores are given in the result file linked above.)</p>
<h2>Terminology</h2>
<ul>
<li>facelet: The smallest unit of state. (e.g. There are 6534 facelets in cube_33/33/33)</li>
<li>cubie: An individual small cube in a Cube puzzle. (e.g. In cube_33/33/33, there are 5766 center cubies, 372 edge cubies, and 8 corner cubies)</li>
</ul>
<h2>Large-size Cube Solution</h2>
<ol>
<li><p>Solve the special parts.</p>
<ul>
<li>Use the moves that appear an odd number of times in the sample_submission.csv to resolve parity.<ul>
<li>Since executing the same move an even number of times does not change parity, arranging only moves that appear an odd number of times can always resolve parity.</li>
<li>The sequence of moves obtained is manually fine-tuned to reduce the number of moves further (e.g., <code>r1.d1</code> is removed because it does not change parity).</li></ul></li>
<li>For cubes with an odd N, also align the positions of the central six cubies.</li></ul>
<p>Examples: For id=283, use <code>f0.f16.f2.f4.f5.f6.f11.f12.f13.f14</code>; for id=257, use <code>f3.r3.f1.f2</code>.</p></li>
<li><p>Beam search with moves that preserve parity as candidate moves.</p>
<ul>
<li>The beam width varies with N, ranging from 1 to 100.</li>
<li>The evaluation metric is the number of moves; the number of cubies that match the solution state plays the role of the turn number (see the beam-search variant at the end of this post).</li></ul>
<p>The candidate moves used are as follows:</p>
<ul>
<li>Moves that do not disturb parity in two steps.<ul>
<li>Example for N=33: <code>r1.-d31</code></li></ul></li>
<li>All combinations of 3-rot.<ul>
<li>After enumerating all 8-move commutators of 3-rot (e.g. <code>R=-d1.-r0.d0.r0.d1.-r0.-d0.r0</code>), use breadth-first search to find the remaining moves in the form of <code>A.R.-A</code>.</li></ul></li>
<li>Commutators of 4 to 8 moves that alter a small number of facelets (see the sketch after this list).<ul>
<li>Examples for N=33: <code>f1.r1.-f1.-r1</code>, <code>-d32.-r31.d22.r31.d32.-d22</code></li>
<li>Adjusted based on N. For N=5, all 4, 6, and 8 moves that alter up to 24 facelets; for N=33, all 4 and 6 moves that alter up to 12 facelets.</li></ul></li>
<li>Commutators of rotations of two corners and rotations of two central edges.<ul>
<li>Example of corner rotation moves for N=33: <code>r0.f0.-r0.-f0.r0.f0.-r0.-f0.r32.f0.r0.-f0.-r0.f0.r0.-f0.-r0.-r32</code></li>
<li>Example of edge rotation moves for N=33: <code>-r16.d0.r16.-d0.-r16.d0.d0.r16.-d32.-r16.-d0.-d0.r16.d0.-r16.-d0.r16.d32</code></li></ul></li>
<li>Commutators composed of the above.<ul>
<li>Compositing multiple commutators can reduce the number of moves.</li>
<li>Example: <code>-r1.r2.d0.f1.-f2.-d0.r1.-r2.d0.-f1.f2.-d0</code> is a composition of four 3-rot such as <code>-r1.d0.f1.-d0.r1.d0.-f1.-d0</code> and <code>r2.d0.-f2.-d0.-r2.d0.f2.-d0</code>.</li></ul></li></ul></li>
</ol>
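<p>A minimal illustrative sketch (not the exact code used) of how a candidate sequence can be filtered by how many facelets it actually moves, using permutations in the competition's array form:</p>
<pre><code># Illustrative only: permutations in "array form" (new[i] = old[perm[i]]).
def compose(p, q):                       # single permutation for "apply p, then q"
    return [p[i] for i in q]

def inverse(p):
    inv = [0] * len(p)
    for i, x in enumerate(p):
        inv[x] = i
    return inv

def commutator(a, b):                    # permutation of the sequence a.b.-a.-b
    return compose(compose(compose(a, b), inverse(a)), inverse(b))

def support(p):                          # how many facelets the permutation moves
    return sum(1 for i, x in enumerate(p) if i != x)

# Toy example: the supports of a and b overlap in a single position,
# so their commutator moves only 3 facelets (a 3-cycle).
a = [1, 2, 0, 3, 4, 5]
b = [0, 1, 3, 4, 2, 5]
print(support(commutator(a, b)))         # -> 3
</code></pre>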
<h2>Globe Solution</h2>
<ol>
<li><p>Solve the special parts.</p>
<ul>
<li>Roughly align positions using only <code>r{i}</code> moves.</li>
<li>Most cases were solved without explicitly resolving parity; for unsolvable cases, a few random moves were added before reattempting.</li></ul></li>
<li><p>Beam search with moves that preserve parity as candidate moves.</p>
<ul>
<li>The beam width varies with N, ranging from 1 to 10.</li>
<li>The evaluation metric is the number of moves, with the number of facelets that match the solution state playing the role of the turn number.</li></ul>
<p>The candidate moves used are as follows:</p>
<ul>
<li>All combinations of 3-rot. (e.g. <code>r0.f0.r1.f0.-r1.-f0.-r0.-f0</code>) </li>
<li>Commutators of 4 to 12 moves that alter a small number of facelets. (e.g. <code>r1.f3.f4.-f3.-r1.-f4</code>)</li></ul></li>
</ol>
<p>In standard beam search, one typically maintains a priority queue of nodes, denoted as <code>nodes := priority_queue<Node></code>. However, as an alternative, we maintain a vector of priority queues, denoted as <code>nodes_per_match := vector<priority_queue<Node>></code>.<br>
The value function simply counts the number of moves, and for each <code>nodes_per_match[t]</code>, it keeps the best k nodes with the least number of moves.</p>
<p>For instance, let's say there is a node with a match count of t after n moves. If a candidate action that requires 8 additional moves and increases the match count by 3 is selected, then a new node with a value of n+8 is added to <code>nodes_per_match[t+3]</code> .</p> |
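<p>A minimal illustrative sketch of this match-count-bucketed beam search (simplified; <code>apply_moves</code> and <code>count_matches</code> are assumed caller-supplied helpers, and details differ from the actual implementation):</p>
<pre><code>import heapq

def bucketed_beam_search(start, candidate_moves, total_pieces, beam_width,
                         apply_moves, count_matches):
    """Nodes are bucketed by match count instead of by depth; each bucket keeps
    only the `beam_width` nodes with the fewest moves."""
    nodes = [[] for _ in range(total_pieces + 1)]   # nodes[t]: (num_moves, state, path)
    t0 = count_matches(start)
    nodes[t0].append((0, start, []))
    for t in range(t0, total_pieces):
        best = heapq.nsmallest(beam_width, nodes[t], key=lambda n: n[0])
        for num_moves, state, path in best:
            for seq in candidate_moves:             # seq: a list of elementary moves
                new_state = apply_moves(state, seq)
                new_t = count_matches(new_state)
                if new_t > t:                       # keep only match-increasing transitions
                    nodes[new_t].append((num_moves + len(seq), new_state, path + seq))
    finished = nodes[total_pieces]
    return min(finished, key=lambda n: n[0])[2] if finished else None
</code></pre>
<p>Because every transition lands in a strictly higher bucket, cheap states (few moves) naturally compete within each match count, which is the DP-like effect described above.</p>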
Santa 2023 - The Polytope Permutation Puzzle | 16th Place Solution for Santa 2023 - The Polytope Permutation Puzzle Competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2023 - The Polytope Permutation Puzzle
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>And we actually did have a strictly better solution for exactly 1 puzzle. For puzzle #105 we have a 21-move solution and the best solution from teams 1, 2 & 4 was a 23-move solution.</p>
<p>I will publish it - I just haven't got there yet. Thanks for pulling together the best submissions. I'm not expecting that we've got a winning solution for any of the puzzles (except for the optimal ones) - the leaders had half our score overall.</p>
<p><a href="https://www.kaggle.com/solverworld" target="_blank">@solverworld</a> - I've attached our final submission to the OP.</p>
<p>Glad it was helpful for you. It's nice to have a little collaboration alongside the competition, even if it helps some people/teams beat us 😉.</p>
<p>Updated with full details of the globes from globe_1/8 to globe_6/8.</p>
<p>Updated with details of how we solved the smaller puzzles.</p>
<h1>Context</h1>
<ul>
<li>Business context: <a href="https://www.kaggle.com/competitions/santa-2023/overview" target="_blank">https://www.kaggle.com/competitions/santa-2023/overview</a></li>
<li>Data context: <a href="https://www.kaggle.com/competitions/santa-2023/data" target="_blank">https://www.kaggle.com/competitions/santa-2023/data</a></li>
</ul>
<h1>Overview of the approach</h1>
<p>We used bidirectional breadth-first search to optimally solve the small puzzles. The backbone of the larger puzzles was solved with commutators and conjugates, but with several special features (detailed below) that made a big difference.</p>
<h1>Details of the submission</h1>
<h2>Solving small puzzles optimally</h2>
<ul>
<li>We solved the <code>cube_2/2/2</code>, <code>wreath_6/6</code> and <code>wreath_7/7</code> puzzles optimally using simple breadth-first-search (BFS).</li>
<li>To extend to <code>wreath_12/12</code>, we implemented bi-directional BFS (BBFS). As well as enabling us to reach the optimal solution for <code>wreath_12/12</code> where we couldn't before (because we ran out of RAM), BBFS was vastly faster than vanilla BFS in finding the solutions and became a standard part of our toolbox for solving parts of other puzzle types (a minimal sketch follows this list). </li>
<li>To extend to <code>wreath_21/21</code>, we made optimizations to compress the state representation and ran on a box with <em>lots</em> of RAM.</li>
</ul>
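<p>A minimal illustrative sketch of the meet-in-the-middle idea (simplified; it returns at the first meeting state, states are assumed hashable, and <code>neighbors(state)</code> is an assumed helper yielding <code>(move, next_state)</code> pairs):</p>
<pre><code>from collections import deque

def bidirectional_bfs(start, goal, neighbors):
    """Search from both ends; stop when the frontiers meet."""
    if start == goal:
        return [], []
    front = {start: []}                   # state -> moves from the start side
    back = {goal: []}                     # state -> moves from the goal side
    q_front, q_back = deque([start]), deque([goal])
    while q_front and q_back:
        # expand one whole layer of the smaller frontier
        if len(q_front) <= len(q_back):
            queue, seen, other, from_start = q_front, front, back, True
        else:
            queue, seen, other, from_start = q_back, back, front, False
        for _ in range(len(queue)):
            state = queue.popleft()
            for move, nxt in neighbors(state):
                if nxt in seen:
                    continue
                seen[nxt] = seen[state] + [move]
                if nxt in other:          # the two searches meet
                    half_s = seen[nxt] if from_start else other[nxt]
                    half_g = other[nxt] if from_start else seen[nxt]
                    return half_s, half_g # caller inverts and reverses half_g
                queue.append(nxt)
    return None
</code></pre>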
<h2>Medium globes</h2>
<p>Even the smallest globes weren't solvable by BBFS. What we did here was to solve the "first" N cells using BBFS and then iteratively expand. We did this by masking out the other cells - setting them all to a single placeholder symbol (<code>.</code>) - which means that many unique states collapse down to a single pseudo-state and the puzzle is therefore much smaller. Once we'd solved the first N cells, we'd extend to N+M cells. We called this Iterated BBFS (IBBFS). It no longer gives optimal solutions, but it gives pretty good ones. We then tweaked it a little further so that N and M were set adaptively - i.e. if BBFS found the solution in very few moves, we'd unwind and make it solve a bigger chunk at once. If BBFS was taking too long to solve the next M cells, we'd abort and reduce M. We called this Adaptive IBBFS (AIBBFS). AIBBFS was our best-performing method for globes 1/8, 2/6, 3/4, 6/4 and 6/8.</p>
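<p>A small illustrative sketch of the masking step, assuming a state is a simple sequence of symbols:</p>
<pre><code>def masked(state, keep_indices):
    """Collapse every cell outside `keep_indices` to a single wildcard symbol,
    so many raw states map onto one pseudo-state and the search space shrinks."""
    keep = set(keep_indices)
    return tuple(c if i in keep else '.' for i, c in enumerate(state))

# e.g. while solving only the first 3 cells of a toy state:
state = ('A', 'B', 'A', 'C', 'B', 'C')
print(masked(state, range(3)))   # ('A', 'B', 'A', '.', '.', '.')
</code></pre>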
<h2>Large globes</h2>
<p>Solving the larger globes proceeded in two phases. The first phase tried "random" moves to solve as many cells as possible. The second phase used commutators and conjugates (in Rubik's cube parlance) to solve the remainder without upsetting what was already solved.</p>
<h3>Phase 1</h3>
<p>Phase 1 simply used BFS, up to a fixed depth, to find the set of moves that solved the maximum number of cells. Then we did <em>just the first of</em> that set of moves and repeated the process. Taking a single step before doing the next iteration of BFS (rather than doing the full set of moves found) improved the efficiency of this phase and markedly improved the total number of cells solvable by this phase. Given that this phase had an efficiency in the region of 1 move/cell and phase 2 is more like 6 moves/cell, getting more cells solved in this phase was a big win.</p>
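<p>A minimal illustrative sketch of the "commit only the first move" loop, with <code>best_sequence</code> (the depth-limited BFS) and <code>apply_move</code> as assumed helpers:</p>
<pre><code>def phase1(state, best_sequence, apply_move):
    """Greedy loop: search (depth-limited BFS) for the sequence that solves the
    most cells, but commit only its first move before searching again."""
    path = []
    while True:
        seq = best_sequence(state)   # returns [] when no sequence improves the state
        if not seq:
            return state, path
        state = apply_move(state, seq[0])
        path.append(seq[0])
</code></pre>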
<h3>Phase 2</h3>
<p>We used BBFS to find a large set of short two-pair commutators - i.e. a small number of moves that swapped cells <code>{A,B}</code> and <code>{C,D}</code> but left everything else where it was. These commutators were short (8 or 10 moves) but pretty limited in the pairs they could swap. Then we extended them using conjugates - i.e. perform any set of moves as a preamble to bring cells of interest into positions <code>A, B, C & D</code> that were covered by a commutator, then use the commutator, then perform the inverse of the preamble.</p>
<p>We built a large (10s of millions of entries) database of pairs that could be swapped in this way. Then, at each step, we searched for the database entry that gave us the best bang-for-buck improvement to the current state (i.e. number of additional cells solved / number of moves taken to solve) and applied that.</p>
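<p>A minimal illustrative sketch of the bang-for-buck selection, assuming each database entry is stored as a (move list, array-form permutation) pair:</p>
<pre><code>def solved_cells(state, goal):
    return sum(a == b for a, b in zip(state, goal))

def pick_best_entry(entries, state, goal):
    """entries: list of (move_list, perm) pairs, perm in array form
    (new[i] = old[perm[i]]). Pick the best (cells gained) / (moves used)."""
    base = solved_cells(state, goal)
    def bang_for_buck(entry):
        moves, perm = entry
        new_state = [state[i] for i in perm]
        return (solved_cells(new_state, goal) - base) / len(moves)
    return max(entries, key=bang_for_buck)
</code></pre>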
<p>(Also worth mentioning something that worked nearly as well and ran much faster. There are some commutators that swap two north-south pairs and other commutators that swap a north-north pair and a south-south pair. For much of the competition we first got all the pieces into the correct hemisphere and then did the within-hemisphere swaps.)</p>
<p>In one of the other write-ups, it sounds like there are also 3-cell commutators available for the globe puzzles. I had wondered about adding these too, but I was out of time / focusing on helping my teammates solve the largest cubes.</p>
<h2>Cubes</h2>
<p>It was primarily my teammates who worked on the cubes, so I have less detail here. But the outline is a lot like the larger globes…</p>
<ol>
<li>Find a prefix that solves as many cells as possible.</li>
<li>Use an orbit solver that uses commutators to solve each of the 24-cell orbits independently.</li>
<li>For puzzles with wildcards, solve the knapsack problem.</li>
<li>Assemble the solutions for each orbit using a travelling salesman solver.</li>
</ol>
<p>The last couple of phases are different to anything discussed so far.</p>
<h3>Phase 3 - Knapsack solving</h3>
<p>Given that phase 2 independently solves each orbit and given that some solutions have wildcards, it may be possible to leave some orbits unsolved. But which orbits should we do that for? This is the traditional <a href="https://en.wikipedia.org/wiki/Knapsack_problem" target="_blank">knapsack problem</a>. The value of omitting an orbit is the number of moves taken to solve that orbit. The cost (or "weight") of omitting an orbit is the number of cells that will be left in an incorrect state if we don't solve the orbit.</p>
<p>This mostly applied to puzzle #277 which had 176 wildcards (8% of the state). Since it had limited applicability to other puzzles and we had limited time, we just used the straightforward approach of greedily omitting orbits based on their <code>value/weight</code> (subject to the "weight" not exceeding the number of wildcards available). There are ways of solving the knapsack problem that get better solutions, but this is often a reasonable starting point.</p>
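<p>A minimal illustrative sketch of the greedy value/weight selection, assuming each orbit is summarised as (moves saved, cells left wrong):</p>
<pre><code>def choose_orbits_to_skip(orbits, wildcards):
    """Greedy knapsack: skip the orbits with the best value/weight ratio
    while the total number of wrong cells stays within the wildcard budget."""
    order = sorted(range(len(orbits)),
                   key=lambda i: orbits[i][0] / max(orbits[i][1], 1),
                   reverse=True)
    skipped, used = [], 0
    for i in order:
        value, weight = orbits[i]
        if used + weight <= wildcards:
            skipped.append(i)
            used += weight
    return skipped
</code></pre>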
<h3>Phase 4 - Travelling salesman</h3>
<p>Phase 2 solves the orbits independently. Because of that, it's possible that there are moves that cancel out at the end of one orbit solution and the beginning of the next. If we re-order the orbits, we can seek to maximize the total number of moves that cancel out. We noticed that this can be framed as an instance of the (asymmetric) travelling salesman problem. First compute, for each ordered pair of orbits <code>[A, B]</code>, the number of moves that can be cancelled ('C') when solving <code>B</code> immediately after <code>A</code>. The (directed) distance between <code>A</code> and <code>B</code> is just <code>-C</code>. Pass to a <a href="http://akira.ruc.dk/~keld/research/LKH-3/" target="_blank">travelling salesman solver</a> (which I used for <a href="https://www.kaggle.com/competitions/santa-2022" target="_blank">Santa 2022</a>) to minimize the distance and therefore maximize the amount of cancellation possible.</p>
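<p>A minimal illustrative sketch of the seam-cancellation count and the negated distance matrix that would be fed to the TSP solver (move names are assumed to use the competition's <code>-</code> prefix for inverses):</p>
<pre><code>def cancelled_moves(a, b):
    """Number of moves that vanish at the seam when b directly follows a
    (each cancelling pair 'x' / '-x' removes two moves)."""
    k = 0
    while k < min(len(a), len(b)) and (a[-1 - k] == '-' + b[k] or b[k] == '-' + a[-1 - k]):
        k += 1
    return 2 * k

def distance_matrix(solutions):
    """Asymmetric TSP distances: negated cancellation, so minimising total
    distance maximises the total number of cancelled moves."""
    n = len(solutions)
    return [[0 if i == j else -cancelled_moves(solutions[i], solutions[j])
             for j in range(n)] for i in range(n)]
</code></pre>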
<p>We should also have used this method for the larger globes but, again, we were out of time.</p>
<h1>Sources</h1>
<p>TODO</p>
<h1>Appendix A: The soft stuff</h1>
<ul>
<li><strong>Thanks</strong> to Kaggle and to Ryan in particular for setting this competition and dealing so graciously & efficiently with the little wrinkles.</li>
<li>Why <strong>Always Day Zero</strong>? This year I invited a couple of my colleagues to join a team with me. The company that we work for has a culture that it's "always day one" (which you can read more about <a href="https://aws.amazon.com/executive-insights/content/how-amazon-defines-and-operationalizes-a-day-1-culture/" target="_blank">here</a> if you aren't familiar with the term). I've previously competed in internal capture-the-flag security competitions where obviously "day zero" refers to an exploit that hasn't been patched yet. So, as a little joke, I competed in those competitions under the team name "Always Day Zero" and I've stuck with it ever since.</li>
<li>I'm really pleased that so many of you benefitted from my <strong>progress updates</strong> and were able to use them to focus your attentions on the puzzles where you were furthest away. Thanks for the encouragements and interesting discussions in the chat. Personally, I find it really adds to the experience.</li>
</ul>
<h1>Appendix B: Final scores by puzzle type</h1>
<table>
<thead>
<tr>
<th>cube size</th>
<th>total moves</th>
<th>wreath size</th>
<th>total moves</th>
<th>globe size</th>
<th>total moves</th>
</tr>
</thead>
<tbody>
<tr>
<td>2</td>
<td>315 (†)</td>
<td>6</td>
<td>150 (†)</td>
<td>1/8</td>
<td>1,104</td>
</tr>
<tr>
<td>3</td>
<td>2,821</td>
<td>7</td>
<td>128 (†)</td>
<td>1/16</td>
<td>1,434</td>
</tr>
<tr>
<td>4</td>
<td>4,953</td>
<td>12</td>
<td>173 (†)</td>
<td>2/6</td>
<td>181</td>
</tr>
<tr>
<td>5</td>
<td>5,336</td>
<td>21</td>
<td>176 (†)</td>
<td>3/4</td>
<td>680</td>
</tr>
<tr>
<td>6</td>
<td>3,825</td>
<td>33</td>
<td>383</td>
<td>6/4</td>
<td>444</td>
</tr>
<tr>
<td>7</td>
<td>2,185</td>
<td>100</td>
<td>642</td>
<td>6/8</td>
<td>2,129</td>
</tr>
<tr>
<td>8</td>
<td>2,930</td>
<td></td>
<td></td>
<td>6/10</td>
<td>2,673</td>
</tr>
<tr>
<td>9</td>
<td>3,810</td>
<td></td>
<td></td>
<td>3/33</td>
<td>11,853</td>
</tr>
<tr>
<td>10</td>
<td>4,829</td>
<td></td>
<td></td>
<td>8/25</td>
<td>4,038</td>
</tr>
<tr>
<td>19</td>
<td>14,637</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>33</td>
<td>41,078</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
<p>† We know we have an optimal solution.</p>
<h1>Appendix C: Score progression</h1>
<p>Here's a graph of our score progression over the competition.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2217262%2Fc6e9b03d54d9ad6a4d2f13d09e0b7583%2Fhistory.png?generation=1706777568273724&alt=media"></p>
<h1>Appendix D: Final submission</h1>
<p>See the attachment for our final submission.</p>
<p>Runtime wasn't really the problem - but 30 minutes sounds about right. It was the memory usage I really had to work on.</p>
<p>We got upgraded to 16th. Don't know how that happened, but still silver medal, so it doesn't really make a difference either way.</p> |
Santa 2023 - The Polytope Permutation Puzzle | 1st Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2023 - The Polytope Permutation Puzzle
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We solved 3x3x3 first simply because it was the first strategy that came to mind and we did not see any significant disadvantages. One of the advantages of solving the 3x3x3 first is that we can insert 3-rots even within the sequence of moves to align the 3x3x3. This increases the degrees of freedom, especially for small cubes. In addition, finding the exact optimal solution for the 3x3x3 can be quite time-consuming, so we reused precomputed solutions when making improvements to the program and re-running it. When 3x3x3 is solved later, face moves can change the 3x3x3 problem to be solved, requiring recalculation each time.</p>
<p>Due to the significantly limited range of available moves, we believe that even with the 4x4x4 cube, we are likely not achieving the optimal solution.</p>
<blockquote>
<p>Do you have human readable version of found sequences corresponding to each 3 elements? I think it can be crucial for future development of algorithms to solve large rubik's cubes.</p>
</blockquote>
<p>You can find here: <a href="https://github.com/wata-orz/santa2023_permutation_puzzle/blob/main/solution/src/rotate_all.txt" target="_blank">https://github.com/wata-orz/santa2023_permutation_puzzle/blob/main/solution/src/rotate_all.txt</a><br>
The ids 0-23 are assigned in lexicographic order of (y,x), one is fixed at 0 and other cases are calculated from symmetry.</p>
<blockquote>
<p>Also have you considered applying same bidirectional search to entire 24 element clusters in any intermediate state after restoring parity (making them even), or is it not feasible?</p>
</blockquote>
<p>While it might be possible to find the optimal solution for a single cluster by focusing solely on it, we believe that solving multiple clusters simultaneously is likely to result in more cancellations and potentially a shorter overall solution. </p>
<blockquote>
<p>Should any configuration of a cluster with even number of swaps have diameter of 14 (same as diameter of 2x2x2 cube) or with requirement to keep remaining cube unchanged this does not stand?</p>
</blockquote>
<p>The maximum of 14 moves is simply the shortest number of moves to rotate a certain 3 pieces in a same cluster, and we do not know the maximum number of moves required to completely solve the cluster.</p>
<blockquote>
<p>Is this because they occupy same space on 2 adjacent faces?</p>
</blockquote>
<p>Yes. Unlike other pieces on the edge, we have to consider the operation of exchanging two adjacent colors.</p>
<blockquote>
<p>So basically you can solve those special parts as regular 3x3x3 cube? (or 2x2x2 for even sizes)</p>
</blockquote>
<p>Yes</p>
<p>We simply terminated when the remaining number of color mismatches was below the specified number of wildcards.</p>
<p>Solving 10x10x10 cube (272) in 454 steps<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F36720%2F98429ee0f21875cfa4638910db8a2b59%2F272.gif?generation=1706750566377979&alt=media" alt="Solving 10x10x10 cube (272) in 454 steps"></p>
<p>Among the three types of puzzles, the wreath puzzle was not particularly focused on because a very short solution could be found using simple beam search, and it had a lower total score compared to the others, making it a lower priority. Therefore, we concentrate on the solutions for the remaining two types, the cube and the globe puzzles. The strategies for solving both puzzles are based on the same principle.</p>
<p>Due to each move affecting a large number of pieces at once, simply applying moves as is can lead to a somewhat organized state, but fully solving the puzzle becomes extremely challenging. Thus, the key is to find sequences of moves that only swap a few pieces while leaving the rest unchanged. Remarkably, for both puzzles, there exist sequences of moves that only rotate three pieces and leave the rest unchanged, referred to as 3-rot.</p>
<ul>
<li>Example for the cube puzzle: d3.f2.d2.-f2.-d3.f2.-d2.-f2</li>
<li>Example for the globe puzzle: f0.r0.f0.r1.f0.-r1.f0.-r0</li>
</ul>
<p>When considering the cluster decomposition of all pieces (a cluster being a set of pieces that can be interchanged through moves), it turns out that for most clusters, except for special parts like the cube's corners, there exists a 3-rot for any three elements within them.</p>
<ul>
<li>For the cube puzzle: Corners, centers of faces, and centers of edges are exceptions. For other clusters, taking symmetry into account, it suffices to consider the diagonal parts of a 4x4x4 (1,1), the cross parts of a 5x5x5 (1,2), the non-diagonal and non-cross parts of a 6x6x6 (1,2), and the edge parts of a 4x4x4 (0,1). For each, a bidirectional search was performed to enumerate all shortest 3-rot for any three elements, with the longest being 14 moves.</li>
<li>For the globe puzzle: The central row when the number of rows is odd is an exception. For other clusters, with the number of rows as 2 and the number of columns as 2c, R=f0.r0.f0.r1.f0.-r1.f0.-r0 corresponds to a 3-rot of ((0,0) (1,c) (1,c-1)). A 3-rot for any three pieces can be found by first finding a sequence of moves A that moves those three pieces to a state where R can be applied (where other pieces can move freely), using breadth-first search, and then the 3-rot can be obtained by A.R.-A (see the sketch after this list).</li>
</ul>
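<p>A minimal illustrative sketch of the A.R.-A construction at the move-sequence level (the setup sequence <code>A</code> below is a hypothetical example, not one taken from the actual solution):</p>
<pre><code>def invert_move(m):
    return m[1:] if m.startswith('-') else '-' + m

def conjugate(A, R):
    """Build the sequence A.R.-A: do the setup A, the base 3-rot R, then undo A.
    Only the three pieces that A carries into R's slots end up rotated."""
    return A + R + [invert_move(m) for m in reversed(A)]

# toy usage with the base globe 3-rot quoted above; A is a hypothetical setup
R = "f0.r0.f0.r1.f0.-r1.f0.-r0".split('.')
A = ["r1", "-r0"]
print('.'.join(conjugate(A, R)))
</code></pre>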
<p>When solving each cluster using 3-rot, it is important to note that all 3-rot are even permutations, so the overall permutation must also be even. Combining these insights, we can use the following approach:</p>
<ol>
<li>Solve the special parts.</li>
<li>Operate without disrupting the solved parts so that all remaining clusters become even permutations.</li>
<li>Solve each cluster independently using 3-rot.</li>
</ol>
<p>To achieve shorter solutions, we employ the following key ideas:</p>
<ul>
<li>Since 3-rot requires at least 8 moves and is lengthy, it's more efficient to bring the puzzle to a somewhat solved state using elementary moves or short sequences before employing 3-rot for the final touches.</li>
<li>When adding a new sequence B to an existing sequence A, canceling out the end of A with the beginning of B can shorten the overall sequence. For example, if A=A'.ri and B=-ri.B', then A.B becomes A'.ri.-ri.B'=A'.B', thus saving 2 moves (see the sketch after this list).</li>
<li>If the current sequence is A=a[0]…a[T-1], instead of appending a 3-rot B for (i j k) at the end to form a[0]…a[T-1].B, at arbitrary time t, inserting a 3-rot B' for some (i' j' k') to achieve a[0]…a[t-1].B'.a[t]…a[T-1] results in the same state. Selecting the appropriate time t can make B' shorter than B or result in more cancellations. Therefore, rather than constructing the sequence from the front, it is better to try the insertions for all times and select the best time.</li>
</ul>
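<p>A minimal illustrative sketch of the cancellation at the seam between two sequences, assuming the competition's <code>-</code> prefix for inverse moves:</p>
<pre><code>def invert_move(m):
    return m[1:] if m.startswith('-') else '-' + m

def append_with_cancellation(seq_a, seq_b):
    """Append seq_b after seq_a, cancelling inverse pairs at the seam
    (e.g. [..., 'ri'] followed by ['-ri', ...] drops both moves)."""
    a, b = list(seq_a), list(seq_b)
    while a and b and a[-1] == invert_move(b[0]):
        a.pop()
        b.pop(0)
    return a + b

print(append_with_cancellation(['f0', 'r1'], ['-r1', 'd0']))   # ['f0', 'd0']
</code></pre>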
<p>Based on these ideas, we use the following approach:</p>
<ol>
<li>Solve the special parts.</li>
<li>Use elementary moves to make all clusters even while roughly aligning them.</li>
<li>Bring to a somewhat solved state using short sequences.</li>
<li>Insert 3-rots at arbitrary times.</li>
</ol>
<p>The details of each part and the score table can be found in our repository: <a href="https://github.com/wata-orz/santa2023_permutation_puzzle" target="_blank">https://github.com/wata-orz/santa2023_permutation_puzzle</a></p> |
Santa 2023 - The Polytope Permutation Puzzle | 243rd Ranked Solution for the Santa 2023 - The Polytope Permutation Puzzle Challenge | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2023 - The Polytope Permutation Puzzle
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Thank You</h1>
<p>I thank all the competition participants, peer solution writers, community members and the supporting team of the platform for hosting another great competition for learning and implementation. </p>
<h1>Overview</h1>
<p>The objective of this competition is to solve cube-like puzzles in the fewest moves; instead of the usual cubes, the puzzles come in a variety of geometric shapes.</p>
<h2>The Data</h2>
<p>1) puzzle_info.csv<br>
puzzle_type - Identifies the type of puzzle; puzzles of the same type share a common set of moves.<br>
allowed_moves - Describes the moves allowed in solutions of this puzzle type. Each move is a permutation given in array form. The inverses of these moves may also be used in solutions.</p>
<p>2) puzzles.csv<br>
id - A unique identifier for each puzzle.<br>
puzzle_type - The puzzle type, matching an entry in puzzle_info.csv.<br>
solution_state - An arrangement of "colors" describing the solved state of the puzzle, delimited by semicolons (;).<br>
initial_state - An arrangement of colors describing the initial state of the puzzle, delimited by semicolons (;). A solution must transform the initial state into the solved state through a sequential application of the puzzle's allowed_moves.<br>
num_wildcards - The number of "mistakes" allowed in the final state of a solution.</p>
<p>3) sample_submission.csv - A submission file in the correct format.<br>
id - Corresponds to the id column in puzzles.csv.<br>
moves - An initial, unoptimized solution.</p>
<h1>The NxNxN Translator Solution Model</h1>
<p>The main objective is to solve the problem using as few moves as possible. We use the following reference for understanding this technique :- <a href="https://www.kaggle.com/code/seanbearden/solve-all-nxnxn-cubes-w-traditional-solution-state" target="_blank">https://www.kaggle.com/code/seanbearden/solve-all-nxnxn-cubes-w-traditional-solution-state</a>.</p>
<p>We walk through all the steps of writing the solution from initiation to making the submission file for a detailed understanding.</p>
<h2>First Part - Cloning from Git</h2>
<p>The first part is where we develop a potential solution by cloning an existing version from Git -</p>
<p>'<br>
%%bash<br>
git clone <a href="https://github.com/dwalton76/rubiks-cube-NxNxN-solver.git" target="_blank">https://github.com/dwalton76/rubiks-cube-NxNxN-solver.git</a><br>
cd rubiks-cube-NxNxN-solver<br>
make init<br>
'</p>
<p>'<br>
%%bash<br>
cd ..<br>
git clone <a href="https://github.com/dwalton76/kociemba.git" target="_blank">https://github.com/dwalton76/kociemba.git</a><br>
cd kociemba/kociemba/ckociemba/<br>
make<br>
sudo make install<br>
'</p>
<h2>Second Part - Completing the Essentials</h2>
<p>The next part completes the essentials before devising the solution, starting with importing the libraries -</p>
<p>'<br>
import gc</p>
<p>import os</p>
<p>os.chdir('rubiks-cube-NxNxN-solver')<br>
print("Current Working Directory: ", os.getcwd())</p>
<p>%%writefile 'script.py'</p>
<p>import time<br>
import math<br>
from tqdm import tqdm<br>
import pandas as pd<br>
import numpy as np<br>
import matplotlib.pyplot as plt<br>
import os, subprocess<br>
import json<br>
'</p>
<p>We then start picking the essential files necessary for our main code -</p>
<p>'<br>
p = '/kaggle/input/santa-2023/'<br>
puzzles = pd.read_csv(p + 'puzzles.csv', index_col='id')<br>
puzzle_info = pd.read_csv(p + 'puzzle_info.csv', index_col='puzzle_type')<br>
submission = pd.read_csv('/kaggle/input/so23-collective-benchmarks/submission.csv', index_col='id')<br>
'</p>
<h2>Third Part - The Main Solution Program</h2>
<p>Now we move on to devising the core algorithm of the program solution for the NxNxN system.</p>
<p>We define a state2ubl function that converts the semicolon-delimited puzzle state into the face-string format expected by the external solver. The code for the same is below :-</p>
<p>'<br>
def state2ubl(state):<br>
state_split = state.split(';')<br>
dim = int(np.sqrt(len(state_split) // 6))<br>
dim_2 = dim**2<br>
s = ''.join([U_dict[f] for f in state_split])<br>
return s[:dim_2] + s[2*dim_2:3*dim_2] + s[dim_2:2*dim_2] + s[5*dim_2:] + s[4*dim_2:5*dim_2] + s[3*dim_2:4*dim_2]<br>
'</p>
<p>The next step is to define a move translation function to construct the moves and translation vectors. The code for the same is below :-</p>
<p>'<br>
def move_translation(dim):<br>
M = {}<br>
M["U"] = f'-d{dim-1}'<br>
M["R"] = "r0"<br>
M["B"] = f"-f{dim-1}"<br>
M["F"] = "f0"<br>
M["L"] = f"-r{dim-1}"<br>
M["D"] = "d0"</p>
<pre><code>    # ... (continuation of move_translation, abridged)
    # - for larger dims, additional entries of M are defined for wider/slice
    #   moves, built in a loop up to width_max by prepending/appending layer
    #   moves to the base entries
    # - a final pass over M derives combined / inverse-notation variants
    return M
</code></pre>
<p>allowed_moves = {}</p>
<p>for puzzle_type, row in puzzle_info.iterrows():<br>
allowed_moves[puzzle_type] = eval(row['allowed_moves'])</p>
<p>def move_state(state, move, moves_pool):<br>
if '-' in move:<br>
move = move[1:]<br>
rev = True<br>
else:<br>
rev = False<br>
move = moves_pool[move]<br>
if rev:<br>
new_state = state[:]<br>
for i in range(len(move)):<br>
new_state[move[i]] = state[i]<br>
state = new_state<br>
else:<br>
state = [state[idx] for idx in move]<br>
return state<br>
'</p>
<p>The code continues: we define the extra "manipulation" move sequences to be applied to the state vectors and construct the output solution states used to generate the submission file. The code for the same is below :-</p>
<p>'<br>
outputs = {}<br>
for id, row in puzzles.iterrows():<br>
if row['puzzle_type'][:4] == 'cube':<br>
dim = int(row['puzzle_type'].split('/')[-1])<br>
if id not in range(125, 150):<br>
continue<br>
moves = eval(puzzle_info.loc[row['puzzle_type'], 'allowed_moves'])<br>
for move in list(moves):<br>
moves['-'+move] = np.argsort(moves[move]).tolist()<br>
M = move_translation(dim)<br>
init_state = row['initial_state']<br>
sol_state = row['solution_state']</p>
<pre><code>        # ... (continuation of the loop, abridged)
        # - non-standard solution states: build a reference colouring, count the
        #   differing facelets and report them
        # - standard states: convert with state2ubl and run the external
        #   rubiks-cube-NxNxN-solver via subprocess, then parse its output
        # - translate each solver move back to the puzzle's notation with M,
        #   append whole-cube "manipulation" sequences if needed, and verify the
        #   final state against the target
        # - keep the translated solution only if it is shorter than the current
        #   submission entry, then write submission.csv (AssertionError is caught
        #   and reported)
</code></pre>
<p>'</p>
<p>Finally, we define an SCOptimizer function to further shorten the solution paths; the code for the same is below :-</p>
<p>'<br>
def SCOptimizer(pidr, depthr, verbose=False, verbose2=False):<br>
for x in tqdm(pidr): <br>
og_benchmark = len(paths['moves'][x].split('.'))<br>
if verbose: print('Starting *** Pid', x, "Moves", og_benchmark)<br>
for depth in depthr: <br>
pid, pt, moves, allowed, initial_state, solution, wild = paths[['id','puzzle_type','moves','allowed_moves','initial_state','solution_state','num_wildcards']].values[x]<br>
moves = moves.split('.')<br>
benchmark = len(moves)<br>
state = initial_state.split(';')<br>
allowed = {k: Permutation(v) for k, v in eval(allowed).items()}<br>
best_moves = []<br>
solution, num_wrong = validState(moves, state, allowed, solution.split(';'))<br>
ogsolution = solution<br>
best_end_state = solution<br>
best_total_moves = []<br>
best_end_moves = []<br>
print("Pid",pid, "Depth", depth, 'Wilds', wild, 'Moves', len(moves))<br>
best_for_depth = 0<br>
moves_to_states = []<br>
for i in range(len(moves)): <br>
moves_to_states.append(';'.join(oState(moves[i], state, allowed)))<br>
state = moves_to_states[-1].split(';')<br>
mc = [k for k in allowed] + ['-'+k for k in allowed] <br>
while len(moves) > depth:<br>
solution = best_end_state<br>
best_round_end_moves = []<br>
dstates = getDepth(best_end_state, allowed, depth)<br>
# added -- Start: check from initial stage here<br>
if initial_state in dstates:<br>
best_moves = []<br>
moves = []<br>
best_end_state = initial_state<br>
else: #added -- END<br>
for i in range(len(moves)):<br>
state = moves_to_states[i]<br>
if state in dstates:<br>
best_moves = moves[:i+1]<br>
moves = moves[:i+1]<br>
best_end_state = state<br>
break</p>
<pre><code>            # ... (continuation of SCOptimizer, abridged)
            # - walk forward from the best matching state, trying each allowed
            #   move in mc until a state in the depth-limited set (dstates) is
            #   reached, accumulating best_end_moves / best_total_moves
            # - if the rebuilt move list (best_moves + best_total_moves) is
            #   shorter than the benchmark, store it back into paths['moves']
            # - report the per-puzzle improvement and call gc.collect()
</code></pre>
<p># -----------<br>
pids = [336,337]<br>
SCOptimizer(pids, range(1,2), False, True) <br>
score = paths.moves.map(lambda x: len(x.split('.'))).sum()<br>
print(score, score - baseline)<br>
# -----------<br>
'</p>
<h1>Future Scope</h1>
<p>The prescribed solution uses permutation-vector manipulation together with an NxNxN-to-standard-cube translation method. We could add a reinforcement learning agent to make the method more adaptable to changes in the data, or add gradient-based optimization algorithms to handle edge cases better. Further changes could be alterations to the NxNxN technique itself.</p>
Santa 2023 - The Polytope Permutation Puzzle | 25th place solution + thought process🥈 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2023 - The Polytope Permutation Puzzle
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This post outlines our approach which got us 25th</p>
<p>First of all, thanks to <a href="https://www.kaggle.com/codicon" target="_blank">@codicon</a>, <a href="https://www.kaggle.com/timothygao" target="_blank">@timothygao</a>, <a href="https://www.kaggle.com/alvaroborras" target="_blank">@alvaroborras</a> and <a href="https://www.kaggle.com/marksix" target="_blank">@marksix</a> for teaming up and contributing ideas throughout the competition, and Kaggle for organizing.</p>
<p><a href="https://www.kaggle.com/code/yeoyunsianggeremie/santa-2023-25th-place-moves-distribution/notebook" target="_blank"><strong>Notebook: Our final scores by puzzle type</strong></a></p>
<h2><strong>Cube</strong></h2>
<p>Initially, we utilised public repos, DWalton for edges and RCube for centers. Parity trick was used for AB cubes. These alone got us 23k for 281 and 282, and a 614k 3rd place submission 1 week into the comp. However, it is unable to solve N-type cubes</p>
<p>N-type cubes are more challenging. We first tried to solve it like a normal cube, by recoloring (i.e. for cube 4/4/4, N1 to N16 are labelled as A, N17 to N32 are labelled as B, etc). After solving, we notice that the edges of the cubes are completely solved, only the centers are left. This brings us to the idea of using <strong>cube commutators</strong>.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5256933%2F29eef6a5efea2fd0a13b0b53b4bbccd3%2FScreenshot%202024-02-01%20174550.png?generation=1706780860616174&alt=media"></p>
<p>For example, let’s take those squares underlined in blue. There’s 4 of them in each face. In total 24 in the entire cube. Same goes for the squares underlined in green. <strong>For the centers only, the cube can be reduced into solving many sub-problems of 24 squares.</strong></p>
<p>We first tried to solve the sub-problems using bidirectional BFS, but that ended up too slow and not feasible, and hence we switched to beam search. Beam Search helped us reduce puzzle 283 from 108k to 27k moves, and also slightly improved the other cubes.</p>
<p>Final solution: Use the DWalton solver to solve the edges (and the centermost square for odd N), then perform beam search on the commutators to solve the centers. </p>
<p><strong>Total score for all cubes ~ 132k</strong></p>
<h2><strong>Globe</strong></h2>
<p>Initial Constructive Solution</p>
<ol>
<li>Split the N/M globes into sub-globes of size 2/M. Then solve each 2/M using (2) and (3).</li>
<li>Move top pieces in the bottom to the top and vice versa. You can move 1 piece from the bottom to the top and one from the top to the bottom via f0.r0.f0 for example. </li>
<li>Cyclically rotate the top by 1 if the parities of inversions for the top and bottom don't match up. Next, remove inversions via the last commutator in the resource, which swaps a pair in both the top and bottom.</li>
<li>Finally, merge solutions by using r0 and -r0 to make all the fx moves f0 moves (fx = x r0’s + f0 + x -r0’s). <br>
This improved 3/33 (32k -> 13k each). </li>
</ol>
<p>To further improve the large globes, we implemented a modified version of the Minkwitz algorithm. The paper for the original Minkwitz algorithm can be found <a href="https://core.ac.uk/download/pdf/82526418.pdf" target="_blank">here</a>, and terms from that paper will be in “quotes”. </p>
<p>The key differences are as follows:</p>
<ol>
<li>For a vastly faster “Improve” function with comparable results, only choose a single “j”. We chose this j randomly via a weighted distribution where 0..n-1 had weights n..1 respectively (see the sketch after this list).</li>
<li>Iterate over “i” in reverse for the “Fill Orbits” function so that words found can be used for subsequent i (this only helped a bit).</li>
<li>For the starting word “t”, pick a random word length from 1 to max_start_word_length and then find a random word of that length. The starting words would be too short to be maximally useful otherwise, and repeat words can be useful.</li>
</ol>
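<p>A minimal illustrative sketch of the weighted choice of "j" and of the random starting words (function and variable names are illustrative, not taken from the actual implementation):</p>
<pre><code>import random

def pick_j(n):
    """Choose base index j with weights n, n-1, ..., 1 for j = 0 .. n-1,
    biasing the single Improve step toward earlier base points."""
    return random.choices(range(n), weights=range(n, 0, -1), k=1)[0]

def random_start_word(generator_names, max_len):
    """Random starting word of random length 1..max_len over the generators."""
    length = random.randint(1, max_len)
    return [random.choice(generator_names) for _ in range(length)]
</code></pre>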
<p>At first, we tried to do a greedy “base” selection, but this led to short term gains at the cost of exponentially increasing “word” lengths for later “table” entries. Instead, the best base was the elements when iterating column by column and bottom to top in each column (we tried a zigzag alternating top to bottom and bottom to top, but this is worse probably due to less order).</p>
<p>Implementation wise, we used C++ for maximum speed. Furthermore, we implemented permutation operations like inverse in place to avoid memory allocation (3x speedup).</p>
<p>Table filling was done until the table appeared to converge as follows. Let the notation for hyperparameters be: (rounds, improvement rounds (“s”), max_start_word_length, new Improve or the paper’s Improve). The reason we switch to the paper’s Improve later is because it tries more combinations of existing short words. This is necessary since it becomes increasingly difficult to find new short words.</p>
<p>3/33:<br>
(1e9, 1e6, 4, fast Improve) (2 hours)<br>
(1e8, 1e6, 32, paper’s Improve) (3 hours)<br>
(1e8, 1e6, 8, paper’s Improve) (2 hours)</p>
<p>8/25:<br>
(4e8, 1e6, 4, fast Improve) (2 hours)<br>
(3e7, 1e6, 32, fast Improve) (20 minutes) (realized it was time to switch to paper’s Improve)<br>
(1e8, 1e6, 32, paper’s Improve) (4 hours)<br>
(1e8, 1e6, 8, paper’s Improve) (2 hours)</p>
<p>All other globes:<br>
(1e9, 1e5, 8, fast Improve) (< 2 hours)</p>
<p>After filling the tables, a solution can be quickly determined via factorization, so we can again use randomization. We applied 1-16 random initial moves for 8/25 and 3/33 and 1-8 for the rest before factorization. We used ~1e7 runs per puzzle, greatly reducing moves (e.g. ~70% for 3/4 and ~20% for 8/25 and 3/33).</p>
<p>This algorithm solves each 3/33 in ~1700 moves, and each 8/25 in ~2500 moves.</p>
<p><strong>Total score for all globes ~ 26k</strong></p>
<h2><strong>Wreath</strong></h2>
<p>We found the <a href="https://www.kaggle.com/code/glazed/humble-hillclimber" target="_blank">public hillclimbing notebook</a> from <a href="https://www.kaggle.com/glazed" target="_blank">@glazed</a> was extremely useful. It improved almost all of our wreaths, and got the wreath 100/100 down to 2500 moves after many runs, with some code modifications.</p>
<p><strong>Total score for all wreaths ~ 3.6k</strong></p>
<p>Updated with a detailed description of our globe algorithm</p> |
Santa 2023 - The Polytope Permutation Puzzle | 2nd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2023 - The Polytope Permutation Puzzle
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Here is our (team PreferredSantaClaus) solution. The repository is <a href="https://github.com/hogloid/kaggle_pfsanta_public/tree/main" target="_blank">here</a>.</p>
<h1>Cube & Globe</h1>
<p>The key ideas behind our solution for cube and globe puzzles are:</p>
<ul>
<li>Using a sequence of moves which permutes only a small number (like 3) of pieces (we called such sequences <strong>combo</strong>), and</li>
<li>Gradually inserting moves to (possibly the middle of) the solution sequence to reduce the number of mismatches.</li>
</ul>
<p>For the first part, we ran a series of exhaustive searches on sequences up to length 8. Here we show some examples of combos:</p>
<ul>
<li>cube (<code>cube_9/9/9</code>)<ul>
<li><code>f4.-d2.-f0.d2.-f4.-d2.f0.d2</code> (length 8; permuting 3 pieces)</li></ul></li>
<li>globe (<code>globe_1/8</code>)<ul>
<li><code>f0.r1.-r0.f0.-r1.r0</code> (length 6; permuting 4 pieces)</li>
<li><code>f0.r1.f0.-r1.f0.-r0.f0.r0</code> (length 8; permuting 3 pieces)</li></ul></li>
</ul>
<p>For the second part, suppose we apply two moves p, q (which themselves can be a composition of moves) to a piece ordering a, denoted by q(p(a)).<br>
Here, inserting a move f between p and q to make the final piece ordering q(f(p(a))) is equivalent to adding a move f' = q . f . q^-1 (here . denotes the function composition), which is a "conjugate" of f, to the end of the move sequence.<br>
If f affects only a small number of pieces (just like the sequences we found in the first part do) and we precompute q, f' can be quickly computed.<br>
Thus, for a sequence of moves m1 . m2 . … . mn, if we compute mn, m(n-1) . mn, …, and their inverses one by one, we can simulate inserting move(s) to any position in the sequence very efficiently.</p>
<p>The benefit for inserting a move in the middle of the sequence is twofold:</p>
<ul>
<li>We obtain a much larger number of combos available.</li>
<li>We can expect that some moves cancels out to produce shorter sequences (e.g. "f0.-f0" can be just removed).</li>
</ul>
<h2>The pipeline for cube puzzles</h2>
<p>Here is the overall pipeline for solving cube puzzles:</p>
<ol>
<li>Change the color of edge pieces based on its original color and the color of the piece which is attached to it in real Rubik's cubes.</li>
<li>Starting with the empty sequence, reduce the number of mismatches by repeatedly inserting a move or a combo to any position of the sequence.</li>
<li>Fix the position of the very center pieces (odd-sized cubes only) and the parity mismatches.</li>
<li>For each 24/48 pieces problem (detailed later), make all the pieces matched by applying combos (again, we allow inserting combos in the middle of the sequence).</li>
<li>Fix the position of corner pieces to produce the final solution sequence.</li>
</ol>
<p>Optimization in step 2 and 4 is done by the greedy search or the beam search. </p>
<h3>Center-piece alignment & Parity resolution</h3>
<p>For odd-sized cubes, we have to fix the position of the very center pieces. This can be done by applying center-row rotations and we searched for the optimal rotation sequence (if possible, it is desirable that positions of pieces except for the very center pieces are not changed) by BFS.</p>
<p>We found some "parity" issues which cannot be resolved by applying combos.</p>
<p>Let N be the size of the cube. For i in 2, 3, …, floor(N/2), consider all the edge pieces in row / column i and N+1-i (i=1 is omitted as it corresponds to corner pieces). Here we consider "3D" pieces (a piece and the one attached to it are considered as one) rather than "pieces" in this problem and we ignore the orientation of 3D pieces (more precisely, the orientation of a piece when placed to a slot can be uniquely determined). Then surface rotations are even permutations on these pieces, while rotations in row / column i and N+1-i are odd permutations (other rotations do not affect these pieces). Also all combos are even permutations. Therefore, if the permutation of the 3D pieces is odd, we have to add one row / column i and N+1-i rotation.</p>
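<p>A minimal illustrative sketch of the parity computation used in this kind of argument (parity of an array-form permutation via its cycle count):</p>
<pre><code>def parity(perm):
    """Permutation parity: 0 = even, 1 = odd (n minus number of cycles, mod 2)."""
    n = len(perm)
    seen = [False] * n
    cycles = 0
    for i in range(n):
        if not seen[i]:
            cycles += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
    return (n - cycles) % 2

print(parity([1, 0, 2, 3]))   # 1: a single swap is odd
print(parity([1, 2, 0, 3]))   # 0: a 3-cycle is even
</code></pre>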
<p>A similar rule also applies to the middle pieces in edges (odd-sized cubes only). Here surface rotations as well as center-row rotations are odd permutations. Also we encountered another parity problem which has not been explained. We empirically found that this parity problem can be fixed by trying combinations of 2 surface rotations. To test whether this parity is resolved, we just ran the optimization process in Step 4 and checked whether the number of mismatches in edge-middle pieces became 0.</p>
<p>Clearly, addition of moves for parity resolution may increase the number of mismatches. To mitigate this problem, we chose the position of insertion and the move to insert so that the increased number of mismatches is minimized.</p>
<h3>24 / 48 pieces problem</h3>
<p>Let (x,y) denote the piece in the y-th row and x-th column in a cube face (the orientation does not matter). Then pieces in (a,b), (b,a), (N+1-a,b), (N+1-b,a), (a,N+1-b), (b,N+1-a), (N+1-a,N+1-b), (N+1-b,N+1-a) can move around only in these 24 / 48 positions. Therefore, if we use combos which affects these pieces only, we can solve the puzzle independently for different (a,b). For a fixed (a,b), we ran a greedy / beam search to reduce the number of mismatches to 0 (just like in Step 2) allowing using such combos only.</p>
<h3>Corner pieces</h3>
<p>So far we have completed the puzzle except for corner pieces and now we can treat the remaining puzzle as 3x3x3 one. However, for even-sized cubes, there remains a parity issue which does not happen in 3x3x3 cubes and they should be treated as 4x4x4 ones.</p>
<p>We used following solvers for this part:</p>
<ul>
<li><a href="https://github.com/dwalton76/rubiks-cube-NxNxN-solver" target="_blank">https://github.com/dwalton76/rubiks-cube-NxNxN-solver</a></li>
<li><a href="https://github.com/cs0x7f/TPR-4x4x4-Solver" target="_blank">https://github.com/cs0x7f/TPR-4x4x4-Solver</a></li>
</ul>
<h2>The pipeline for globe puzzles</h2>
<p>The pipeline for globe puzzles is similar to that for cube, but differs in details.</p>
<ol>
<li>Starting with the empty sequence, reduce the number of mismatches by repeatedly inserting a move or a combo to any position of the sequence.</li>
<li>Fix parity mismatches.<ul>
<li>Let us consider pieces affected by "r0" and "r(k-1)", where k is the number of rows. Then the moves "r0" and "r(k-1)" are odd permutations on these pieces. Also, moves "fi" (hemisphere rotation) are even (if the number of pieces in each row is a multiple of 4) or odd (otherwise). Again, all combos are even permutations. Thus we have to fix the parity mismatches as in cube puzzles.</li></ul></li>
<li>Make all the pieces matched by applying combos. Here we did not split the problems into smaller ones (like 24 / 48 pieces problem in cube).</li>
<li>For difficult instances (specifically, 391 and 395; globe_3/33 and globe_33/3 with all pieces differently colored), enumerate all 3-piece rotations and apply them to make all pieces matched.<ul>
<li>We note that there are combos which affects only on 3 pieces. By repeatedly taking their conjugate (f -> g . f . g^-1), we obtain combos for rotating any 3 pieces in "r0" and "r(k-1)". We can use these combos to make all pieces matched, as the parity issues have been resolved so far.</li></ul></li>
</ol>
<h1>Wreath</h1>
<p>For puzzles smaller than or equal to 21/21, the optimal solution (with no wildcards) can be obtained through a bidirectional search. This entails performing a breadth-first search from both the initial state and the solution state; once they meet at a common state, the sequence of actions from the initial state is connected with the reversed sequence of actions from the solution state. Please note that this process can consume considerable memory, hence the encoding of the state into 128-bit integers is advised.</p>
<p>For puzzles with 8 wildcards, we have devised a specific algorithm. Let's focus on the 100x100 puzzle (the explanation still applies, but it's simpler for a 33x33 puzzle). You may google ‘hungarian puzzle’ to see what the puzzle looks like.</p>
<p>First of all, let's examine the solution state. The left ring consists of 'A' cells, while the right ring consists of 'B' cells. There are 'C' cells at the two intersection points. The distance (total number of cells + 1) between two intersections is 25 in the left ring and 26 in the right ring. Let’s say one of the intersection cells is ‘bottom’, and the other is ‘top’. To simplify, we can recolor 'C' cells as 'B', which will cost at most 4 mismatch errors.</p>
<p>Let’s enumerate the indices of the 'B' cells from 0 to 99 in the clockwise order so that the 0-th cell is in the bottom intersection point, and the 26-th cell is in the top intersection point at first.<br>
A cell in the right ring can be recolored as 'B' by relocating the cell to the bottom intersection point, revolving the left ring, and then returning the cell back to its original position. However, this risks changing the state of the right-ring cell that is currently in the top intersection point.</p>
<p>That said, it's possible to rectify the state of cells in the right ring in the following order of indices: 0, 26, 52, 78, 4, 30, …, 48, 74. After this sequence, only the 0th cell is wrong, because rectifying the 26*k (mod 100)-th cell risks only the 26*(k+1)-th cell, by putting the 26*k-th and 26*(k+1)-th cells in the intersection points when revolving the left ring.</p>
<p>Apply the same rule for 1, 27, 53, 79, 5, 31, …, 49, 75. This process may induce at most 2 mismatch errors for each of the even and odd cells, yielding a total of 8 mismatch errors at the most.</p>
<p>Following this algorithm as is results in too many actions, considering it necessitates 26 * 100 actions for the right ring. To alleviate this, an additional preprocessing step is carried out to correct randomly chosen sets of cells in the right ring in ascending order from 0 to 99. That is, do the following:</p>
<pre><code>toCorrect = random list of 0/1 values of length 100
for i in range(100):
    if toCorrect[i]:
        revolve the left ring so that the bottom intersection point becomes ‘B’
    revolve the right ring once counterclockwise
</code></pre>
<p>This approach requires at most 100 steps for the right ring, and it reduces the number of actions in subsequent processes: for example, after rectifying the state of the 26-th cell, if the 52-th, 78-th and 4-th cells are already ‘B’, only 4 right-ring revolutions are required to get the 30-th cell to the bottom intersection point. Trying about 10**6 random toCorrect sequences gives us a solution of length 681.</p>
Santa 2023 - The Polytope Permutation Puzzle | 3rd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2023 - The Polytope Permutation Puzzle
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We would like to thank the organizers for this annual optimization challenge and very interesting problem.<br>
Our repo is <a href="https://github.com/Lgeu/santa23" target="_blank">here</a>.</p>
<h1>Overview</h1>
<p>We solved every problem with the same strategy, using beam search. For state transitions during beam search, sequences of operations that have few pieces being changed, pre-computed by Depth-First Search (DFS) and/or Breadth-First Search (BFS), were mainly used.</p>
<h1>Cube</h1>
<p>Like many other teams, we reduced the puzzle to normal 3x3 by solving the faces and the edges, and then used a 3x3 solver. We found it difficult to solve the faces and edges simultaneously, so we solved the faces and edges separately with beam search.</p>
<p>For solving the faces, since the diversity of the state transitions during the beam search had a strong effect on the quality of the solution, we finally precomputed all sequences under 8 operations (9 operations for small cubes) with few pieces changed with DFS and pruning. The number of sequences became so enormous that more than 200 GB of memory was used when solving the 33x33 cube. On the other hand, for solving the edges, the number of effective sequences was not that large. In addition to the pre-computed sequences, transitions were added that we call "parallelize" the previous sequence. For example, when the previous transition was the sequence "f0.r2.b0," we tried changing it to "f0.r2.r1.b0" or "f0.r2.r3.b0" etc.</p>
<p>The evaluation function was designed to minimize the number of pieces that were not aligned (Hamming distance). However, there were some pieces that were difficult to align, and the weights of those pieces were increased.</p>
<p>The first implementation of beam search was computationally slow and could not solve 33 cube in a realistic time even when the number of state transitions was not that large. Therefore, by speeding up the process through more efficient calculation of the score during state transitions and through multi-threading (180 threads in the end), we were able to finally achieve both diversity of state transitions and beam width, both necessary for high-quality solutions.</p>
<p>Key points other than beam search</p>
<ul>
<li>Since the parities of the edges must be consistent when reducing a large cube to a 3x3, the parities of the edges were adjusted on a rule basis, separately from the beam search.</li>
<li>Since the optimal solutions in HTM (half-turn metric) and QTM (quarter-turn metric) are often different, we used a <a href="http://kociemba.org/download.htm" target="_blank">QTM optimal solver</a>.</li>
</ul>
<h1>Globe</h1>
<h2>Precomputation of moves</h2>
<p>Initially, sequences of moves used in the state transition during beam search were generated. Depending on the problem size, tens of millions of sequences were used.</p>
<h3>DFS</h3>
<p>Depth-First Search (DFS) was initially used to enumerate the moves. Without loss of generality, the first flip is fixed to f0, and after enumerating the moves, the other moves were obtained by rotating the entire set. Pruning was applied by prohibiting consecutive operations that cancel each other out, and by ensuring an even number of identical flips in the sequence, enabling search up to a depth of 10 in a practical amount of time.</p>
<h3>Duplicate removal</h3>
<p>Since different sequences of operations can result in the same piece permutation, these duplicates need to be removed. In this process, the shortest sequence leading to the same permutation was retained. If the lengths of the sequences were also identical, the one that comes first lexicographically was kept. This made it slightly more efficient to parallelize and cancel out the moves, which is described later.</p>
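<p>A minimal sketch of this deduplication (assuming each candidate is a (move_sequence, resulting_permutation) pair, with the sequence stored as a tuple of move names):</p>
<pre><code>def deduplicate(candidates):
    # Keep one sequence per resulting permutation: the shortest one,
    # breaking ties lexicographically.
    best = {}
    for seq, perm in candidates:
        key = tuple(perm)
        if key not in best or (len(seq), seq) < (len(best[key]), best[key]):
            best[key] = seq
    return best
</code></pre>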
<h3>Augmentation</h3>
<p>The moves obtained through DFS (X) were augmented by sandwiching them between an operation A and its inverse -A (A.X.-A). During this process, the permutations resulting from the sequence were checked by rotating the Globe to determine how many pieces were changed, and those with excessive changes were discarded. This method made it possible to generate sequences of moves longer than 10, which were difficult to explore with DFS alone.</p>
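<p>A sketch of this augmentation step (apply_moves, inverse and the changed-piece threshold are assumed helpers/parameters, not the authors' exact code):</p>
<pre><code>def augment(X, setup_moves, solved, apply_moves, inverse, max_changed):
    out = []
    for A in setup_moves:
        seq = [A] + X + [inverse(A)]            # A . X . -A
        state = apply_moves(solved, seq)
        changed = sum(s != t for s, t in zip(state, solved))
        if changed <= max_changed:              # discard sequences that disturb too many pieces
            out.append(seq)
    return out
</code></pre>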
<h2>Beam Search</h2>
<p>In beam search, the Globe was divided into units of two rows each and solved independently. If the Globe has an odd number of rows, the center one is solved simply by rotation.</p>
<h3>Evaluation function</h3>
<p>The current state and the solution state’s Hamming distance served as the evaluation function, using the aforementioned moves to transition states. States with smaller values of the evaluation function were retained for each number of moves used, and these states were expanded and evaluated in order of fewer moves. During state transitions, if the previous operation and the next operation to be performed were inverse operations, adjustments were made dynamically to reduce the count of moves.</p>
<h3>Combining unit solutions</h3>
<p>After solving each unit, their solutions were combined. Different units’ flips could be performed simultaneously to shorten final solutions similarly to the “parallelization” in Cube, so this is handled in this step. The optimal flips to share were calculated using a method similar to the one used to solve the Longest Common Subsequence problem, employing dynamic programming. When there were more than three units to combine, the same process was applied to two units at a time, combining solutions in sequence. All possible orders of combination were tried, and the shortest solution was chosen as the final solution.</p>
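<p>The shared-flip computation can be pictured as a textbook LCS dynamic program over two units' move lists (this sketch shows only the plain LCS part; the actual merge also has to respect which flips may legally be shared):</p>
<pre><code>def lcs_length(moves_a, moves_b):
    n, m = len(moves_a), len(moves_b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n):
        for j in range(m):
            if moves_a[i] == moves_b[j]:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    return dp[n][m]   # number of flips the two solutions can share
</code></pre>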
<h3>Parity for special Globes</h3>
<p>For the Globe where each piece in the solution has a different color, the parity of the permutation needs to be taken care of to obtain a solution. Before starting the search, the parity of the permutation between the initial state and the solution state was checked, and if it was odd, an operation “r” was performed once to align the parity. Only even-length moves were used during the beam search, preserving the parity.</p>
<h1>Wreath</h1>
<p>For smaller Wreath problems, the best solutions found through exhaustive search were used. Please refer to the public code. <a href="https://www.kaggle.com/code/ruiyaoyang/optimization-problems" target="_blank">https://www.kaggle.com/code/ruiyaoyang/optimization-problems</a></p>
<p>For larger problems (21, 33, 100), simple beam search was employed using unit moves {l, -l, r, -r} for state transitions. To evaluate the states during the beam search, DFS was used to search up to a certain number of moves, keeping the best score at each depth. To increase the depth to explore, the types of moves used for state transitions were limited based on conditions:</p>
<ul>
<li>Prohibit consecutive inverse moves.</li>
<li>When the first intersection of rings contains A: used moves = {l, -l}</li>
<li>When the first intersection of rings contains B: used moves = {r, -r}</li>
<li>When the first intersection of rings contains C: used moves = {l, r}</li>
</ul>
<p>The score function to compute penalties for each piece not matching the solution state is as follows.</p>
<ul>
<li>Penalties for each ring supposed to align with A containing something other than A</li>
<li>Penalties for each ring supposed to align with B containing something other than B</li>
<li>Penalties when something other than C is in the intersection of rings</li>
<li>Penalties when the distance between two Cs differs from the solution state</li>
<li>When there are more than four wildcards, it's acceptable to ignore hard-to-match Cs, so no penalty for C-related errors</li>
</ul>
<p>Particularly for large-scale problems such as wreath_100/100, insufficient depth of exploration led to the score plateauing and hindered further search progress. However, extending the scoring search depth to 50 made it possible to reach solutions.</p> |
Santa 2023 - The Polytope Permutation Puzzle | Bronze medal: using RCube to solve large Santa puzzles | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2023 - The Polytope Permutation Puzzle
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Some time ago I've shared a <a href="https://www.kaggle.com/competitions/santa-2023/discussion/469167" target="_blank">video</a> of solving 65K cube with RCube package. The package is designed to solve huge cubes of arbitrary sizes and its main strength is to optimally solve centers, as the number of corners and edges becomes negligibly small when the dimensions grow. Unfortunately, in Santa the largest cube we had was only 33x33x33, so it was not obvious if it is worth to use the solver designed for thousands layers. </p>
<p>However, after some research I found that the package is still pretty efficient for cube sizes similar to the ones in this competition.</p>
<p>Additionally, I did a few benchmarks and realized that RCube could give a powerful boost in optimizing cubes down to 19 layers on each side. </p>
<p>The main issue, however, was the inconsistency between the competition's definitions and the ones used in the package. It was interesting to see that a few fellow kagglers tried to follow the same direction in public notebooks; however, they were puzzled by RCube's inability to even load a predefined cube state from a file. </p>
<p>To make it work I had to do a few things:</p>
<ul>
<li>write a converter to translate Santa cube states to RCube, as the face and tile numbering conventions were different (a schematic sketch follows this list);</li>
<li>design a way to read in text state files, by default RCube can only deserialize its own objects;</li>
<li>replace optimized RCube solving procedures with the physical moves, as the package tries to avoid moving pieces by changing the coordinate system instead;</li>
<li>develop some logging capabilities to dump moves in Santa definitions.</li>
</ul>
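<p>At its core such a converter is just a fixed re-indexing of stickers plus a move-name translation. A schematic sketch (both mapping tables here are hypothetical placeholders; the real ones have to be derived by comparing the two numbering conventions):</p>
<pre><code>def santa_to_rcube(santa_state, index_map):
    # index_map[i] = position in the Santa state string that feeds sticker i of
    # the RCube layout (one entry per sticker, 6 * N * N in total).
    return [santa_state[index_map[i]] for i in range(len(index_map))]

def rcube_moves_to_santa(moves, move_map):
    # move_map is a (hypothetical) dict such as {"U": "-d0", ...} translating
    # each RCube move into the competition's move notation.
    return [move_map[m] for m in moves]
</code></pre>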
<h1>Results</h1>
<p>After all the above developments I was able to get the following improvements:</p>
<table>
<thead>
<tr>
<th>Puzzle ID</th>
<th>Puzzle type</th>
<th>Sample submission</th>
<th>RCube</th>
</tr>
</thead>
<tbody>
<tr>
<td>277</td>
<td>cube_19/19/19</td>
<td>24,419</td>
<td>9,423</td>
</tr>
<tr>
<td>278</td>
<td>cube_19/19/19</td>
<td>21,455</td>
<td>11,107</td>
</tr>
<tr>
<td>279</td>
<td>cube_19/19/19</td>
<td>32,249</td>
<td>11,065</td>
</tr>
<tr>
<td>280</td>
<td>cube_19/19/19</td>
<td>24,194</td>
<td>11,176</td>
</tr>
<tr>
<td>281</td>
<td>cube_33/33/33</td>
<td>123,431</td>
<td>25,747</td>
</tr>
<tr>
<td>282</td>
<td>cube_33/33/33</td>
<td>139,629</td>
<td>25,621</td>
</tr>
</tbody>
</table>
<p>Puzzle 282 is the ABAB one and luckily had an odd number of layers, so after converting it to a conventional form I simply reverted the original solution to get the conventional initial state, then solved it with RCube too. One thing to note is that the package moves are not optimized for corners and edges, so I got an additional boost after using publicly available optimizers on RCube solutions.</p>
<p>These results may be pretty far from gold, but when combined with the publicly available top solutions they were in the silver zone for a long time, going down to bronze by the end of the competition.</p>
<h1>Credits</h1>
<p>The work presented in this post was greatly inspired by the following notebooks:</p>
<ul>
<li><a href="https://www.kaggle.com/code/jazivxt/using-rcube-solver" target="_blank">Using RCube Solver</a> by <a href="https://www.kaggle.com/jazivxt" target="_blank">@jazivxt</a>;</li>
<li><a href="https://www.kaggle.com/code/nikitababich/hint-on-decoding-some-cubes-50k-decrease" target="_blank">Hint on decoding some cubes (>50k decrease)</a> by <a href="https://www.kaggle.com/nikitababich" target="_blank">@nikitababich</a>;</li>
<li><a href="https://www.kaggle.com/code/shitovvladimir/optimize-any-solution-with-group-theory-approach" target="_blank">Optimize any solution with group theory approach</a> by <a href="https://www.kaggle.com/shitovvladimir" target="_blank">@shitovvladimir</a>.</li>
</ul>
<p>And, of course, thanks to Kaggle for designing and hosting this awesome competition!</p>
<p>Nice tip! Yeah, RCube doesn't care about the edges, because they're nothing for 65K cubes. It would solve a 2x2x2 in a crazy 300 moves, because in this case the whole cube is the edge… I've used DWalton for smaller cubes, but didn't try the hybrid approach as I joined pretty late and, unfortunately, had no time for this kind of experiment. </p> |
Santa 2023 - The Polytope Permutation Puzzle | ML approach for all puzzles | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2023 - The Polytope Permutation Puzzle
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I focused on using a universal ML approach to solve all puzzles. It is a transformer-based model as below. The model is used to predict the next move one step ahead. I solved all Wreath puzzles, all Globe puzzles except the 8_25, and the Cube puzzles from 3 to 6. I could not finish all the puzzles because of the time constraint, but I think it is possible to solve all puzzles.<br>
Here is the <a href="https://www.kaggle.com/code/wuliaokaola/s2023-dev-cube6-2-0127" target="_blank">notebook of cube_6/6/6 (puzzle 256)</a>. The result is 739 moves.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2006644%2Fa5b17ae0eb1354a599fc5336a6a00e1b%2Fs2023.png?generation=1706796789124762&alt=media"></p>
<p>I didn't use any other dataset than random moves for cubes 2 ~ 5. For cube 6 I was trying to add some moves created by CFOP formulas.</p>
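<p>A minimal sketch of what training on random moves can look like: scramble the solved state and let the model learn to predict the move that undoes the last scramble step (the helper names and the labeling scheme are assumptions, not taken from the notebook):</p>
<pre><code>import random

def make_examples(solved, moves, inverse, apply_move, n_examples, max_len=30):
    # moves: list of move names; inverse: dict move -> inverse move;
    # apply_move(state, move) returns the new state.
    data = []
    for _ in range(n_examples):
        state = list(solved)
        path = []
        for _ in range(random.randint(1, max_len)):
            mv = random.choice(moves)
            state = apply_move(state, mv)
            path.append(mv)
        # label = the move that undoes the last scramble step
        data.append((tuple(state), inverse[path[-1]]))
    return data
</code></pre>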
<p>For cube 6, it can be trained in about 2 hours with TPU. And one more hour to predict the moves. </p> |
Santa 2023 - The Polytope Permutation Puzzle | Silver Medal Solution (15th) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2023 - The Polytope Permutation Puzzle
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for watching, I added our best sub to the repo. It's the best way to prove that we are not cheating. <br>
<a href="https://github.com/k-harada/santa2023/tree/main/submissions" target="_blank">https://github.com/k-harada/santa2023/tree/main/submissions</a></p>
<p>(My repo is not enough to reproduce it, because of the post process and parallel runs)</p>
<p>This year too, we won a silver medal in the top group. You may want to read the already published <a href="https://www.kaggle.com/competitions/santa-2023/Discussion/472405" target="_blank">#1 solution</a> and solutions from other winners. <br>
But please allow me to make my <a href="https://github.com/k-harada/santa2023" target="_blank">code</a> public and write the solution, to prove that we are not cheating. </p>
<h3>Cubes:</h3>
<p>We can solve regular patterns by using the <a href="https://github.com/dwalton76/rubiks-cube-NxNxN-solver" target="_blank">nxnxn solver</a>, but the way of counting moves is different between this solver and the competition, so if you use it as is, it will be less efficient. For example, in the worst case, 16Uw2 is 1 move for the solver (and for humans), but it is counted as 32 moves in this competition. </p>
<p>We first aligned the edges and corners by repeatedly using the solver on a 5x5x5 problem. The approach to extracting 3x3x3 subcubes with corners and centers is shown in the first solution. By repeating 5x5x5, we can solve all the diagonal parts ((1,1) of 4x4x4), the cross parts ((1,2) of 5x5x5), and the edge parts ((0,1) of 4x4x4). <br>
All that is left to do is align the inside ((1,2) of 6x6x6). We did greedy search to find efficient next 3-rots. (We called it "the magic of swapping three points" :-) ). <br>
Using a 5x5 solver is also not efficient, but we didn't have an efficient way to align the edges, so we did it this way. </p>
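<p>Conceptually, that greedy phase looks something like the sketch below (a simplification with assumed helpers: three_rots maps a sequence name to the sticker permutation it induces; the real search also accounts for move counts):</p>
<pre><code>def greedy_three_rots(state, target, three_rots):
    # Repeatedly apply whichever precomputed 3-piece rotation removes the most
    # mismatches, until no rotation helps any more.
    moves = []
    while True:
        base = sum(s != t for s, t in zip(state, target))
        best = None
        for name, perm in three_rots.items():
            cand = [state[p] for p in perm]       # apply the permutation
            gain = base - sum(s != t for s, t in zip(cand, target))
            if gain > 0 and (best is None or gain > best[0]):
                best = (gain, name, cand)
        if best is None:
            return moves, state
        moves.append(best[1])
        state = best[2]
</code></pre>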
<p>For the cube N0;N1;.., by coloring it and performing the same method, the edges and corners will be aligned. The rest is essentially the same. </p>
<h3>Globes:</h3>
<p>I solved it by reducing m x n to multiple 1 x n. Since using f_k freely will affect other layers, I limited it to f_0 and f_n. <br>
In 1 x n globes, there is also "the magic of swapping 3 points", so we can get a solution with sufficient performance.</p>
<h3>Wreaths:</h3>
<p>I was able to get a good enough solution using a simple heuristic.</p>
<p>Post-processing by <a href="https://www.kaggle.com/tomokiyoshida" target="_blank">@tomokiyoshida</a> and parallel execution with different parameters by <a href="https://www.kaggle.com/sfujiwara" target="_blank">@sfujiwara</a> significantly improved the score.</p> |
Santa 2023 - The Polytope Permutation Puzzle | [85th place] Group of Order 5 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2023 - The Polytope Permutation Puzzle
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Firstly, many thanks to the organisers. This was a wonderfully designed competition, with its 398 separate puzzles ensuring that there was minimal risk of a fully optimal score being quickly reached and therefore retaining jeopardy throughout. Worth waiting for, if you remember those uncertain "is Santa 23 happening" days before Christmas.</p>
<p>I need to thank our utterly amazing team <a href="https://www.kaggle.com/annafabris" target="_blank">@annafabris</a>, <a href="https://www.kaggle.com/victorfernandezalbor" target="_blank">@victorfernandezalbor</a>, <a href="https://www.kaggle.com/shitovvladimir" target="_blank">@shitovvladimir</a> and <a href="https://www.kaggle.com/alexandervc" target="_blank">@alexandervc</a>, who performed awesomely and worked tirelessly throughout. The standard in this competition was really high, and we were all amazed by the scores at the top of the leaderboard, with the two leaders jostling for position lap-by-lap, and regularly overtaking one another before losing the place again. The scores went significantly lower than where I would have anticipated finding the optimum – and 53770 seems like a score from a different planet.</p>
<p>Our team brought together complementary skills in mathematics, coding and science, and were always committed to trying a variety of approaches. Alexander and Victor worked very hard on the ML aspects, but since Alexander has discussed that <a href="https://www.kaggle.com/competitions/santa-2023/discussion/472594" target="_blank">in detail</a> I will refrain from duplication. The ML approach gave us a number of good solves of globe_2/6 and globe_3/4 puzzles, though unfortunately we weren’t able to get it working well enough to crack the larger puzzles.</p>
<p>Vladimir produced sterling work on some of the more challenging cubes, particularly the N0, N1 … variety, and also applied his group theory expertise to build us a souped-up version of his public optimization notebook. Anna displayed amazing skills in improving existing solutions by various means, becoming our hill climbing wrangler. Anna’s expertise here became very important, as cutting down the length of the (reversed) scrambling pathways in the sample solution was the best option left to us for most of the bigger puzzles.</p>
<p>I concentrated mostly on wreaths, developing a protocol of building a multi-part trajectory from a number of A-star runs. For this, I used the “number of wildcards” parameter as a “reaction coordinate”, with each successive A-star run along the pathway aiming to “solve” the puzzle up to a decreasing number of (fake) wildcards. This was generally a useful approach for puzzles in approximately the 40-200 size range. I got a globe_1/8 proof of concept before running out of time. I also spent a lot of effort ensuring that all our best solutions, plus the top public ones, made it into one ensemble, and also maintaining the ongoing scorecard.</p>
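<p>Schematically, that staged protocol can be written as follows (solve_astar and the tolerance schedule are placeholders for illustration; the real runs were tuned per puzzle):</p>
<pre><code>def staged_solve(state, target, solve_astar, tolerances=(32, 16, 8, 4, 2, 0)):
    # Chain A-star runs, each solving "up to k mismatches" for a decreasing k,
    # and concatenate the partial move sequences into one trajectory.
    path = []
    for k in tolerances:
        moves, state = solve_astar(state, target, allowed_mismatches=k)
        path += moves
    return path
</code></pre>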
<p>Although we finished down in 86th place, our submission scoring 683920 had 15 single-puzzle wins against 1st place, 36 versus 2nd place, 12 compared to 4th place and 17 over 17th place, as per <a href="https://www.kaggle.com/solverworld" target="_blank">@solverworld</a>’s <a href="https://www.kaggle.com/code/solverworld/compare-your-submission-to-the-best" target="_blank">notebook</a>. This includes one puzzle, cube_3/3/3 #104, where we beat all of them.</p>
<p>As a computational chemist, I see a useful analogy with statistical thermodynamics. The underlying permutations are like microstates with the observable “which colour is where” states behaving like macrostates; there are typically many permutations corresponding to the same observed state. This is a limitation of those approaches that find sets of moves corresponding to identical permutations; identifying these is of some use, but doesn’t address the degeneracy arising from the many-states-per-permutation property of most puzzles.</p>
<p>Finally, if there were a leaderboard for most submissions, we’d be in third place with a mighty 246 of the things. Just as well that overfitting isn’t a thing in this competition. For anyone interested, our best submission file and scorecard are in <a href="https://www.kaggle.com/datasets/jbomitchell/santa-2023-public-files" target="_blank">this dataset</a>, both indicated by the number 683920, our score.</p> |
Santa Gift Matching Challenge | 2nd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa Gift Matching Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This is my solution.
<a href="https://github.com/wata-orz/santa17">https://github.com/wata-orz/santa17</a></p>
<p>I implemented a branch-and-bound method with a min-cost flow lower bound.
See the following pdf for more detail.
<a href="https://github.com/wata-orz/santa17/blob/master/paper.pdf">https://github.com/wata-orz/santa17/blob/master/paper.pdf</a></p> |
Santa Gift Matching Challenge | 3rd Place Solution and Proof | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa Gift Matching Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Sorry, this one
<a href="https://github.com/KazukiOnodera/Santa2017/blob/master/harada/notebook/brute_34267_1227.ipynb">https://github.com/KazukiOnodera/Santa2017/blob/master/harada/notebook/brute_34267_1227.ipynb</a></p>
<p>Thank you for all kagglers and administrators.
My teammate @hoxosh could solve and prove this problem.</p>
<ul>
<li><p>Git
<a href="https://github.com/KazukiOnodera/Santa2017">https://github.com/KazukiOnodera/Santa2017</a></p></li>
<li><p>Proof
<a href="https://github.com/KazukiOnodera/Santa2017/blob/master/solution/proof.pdf">https://github.com/KazukiOnodera/Santa2017/blob/master/solution/proof.pdf</a></p></li>
</ul> |
Santa Gift Matching Challenge | Round 1 - 2nd Place Submission | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa Gift Matching Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://github.com/mgalati13/kaggle/tree/master/Santa2017">https://github.com/mgalati13/kaggle/tree/master/Santa2017</a></p>
<p>Kaggle 2017 Santa Challenge: <a href="https://www.kaggle.com/c/santa-gift-matching">https://www.kaggle.com/c/santa-gift-matching</a></p>
<p>Round 1 (the contest was relaunched because it was "too easy"). We were the 2nd team to find optimal and therefore placed 2nd (Team = "Matt and Rob")</p>
<p>We found optimal using SAS/OR's (MILP - mixed integer linear programming solver). See the code in sas/. The model is built using PROC OPTMODEL - a math programming modeling language in SAS. It took about 1 hour to build and solve.</p>
<p>We also found optimal using COIN/CBC (an open source mixed integer linear programming solver). See the code in coin/. The model is built in using COIN's C++ APIs. The solver used at the time was Cbc 2.6.2. It took about 40 hours to build and solve. You can download COIN here: <a href="https://www.coin-or.org/">https://www.coin-or.org/</a></p>
<p>Any questions send to: Matthew Galati ([email protected]) Rob Pratt ([email protected])</p> |
Santa Gift Matching Challenge | What is the secret of getting 0.9363015? | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa Gift Matching Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Sounds like exactly what we did, except that we also included edges with negative CH + GH. We thought this could be an improvement, since there might be cases where one child is unhappy while other children are much happier. But this might be the crucial factor that kept us from solving the entire problem.</p>
<p>How about twins and triplets getting different gifts? How did you fix that?</p>
<p>EDIT:
I think I understand the difference between your approach and ours. <br>
"For each gift type add an edge to the sink with capacity 1000 and weight 0" is a crucial step. This consolidates all the unhappy assignments of each gift into 1,000 arcs, so we don't need to add negative edges at the children level (9*10^8). That is smart!!</p>
<p>Thanks!</p>
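<p>To make that construction concrete, here is a toy-sized sketch of the graph with OR-Tools' classic SimpleMinCostFlow (node numbering, toy data and sizes are illustrative assumptions; the real graph has ~10^6 children, 1,000 gift types and capacity-1000 sink arcs, with child-gift arcs only for wished-for pairs):</p>
<pre><code>from ortools.graph import pywrapgraph

N_CHILDREN, N_GIFTS, GIFT_CAP = 6, 3, 2        # toy sizes
SOURCE, SINK = 0, 1 + N_CHILDREN + N_GIFTS
happiness = [[1, 0, 3], [2, 1, 0], [0, 3, 1],
             [1, 1, 1], [3, 0, 2], [0, 2, 3]]  # toy child-x-gift happiness

mcf = pywrapgraph.SimpleMinCostFlow()
for c in range(N_CHILDREN):
    mcf.AddArcWithCapacityAndUnitCost(SOURCE, 1 + c, 1, 0)
    for g in range(N_GIFTS):
        # negative cost turns "maximize happiness" into a min-cost problem
        mcf.AddArcWithCapacityAndUnitCost(1 + c, 1 + N_CHILDREN + g, 1, -happiness[c][g])
for g in range(N_GIFTS):
    # the consolidation step: one capacity-limited, zero-cost arc per gift type
    mcf.AddArcWithCapacityAndUnitCost(1 + N_CHILDREN + g, SINK, GIFT_CAP, 0)
mcf.SetNodeSupply(SOURCE, N_CHILDREN)
mcf.SetNodeSupply(SINK, -N_CHILDREN)

if mcf.Solve() == mcf.OPTIMAL:
    for a in range(mcf.NumArcs()):
        if mcf.Flow(a) > 0 and mcf.Tail(a) != SOURCE and mcf.Head(a) != SINK:
            print("child", mcf.Tail(a) - 1, "-> gift", mcf.Head(a) - 1 - N_CHILDREN)
</code></pre>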
<p>Congratulations to the winners. I am very impressed that there are more than 15 teams who got almost the optimal solution. What is the secret for getting near 0.9363015? <br>
Is it a special min-cost max-flow solver which can solve with the constraint?
Or is it a fast MIP solver that can fit into the memory? Or something else?</p>
<p>We have been trying this with the min-cost max-flow solver from Google OR-Tools. The issue is that if you include all the unhappy edges, it doesn't fit into memory (~16 GB). So we came up with a method which optimizes only ~100 gift types in each run. </p>
<p>The other issue is that max-flow might assign different gifts to twins and triplets even though their happiness toward different gifts is averaged. Therefore, we implemented a fixer after each MCMF run. The strategy is to make the inconsistent twins or triplets settle on one gift and let the rest of the children run another MCMF with those gifts removed, repeating this until all the twins and triplets get the same gift. This approach works when there are fewer than ~100 gift types in each MCMF run; otherwise it gets worse or the fixing never terminates.</p>
<p>We also tried MIP from OR-Tools. But with the memory we had, we could only put in ~40 gift types each time. And it stopped improving after 0.9356.</p>
<p>So What is the secret for getting near 0.9363015? Is it a completely different approach from what we had?</p>
<p>Cool! I managed to obtain the solution for the entire relaxed problem using this approach! It only uses 9GB of memory and finishes in 10 mins!</p> |
Santa Gift Matching Challenge | Winner solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa Gift Matching Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I'll go straight into my solution to keep this post as concise as possible. <br>
For simplicity, some things might be inaccurate or don't reflect what I actually did. <br>
code: <a href="https://github.com/ckomaki/kaggle-santa-2017-winner-solution">github</a> <br>
<br>
<br></p>
<h1>Term</h1>
<p>- Solution: A strategy or a sequence of algorithms to solve the ORIGINAL santa problem. <br>
- Assignment: A set of "which present will be given to which child". What we submit to kaggle is an assignment. I also occasionally compute assignments for the relaxation problem. <br>
<br>
<br></p>
<h1>TL;DR</h1>
<p>- I maximized (1,000,000 * ANCH + ANSH). <br>
- My hunch tells, an assignment maximizing (1,000,000 * ANCH + ANSH) also maximizes ANCH ^ 3 + ANSH ^ 3. <br>
- There are many almost-optimal assignments close to the optimal assignment for the relaxation problem. However, the actual optimal ones are located at a bit more distant places, though still close. <br>
- I used depth first beam search. <br>
-- This can compute a strictly optimal assignment. <br>
-- Main node selection strategy is human eyeballing and hunch. <br>
-- I used a min-cost flow algorithm to solve a relaxation problem. (time: 1 ~ 3 hours, code: <a href="https://github.com/ckomaki/kaggle-santa-2017-winner-solution/blob/master/src/rust/data_structure/min_cost_flow_dup_binary_heap.rs">min_cost_flow_dup_binary_heap.rs</a>) <br>
-- I developed another min-cost flow algorithm to be used in beam search. <br>
--- It solves a min-cost flow problem by repeatedly improving an initial flow. (time: 5 ~ inf seconds, code: <a href="https://github.com/ckomaki/kaggle-santa-2017-winner-solution/blob/master/src/rust/loop_solver.rs">loop_solver</a>, <a href="https://github.com/ckomaki/kaggle-santa-2017-winner-solution/blob/master/src/rust/loop_canceler.rs">loop_canceler</a>) <br>
<br>
<br>
<br></p>
<h1>Observations</h1>
<p><strong>1. I kind of feel that optimal assignments for (1,000,000 * ANCH + ANSH) are also optimal for ANCH ^ 3 + ANSH ^ 3.</strong> <br>
child_wishlist.csv is 100 times larger than gift_goodkids.csv.
I guess even a completely random assignment scores 100 times larger ANCH compared to ANSH.
We are maximizing cubics of them and ((ANCH + delta) ^ 3) - (ANCH ^ 3) ~= 3 * (ANCH ^ 2) * delta.
Therefore, increasing ANCH achieves 100 ^ 2 times better score gain than ANSH.
Actually, an optimal assignment has around 1,000 times higher ANCH. So, I optimized for (1,000 ^ 2) * ANCH + ANSH.
<strong>I'll talk about maximizing (1,000,000 * ANCH + ANSH) below</strong>
<br>
<br>
<strong>2. A relaxation problem which ignores twin/triple constraints can be formulated as a min-cost flow problem.</strong> <br>
You can check this <a href="https://www.kaggle.com/c/santa-gift-matching/discussion/45857">post</a> (Thank you, mtnn)
I'll call this problem "the relaxation problem" below.
<br>
<br>
<strong>2.1. Most of the twins/triplets in an optimal assignment for the relaxation problem follow the twin/triplets constraints.</strong> <br>
In the relaxation problem, costs of twins/triplets were averaged, and have the same values.
So, a pair of twins is likely to receive the same present (same for triplets).
As a result, for each present, except for some edge cases, there are at most one twin and one triplet which don't follow the twin/triplet constraints.
<br>
<br>
<strong>3. If we know how many singles, twins, and triples receive each present, the original santa problem can be formulated as a min-cost flow problem.</strong><br>
An example of this problem is like,<br>
- we already know present 1 will be given to 990 singles, 4 twins, 6 triplets, ...., present 1,000 will be given to 986 singles, 2 twins, 12 triplets.<br>
- These numbers are valid (present 1 will never be given to 2 * n + 1 twin nor 3 * n + 2 triplets).<br>
Then, we can compute an optimal assignment by re-formulating this problem into a min-cost flow problem.<br>
<br>
I'll skip the detail of this formulation, but the point is that we can ignore the original twin/triplet constraints.
An optimal assignment for this problem automatically follow the original twin/triplet constraints because of what I said above (section 2.1).
The graph will have 3000 present nodes. Each present will have three nodes: one each for singles, twins, and triplets.
<br>
<br>
<strong>3.1. Here is an inefficient naive solution using the above formulation.</strong><br></p>
<pre>def compute_optimal_assignment_naive():
    scores = []
    for assignment_num_present_1_to_single in range(0, 1001):
        for assignment_num_present_1_to_twin in range(0, 1001):
            for assignment_num_present_1_to_triplet in range(0, 1001):
                .
                .
                for assignment_num_present_1000_to_single in range(0, 1001):
                    for assignment_num_present_1000_to_twin in range(0, 1001):
                        for assignment_num_present_1000_to_triplet in range(0, 1001):
                            if are_these_assignments_valid(assignment_num_present_1_to_single, ....):
                                scores.append(compute_optimal_assignment(assignment_num_present_1_to_single, ...))
    return np.max(scores)
</pre>
<p><br>
<strong>4. A fusion problem of the above problem and the relaxation problem can be formulated as a min-cost flow problem.</strong><br>
Here is an example of this problem.
We know present 343 will be given to 990 singles, 4 twins, and 6 triplets,
We also know present 532 will be given to 983 singles, 8 twins, and 9 triplets,
And, we don't know about the remaining.
We are allowed to ignore twin and triplet constraints for these 998 no-constraint presents.
<br>
This problem can be formulated as a min-cost flow problem.
I'll skip the detail of this, but we can simply combine the formulations for the above problem and the relaxation problem.
<br>
<br>
<strong>4.1. Here is a depth first beam search solution using the above formulation.</strong><br></p>
<pre>def compute_optimal_assignment_depth_first_beam_search():
    # Create beam search candidate queues.
    fusion_state_queues = [PriorityQueue() for _ in range(1000 + 1)]
    # Add initial state
    initial_fusion_state = create_1000_no_constraint_presents()
    initial_fusion_score = solve_min_cost_flow(initial_fusion_state)
    fusion_state_queues[0].push((initial_fusion_score, initial_fusion_state))
    final_scores = []
    while are_there_remaining_states(fusion_state_queues):
        for present_id in range(1000):
            score, state = fusion_state_queues[present_id].pop()
            if final_scores and score < np.max(final_scores):
                continue
            for assignment_num_to_single in range(1001):
                for assignment_num_to_twin in range(1001):
                    for assignment_num_to_triplet in range(1001):
                        next_state = state.clone()
                        next_state.add_constraint(present_id, assignment_num_to_single, ...)
                        next_score = solve_min_cost_flow(next_state)
                        if next_state.is_valid_state():
                            fusion_state_queues[present_id + 1].push((next_score, next_state))
                            if present_id + 1 == 1000:  # all presents constrained: candidate final score
                                final_scores.append(next_score)
    return np.max(final_scores)
</pre>
<p><a href="https://github.com/ckomaki/kaggle-santa-2017-winner-solution/blob/master/src/rust/beam_search.rs#L264">My actual code</a><br>
<br>
<br>
<br>
</p><h1>My solution</h1><br>
<br>
A. Compute an optimal assignment for the relaxation problem.<br>
B. Run depth first beam search for a limited space close to the assignment of the relaxation problem.<br>
B.. There is a fast min-cost flow algorithm only for this specific problem.<br>
C. Re-configure present priority for node selection based on my eyeballing impression during the previous beam search.<br>
D. Repeat B and C with broader search spaces.<br>
<br>
<br>
<strong>A. Compute an optimal assignment for the relaxation problem.</strong><br>
I implemented a min-cost flow algorithm. This is my very first program in rust.
This is an algorithm for a graph with non-negative cost using a concept named "potential". I'll skip its detail.
In each step, the algorithm detects multiple min-cost paths on the residual network and saturates their flows at once. <br>
Code: <a href="https://github.com/ckomaki/kaggle-santa-2017-winner-solution/blob/master/src/rust/data_structure/min_cost_flow_dup_binary_heap.rs">min_cost_flow_dup_binary_heap.rs</a>
<br>
<br>
<strong>B. Run depth first beam search for a limited space close to the assignment of the relaxation problem.</strong><br>
As I said "Most of twins and triplets in an optimal assignment for the relaxation problem follow twin, triplets constraints".
So, the search space is around the optimal assignment and it's really small.
<br>
<br>
<strong>B.. There is a fast min-cost flow algorithm only for this specific problem.</strong><br>
An optimal min-cost flow must not have a negative cost cycle in its residual graph,
so we can solve a min-cost flow problem by repeatedly detecting a negative cycle and cancelling it, starting from some initial valid flow.
In depth first beam search, we add one constraint at a time. This means we know the optimal assignment before adding the constraint.
We can create a nearly-optimal valid assignment from this previous optimal assignment by some heuristic.
This nearly-optimal assignment can usually be transformed into an optimal one by cancelling a few negative cycles, so it doesn't take much time.
We can use the Bellman-Ford algorithm for negative cycle cancelling.
It takes O(|V||E|) = O(1,000,000 * (100 * 1,000,000)) to cancel one cycle.
However, since our graph is almost a bipartite graph, we can pre-compute present-present distances beforehand and detect a negative cycle in a present-only graph.
This reduces the time complexity to O(|present| * (|present| * |present|)) = O(1,000 * (1,000 * 1,000)).
Furthermore, in our problem, almost all of the negative cycles are short.
So, we can make this faster. I'll skip its detail. <br>
Code: <a href="https://github.com/ckomaki/kaggle-santa-2017-winner-solution/blob/master/src/rust/loop_solver.rs">loop_solver</a>, <a href="https://github.com/ckomaki/kaggle-santa-2017-winner-solution/blob/master/src/rust/loop_canceler.rs">loop_canceler</a><br>
<br>
<br>
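<p>For reference, the negative-cycle detection on the condensed present-to-present graph can be sketched with a plain Bellman-Ford pass (the cost matrix and its construction are assumed; this is only the textbook detection step, not the tuned short-cycle version described above):</p>
<pre><code>def find_negative_cycle(cost):
    # cost[u][v]: cheapest way to shift one assignment from present u to present v.
    n = len(cost)
    dist = [0.0] * n            # zero start acts like a virtual source to every node
    parent = [-1] * n
    last = -1
    for _ in range(n):
        last = -1
        for u in range(n):
            for v in range(n):
                if u != v and dist[u] + cost[u][v] < dist[v]:
                    dist[v] = dist[u] + cost[u][v]
                    parent[v] = u
                    last = v
        if last == -1:
            return None         # no relaxation in a full pass: no negative cycle
    for _ in range(n):          # step back n times to land inside the cycle
        last = parent[last]
    cycle, v = [last], parent[last]
    while v != last:
        cycle.append(v)
        v = parent[v]
    return cycle[::-1]
</code></pre>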
<strong>C. Re-configure present priority for node selection based on my eyeballing impression during the previous beam search.</strong><br>
There are only several presents which affect the score during beam search.
If we know which presents affect the score, the beam search can be finished within a depth of about 10 instead of the full 1,000 presents.
So, I ran beam search with a really limited search space first, listed the presents which affected the score, and then gave them priority. <br>
Code: <a href="https://github.com/ckomaki/kaggle-santa-2017-winner-solution/blob/master/src/rust/beam_search.rs">beam_search.rs</a><br>
<br>
<br>
<strong>D. Repeat B and C with broader search spaces.</strong><br>
Yeah, we can repeatedly increase the search space.
When we increase this search space to cover everything, the best score is proven to be optimal.
<br>
<br>
<br>
<br><p></p>
<p>2 ~ 3 hours to find an optimal assignment for the relaxation problem.
5 ~ 30 minutes to compute optimal assignment in beam search with human manipulation.
Probably several days to compute all candidates and prove what we get above is optimal.</p>
<p>Yeah, I feel you're right (but I'm not so sure). We can probably get the optimal score by first maximizing ANCH without considering ANSH, then maximizing ANSH while maintaining the maximized ANCH. I think most of the 0.9363015XX scorers succeeded in maximizing ANCH and were struggling to maximize ANSH.</p>
<p>And, yes, I also think the key to getting an optimal assignment was to exploit the smallness of the number of gifts. As long as a solution is approaching from that point, any reasonable one would have a potential to achieve the optimal score. In fact, top 3 competitors used different solutions.</p> |
Santa's Stolen Sleigh | Ninth place solution (12404781212) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa's Stolen Sleigh
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>My main optimisation routine was based around a relatively simple algorithm.</p>
<p><strong>The core components were:</strong></p>
<ul>
<li>A routine that would place a single gift into the lowest cost position in a group of trips (including if desired an empty trip).</li>
<li>A distance metric between gifts that was a weighted mix of haversine distance, closest distance to the same longitude, and distance between longitudes at the equator (see the sketch after this list).</li>
<li>A closest_n_trips function that would find a select group of trips in a solution based on the above distance metric (a trip is considered closer if <em>any</em> of its contained gifts is closer to the starting gift).</li>
<li>A sampling function that would randomly select gifts in one or more trips and remove them to an "unassigned gifts" pool.</li>
</ul>
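<p>The blended metric might look roughly like this (the decomposition into the three components is my reading of the description above, so treat it as an assumption; the default weights are the ones quoted for the initial solver below):</p>
<pre><code>from math import radians, sin, cos, asin, sqrt

R = 6371.0  # Earth radius in km

def haversine(lat1, lon1, lat2, lon2):
    p1, p2 = radians(lat1), radians(lat2)
    dphi, dlam = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlam / 2) ** 2
    return 2 * R * asin(sqrt(a))

def blended_distance(g1, g2, w_hav=0.025, w_same_lon=0.85, w_equator=0.15):
    (lat1, lon1), (lat2, lon2) = g1, g2
    hav = haversine(lat1, lon1, lat2, lon2)        # plain great-circle distance
    same_lon = haversine(lat1, lon1, lat2, lon1)   # gap if g2 sat on g1's longitude
    equator = haversine(0.0, lon1, 0.0, lon2)      # longitude gap measured at the equator
    return w_hav * hav + w_same_lon * same_lon + w_equator * equator
</code></pre>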
<p><strong>The initial solver was just a random search:</strong></p>
<ul>
<li>For each gift in turn (from a shuffled list):
<ul><li>Find the closest 25 trips (including trip that chosen gift was from). Distance metric weights were 0.025 haversine, 0.85 closest same longitude point, 0.15 equator-equivalent same longitude point.</li>
<li>Repeat 500 times for those trips and their contents:
<ul><li>Randomly sample from 1 to 3 gifts from each trip</li>
<li>Shuffle the array of sampled gifts</li>
<li>Re-insert each gift in order, in the optimal position considering the whole group of trips (plus an initially empty trip)</li>
<li>Keep the new arrangement if the score is better, otherwise discard.</li></ul></li></ul></li>
<li>Once every 100 starting gifts:
<ul><li>Check all trips in the solution, and split apart into two trips (at a single breakpoint), if that would reduce the overall cost. This seemed to help by preventing trips greedily growing into inefficient combinations.</li></ul></li>
</ul>
<p>Running this routine from a starting point of one trip per gift (scoring 29,121,011,015) got a result of 12,422,511,600 in about 4 hours (single thread, measured on a laptop - 2.9 GHz Intel Core i7 - no big compute stack for me!). I got to this solution by manually tuning parameters such as the distance weightings, number of trips to consider, sample size, etc.</p>
<p><strong>Further improvements were more complex, and progressed more slowly:</strong></p>
<p>I was aware that the local random searches were likely to miss some important combinations and potentially useful changes. So I constructed a few different searches:</p>
<ul>
<li><p>A cross-splice search that checked whether two trips could exchange start and end sections to reduce score. I would have liked to generalise this to more splice points, but it was too time consuming to add more degrees of freedom in the search.</p></li>
<li><p>A "heavy trip breaker" which would take a chosen gift (selected by weight times how much extra "non-necessary" distance led up to it) and construct a new greedy trip from it - stealing gifts from other trips. This was re-optimised using the random search and re-apply algorithm described above; if the ending score reduced across all the selected trips, then the result was kept. Although in practice this only ever found 3 or 4 new trips, I had noticed that the impact of creating those new trips could be very high.</p></li>
<li><p>A gift search that would take each gift in turn and place it most optimally across the whole solution.</p></li>
<li><p>A trip destroyer which would remove a trip and try to place all its contents optimally over the rest of the solution. This was kept only if it improved the overall score.</p></li>
<li><p>A simulated annealing variant of the initial greedy search, typically working on 35 trips with the same distance metric - I would run 1 million iterations starting from a temperature of 5000 down to 50. Then repeat whole process for another sub-group of 35 trips. Each group of trips would take around 30 minutes to complete.</p></li>
</ul>
<p>Of the above, all were useful and contributed to the end score. But by far the most useful in terms of score reduction was the simulated annealing. I suspect it would not have done so well without the wider searches done by the greedy searches though.</p>
<p>Getting the score from 12.422 billion down to 12.412 billion was relatively quick, probably under 2 days' effort. Beyond that point, progress seemed to halve - or worse - each day. I doubt my current code could ever get below 12.404 billion.</p>
<p><strong>Lessons for next time:</strong></p>
<p>What I think I should have done next was look into some different moves for continued simulated annealing - either within the local trips, or looking at the whole solution. I think the initial success of the random search and problems with my first attempts at simulated annealing both contributed to me losing time looking at other less productive things first.</p> |
Santa's Stolen Sleigh | Other Solutions | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa's Stolen Sleigh
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I would like to know other solutions you guys tried.
The solutions I tried are below.</p>
<hr>
<p><strong><em>1. Put and swap</em></strong></p>
<pre><code>Select 2 gifts that do not belong to the same trip.
If the score decreases by putting or swapping the gifts, execute it.
</code></pre>
<p><strong><em>2. Sort gifts within trip by 5!</em></strong></p>
<pre><code>Select 5 gifts from head of trip.
Try all sort patterns (5!).
And slide to the next window ('1 2 3 4 5'→'2 3 4 5 6').
</code></pre>
<p><strong><em>3. Put and swap 2</em></strong></p>
<pre><code>This algorithm is a multi-gift version of the 1st algorithm.
</code></pre>
<p><strong><em>4. Swap gifts by 6!</em></strong></p>
<pre><code>Select 6 gifts that do not belong to the same trip.
Try to swap all patterns(6!).
</code></pre>
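<p>For illustration, the first move above ("Put and swap") can be sketched as a simple greedy step (only the swap half is shown; trip_cost and the trip lists are assumed helpers, and weight-limit checks are omitted):</p>
<pre><code>import random

def swap_step(trips, trip_cost):
    t1, t2 = random.sample(range(len(trips)), 2)
    i, j = random.randrange(len(trips[t1])), random.randrange(len(trips[t2]))
    before = trip_cost(trips[t1]) + trip_cost(trips[t2])
    trips[t1][i], trips[t2][j] = trips[t2][j], trips[t1][i]   # try the swap
    after = trip_cost(trips[t1]) + trip_cost(trips[t2])
    if after >= before:                                       # keep only improvements
        trips[t1][i], trips[t2][j] = trips[t2][j], trips[t1][i]
        return 0.0
    return before - after
</code></pre>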
<p>I used only Greedy algorithms, and won 39th (12425651858.70720).</p>
<p>[quote=woshialex;104216]</p>
<p>if you have used Simulated annealing instead of Greedy algorithms (a very simple change could do it), you can easily get below 1.24000e13 with your current code I think, they are good moves and our code has these moves too.</p>
<p>[/quote]</p>
<p>Ok, I'll give it a shot. Also I'll look into your code on github.</p> |
Santa's Stolen Sleigh | Rudolph prize entry | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa's Stolen Sleigh
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><ol>
<li><p>Hard (deep) copies of both trips are made by
Trip A(*this);
Trip B(other);
and all operations are then performed on A and B. If move is to be accepted, updating is simply done by copying again
*this = A;
other = B;</p></li>
<li><p>Trip::performSWAP(index, G) replaces a gift at position "index" with gift G and makes all the updates.
Thus, what has to be done is
this.performSWAP(ii, other[pos]);
other.performSWAP(pos, gifts[ii]);
Since gift at position ii will be "overwritten" we just make a copy of this gift before.</p></li>
</ol>
<p>Our algorithm description (team "woshialex & weezy") is enclosed.
Indeed, it's a "classic" Simulated Annealing algorithm.</p>
<p>Source code can be found at <a href="https://github.com/woshialex/SantaStolenSleigh">https://github.com/woshialex/SantaStolenSleigh</a></p> |
Santa's Stolen Sleigh | Winning entry | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa's Stolen Sleigh
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>1478 trips. I added the final submission in the attached files.</p>
<p>Also, visualization for 1238458xxxx state.</p>
<p><a href="https://s3-ap-northeast-1.amazonaws.com/colun-share-sandbox/20151213-santa2015-otwe2j5g/vis1238458-B.html">https://s3-ap-northeast-1.amazonaws.com/colun-share-sandbox/20151213-santa2015-otwe2j5g/vis1238458-B.html</a></p>
<p>Visualization for final submission.</p>
<p><a href="https://s3-ap-northeast-1.amazonaws.com/colun-share-sandbox/20160109-santa2015/vis-final.html">https://s3-ap-northeast-1.amazonaws.com/colun-share-sandbox/20160109-santa2015/vis-final.html</a></p>
<p>From the trend of SA results with different numbers of iterations, we expect the optimal answer to be around 12380000000.</p>
<p>Hi! I'm sharing some of the materials we used in the contest.</p>
<p>In the first phase, we used SA with 20 threads on a 24-core machine, running for ~5 days. We ran it on 3 machines and the best score was 12384669985.</p>
<p>After that, BS combined with SA was used, again on 3 machines, this time running 21 hours. The best output was 12384529598. Some complicated merging of these outputs was our winning entry, 12384507107.</p>
<p>Below is a tracking spreadsheet of our progression after we formed a team.</p>
<p><a href="https://docs.google.com/spreadsheets/d/1bO6zLWm3JMOa8rvBhxpV688bUndeJUo595Uynv1RQY8/edit?usp=sharing">https://docs.google.com/spreadsheets/d/1bO6zLWm3JMOa8rvBhxpV688bUndeJUo595Uynv1RQY8/edit?usp=sharing</a></p>
<p>Code for the first phase is attached. Download the files into the same directory. sort_by_dist is just a list of nearest presents, encoded as text. prv.csv can be any output file, possibly your submission.</p>
<p>g++ santa_mul.cc -o santa_mul -pthread --std=c++11 -Ofast -march=native</p>
<p>(./santa_mul output_file output_file_with_more_information progression_output_file number_of_iterations starting_temperature number_of_threads)</p>
<p>time ./santa_mul o3.out d3.out p3.out 500000000000 0.33 20</p> |
Santa's Workshop Tour 2019 | How to win Santa's Workshop Tour | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa's Workshop Tour 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First, I would like to thank <a href="/inversion">@inversion</a> for preparing this nice competition. It was really fun.
Second, sorry that it took so long until I wrote this post.
Third, yes most of you were right. I also used mixed integer programs (MIPs).</p>
<p>But now let's start!</p>
<p>My way to an optimal solution started with recognizing that the “nonlinear” objective function can be linearized by enumerating all possible combinations of people who may visit on one day and on the day after.
After that, I had a short look into the data and believed that not many families will be assigned to a non preferred day. Nevertheless, I didn't want to remove this possibility completely. That's what led me to the following</p>
<h1>Mixed Integer Linear Programming Relaxation</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F478286%2Fe46f34c1bb4058e65a02dc2a2404d14c%2Fmip_relaxation.png?generation=1579800775368803&alt=media" alt=""></p>
<ul>
<li><code>x_{f,d^f_p}</code> is the binary variable which is <code>1</code> iff family <code>f</code> is assigned to its preference <code>p</code> for <code>p=1,...,10</code> and <code>x_{f,d^f_{11}}</code> is <code>1</code> iff the family is not assigned to one of its prefered days, where <code>d^f_{11}</code> is set to “day” <code>101</code>.</li>
<li><code>y_{d,i,j}</code> is the binary variable which is <code>1</code> iff day <code>d</code> has <code>i</code> and day <code>d+1</code> has <code>j</code> people assigned. Note that we also introduced variables <code>y_{100,i,j}</code> for <code>i</code>≠<code>j</code>, for ease of presentation. In this setting we can fix every variable <code>y_{100,i,j}</code> to <code>0</code> for <code>i</code>≠<code>j</code>. Clearly, I didn't add these variables in my implementation.</li>
<li><code>z_d</code> is the “continuous” variable representing how many people are assigned to day <code>d</code>.</li>
<li>Term <code>(1)</code> is the objective function which we want to minimize, where <code>pc(p)</code> represents the preference and <code>ac(i,j)</code> the accounting cost.</li>
<li>Equation <code>(2)</code> ensures that each family is either assigned to one of its prefered days or to “day” <code>101</code> representing that the family is not assigned to one of its preferences.</li>
<li>Equation <code>(3)</code> ensures that day <code>d</code> is assigned to a number of people visiting on this day and to a number of people visiting on day <code>d+1</code>.</li>
<li>Equation <code>(4)</code> is in some way “flow conservation” ensuring that the number of people of consecutive days coincide.</li>
<li>Equation <code>(5)</code> couples the consecutive day variables with the day quantity variables.</li>
<li>Inequality <code>(6)</code> ensures that the number of people assigned to a day is at least the number of people assigned to that day which they prefer, where <code>n_f</code> is the number of family members of family <code>f</code>.</li>
<li>Equation <code>(7)</code> ensures that the number of people assigned to all days equals the number of family members. </li>
</ul>
<p><strong>Note that in general a solution of this MIP doesn't have to be feasible for Santa's problem. Furthermore, I believe that making a solution of this MIP feasible can be as challenging as solving Santa's problem from scratch, if the data is bad.</strong></p>
<p>Nevertheless, the data did not look that bad and it turned out that I never had to “repair” a solution.</p>
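<p>To make the linearization concrete, here is a compact gurobipy sketch of just the accounting part of the model above (constraints (3)-(5), the last-day fixing, and the y-part of objective (1)); the horizon is a toy 10 days, the ac() formula is written out from the competition's accounting definition as I recall it, and the preference variables and constraints are left out:</p>
<pre><code>import gurobipy as gp
from gurobipy import GRB

N_DAYS = 10                      # toy horizon; the real problem has 100 days
LOADS = range(125, 301)          # feasible number of people per day

def ac(i, j):
    # accounting penalty for i people on a day followed by j people the next day
    return max(0.0, (i - 125) / 400.0 * i ** (0.5 + abs(i - j) / 50.0))

m = gp.Model("accounting_linearization")
y = m.addVars(range(1, N_DAYS + 1), LOADS, LOADS, vtype=GRB.BINARY, name="y")
z = m.addVars(range(1, N_DAYS + 1), lb=125, ub=300, name="z")

# (3): each day picks exactly one (people today, people tomorrow) pair
m.addConstrs((y.sum(d, "*", "*") == 1 for d in range(1, N_DAYS + 1)), name="pick")
# (4): consecutive pairs must agree on the shared day's load
m.addConstrs((gp.quicksum(y[d, i, j] for i in LOADS)
              == gp.quicksum(y[d + 1, j, k] for k in LOADS)
              for d in range(1, N_DAYS) for j in LOADS), name="chain")
# (5): couple the pair variables to the day-load variables
m.addConstrs((z[d] == gp.quicksum(i * y[d, i, j] for i in LOADS for j in LOADS)
              for d in range(1, N_DAYS + 1)), name="load")
# the last day has no successor: force i == j there (cf. the note on the y variables)
m.addConstrs((y[N_DAYS, i, j] == 0 for i in LOADS for j in LOADS if i != j), name="last")

# accounting part of objective (1); the preference costs and constraints (2), (6), (7)
# would be added on top of this in the full model
m.setObjective(gp.quicksum(ac(i, j) * y[d, i, j]
                           for d in range(1, N_DAYS + 1) for i in LOADS for j in LOADS),
               GRB.MINIMIZE)
m.optimize()
</code></pre>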
<p>After roughly two hours Gurobi found a high quality solution and naturally I immediately submitted it to the leaderboard. Due to a mistake in my implementation this resulted in a solution which was scored with a value of ><code>34145044298</code>, the overall worst score which was shown on the leaderboard in the whole duration of this competition.
After fixing the bug I got a solution with a value ≤<code>74589</code>. After three hours Gurobi produced a solution with a value of ≤<code>70913</code> which did not further improve within 24 hours.
Nevertheless, I did not used this solutions, since I worked in parallel on a reduction on the number of variables.</p>
<h1>Lower Bounds, Upper Bounds, and Size Reduction</h1>
<p>Since the number of variables is huge, I was interested in lower bounds on the preference costs (<code>LB_pref</code>) and upper bounds on the optimal value for the whole problem (<code>UB_opt</code>). The reason for this is that, given <code>LB_pref</code> and <code>UB_opt</code>, we can bound the accounting costs from above by <code>UB_opt - LB_pref</code>. This led me to the following MIP formulation, only optimizing the preference costs (8). This program was solved to optimality in less than a minute with <code>LB_pref</code>≥<code>43622</code>.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F478286%2F13a83bd6a1317ec6bf49c1e162073fc4%2Fmip_preference.png?generation=1579808394195922&alt=media" alt=""></p>
<p><strong>So, I had a lower bound on the preference cost: <code>43622</code></strong></p>
<p>For the upper bound we are lucky, since kaggle provides public leaderboards. At this time <a href="/wataorz">@wataorz</a> was in top position with a solution score of ≤<code>70888</code>. Thus, the accounting costs are bounded from above by <code>27266</code>.
To further improve the lower bound on the preference costs I removed all variables <code>y_{d,i,j}</code> with a larger accounting penalty than <code>UB_opt - LB_pref</code>≤<code>27266</code>, added a constraint bounding the accounting costs (9), and solved the following program.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F478286%2F1852f21aee7d8da52418294cec548d27%2Fimproved_lower_bound.png?generation=1579808583934573&alt=media" alt=""></p>
<p><strong>This program runs 10 minutes and gives a lower bound on the preference costs of <code>LB_pref</code>≥<code>54412</code></strong></p>
<p>Note that if we have an improved lower bound on the preference costs or an improved upper bound on the optimal costs, we can rerun this program to possibly get an improved lower bound on the preference costs. In particular, I could have rerun the program directly since our lower bound had improved, but I decided not to.</p>
<p>After that, I removed all variables <code>y_{d,i,j}</code> with penalty strictly larger than <code>UB_opt - LB_pref</code> from the first MIP and solved it.</p>
<p><strong>This produces a solution with value ≤<code>70134</code> in roughly 70 minutes.</strong></p>
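<p>To give an idea of how much this cut removes, here is a small sketch (using the accounting penalty formula from the competition and the bounds quoted above, but not my original code) that builds the per-day accounting penalty for every pair of consecutive occupancies and counts the surviving <code>y_{d,i,j}</code> candidates:</p>
<pre><code>import numpy as np

occ = np.arange(125, 301)                          # 176 feasible occupancy values
today, tomorrow = np.meshgrid(occ, occ, indexing="ij")
penalty = (today - 125) / 400.0 * today ** (0.5 + np.abs(today - tomorrow) / 50.0)

UB_opt, LB_pref = 70888, 54412                     # the bounds quoted in this write-up
keep = penalty <= UB_opt - LB_pref
print(f"{keep.sum()} of {penalty.size} (i, j) pairs per day survive the cut")
</code></pre>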
<p>Since I had now a good quality solution, I decided to try an improvement step.</p>
<h1>MIP Large Neighbourhood Search</h1>
<p>The last “tool” I used was a MIP representing a “large” neighbourhood search. Given a feasible solution, it restricts the number of people for each day to within a given threshold <code>TR</code> of the number given by the initial solution (10). The day load of a day <code>d</code> in the start solution is represented by <code>l_d</code>. Again, I removed all variables <code>y_{d,i,j}</code> with penalty strictly larger than <code>UB_opt - LB_pref</code> and all variables not within the threshold.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F478286%2F8856f909c45e84298849738e9d59ed23%2Flns.png?generation=1579809169263321&alt=media" alt=""></p>
<p>I am not sure, but I think I ran this program with threshold values <code>20</code>≤<code>TR</code>≤<code>120</code>, which led me to an optimal solution. For one large <code>TR</code> I solved the program to optimality, so I knew that in this huge neighbourhood there is no better solution. I did not have much hope of solving the program to optimality with larger values. Thus, I decided to try to “prove” optimality and ran the first MIP with all the bounds I had and my best solution. It took about a day, but then Gurobi proved optimality. For this run I changed the parameters of Gurobi to aggressively work on the bound.</p>
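<p>In code, restriction (10) is just two linear constraints per day. A minimal sketch, reusing the names from the PuLP sketch above and assuming a dict <code>day_load</code> with the incumbent day loads <code>l_d</code>; <code>TR = 40</code> is just one value from the range mentioned:</p>
<pre><code>TR = 40                                            # neighbourhood radius
for d in range(1, 101):
    occ_d = pulp.lpSum(n_people[f] * x[f, r]
                       for f in range(len(fam)) for r in range(10) if choices[f, r] == d)
    prob += occ_d >= day_load[d] - TR              # stay within TR of the incumbent day load
    prob += occ_d <= day_load[d] + TR
</code></pre>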
<p>Note that my work was not as straightforward as I presented it here. I did many things in parallel and ran the above MIPs with new start solutions and improved bounds.</p>
<p>I hope you have fun with this post.</p>
<p><strong>Please let me know, if you see any mistakes or have questions!</strong></p>
<p>I think (6) and (7) are crucial for that formulation! For sure we can remove some of them.
Nevertheless, it is “normal” to get bounds which exceed the optimal bound within an LNS! We cut off solutions, so we cannot hope for bounds on the optimal solution! </p>
<p>What makes you think that?</p>
<p>The <code>z_d</code> variables are continuous variables in my formulation; nevertheless, they are implicitly integral.</p>
Santa's Workshop Tour 2019 | John does California Odyssey (with code) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa's Workshop Tour 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Our full model is exactly the same as the one described by <a href="/frankfisk">@frankfisk</a> : <a href="https://www.kaggle.com/c/santa-2019-revenge-of-the-accountants/discussion/126380">https://www.kaggle.com/c/santa-2019-revenge-of-the-accountants/discussion/126380</a> I updated the post.</p>
<p>I added the code with symmetry breaking in the github repo. It is via additional constraints. They say that the rows of x variables for all families having the same number of members are ordered.</p>
<p>Yes, I had great inspiration for the title, not sure how ;)</p>
<p>Impact was to speed up finding new solutions. We did not measure precisely.</p>
<p>I was very lucky to team with Alain and Stéphane. Without them I would not have entered this competition as I was already engaged in another one. And without them I would not have found the optimum that fast, and therefore would not have gotten a gold medal for sure. Contributions from team members were all significant. We came to this via different points of view, which was very fruitful. </p>
<p>We started from this notebook <a href="https://www.kaggle.com/vipito/santa-ip/">https://www.kaggle.com/vipito/santa-ip/</a></p>
<p>It has several interesting components:
- A LP model for family to days assignment. <br>
- Max constraints on difference between successive days occupancy
- A local search to improve solutions</p>
<p>We then explored a number of variations and improvements. When one of us was finding a solution it was shared with the other ones so that they can start from it in their next run. We iterated over many models and runs. Typically we would not let something run for more than a day. Indeed, the better the starting point the better the end point! And also, when using MIP models, the better the starting point the smaller the model as many variables can be set to 0 upfront.</p>
<p>The things we tried include: data/solution analysis, local search, LNS, linearization, approximation, simplification, symmetry breaking. Let's look at each of those, in no particular order.</p>
<p><strong>Local Search</strong></p>
<p>We started from the stochastic search of the public kernel, but then moved to a search similar to what is used in max-flow algorithms: find a chain of family reassignments that keeps occupancy mostly unchanged and improves cost. For instance, comparing two solutions found early in our endeavor, we saw that they differ in only a few places:</p>
<pre><code>Family: 261 83 - 67
Family: 779 67 - 7
Family: 798 35 - 45
Family: 2926 1 - 35
Family: 3215 25 - 83
Family: 4716 45 - 1
Same: 4994
</code></pre>
<p>If we look carefully, we see that moves can be chained:</p>
<pre><code>35-45-1-35
25-83-67-7
</code></pre>
<p>We have one 3 cycle and one 3 path that capture all changes.</p>
<p>We coded a systematic search for chains up to a given length. This was way more effective than a brute force on possible family swaps.</p>
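<p>The bookkeeping behind that chain search is simple. A toy sketch (not our production code) that diffs two assignments into day-to-day moves and follows them, reproducing the <code>35-45-1-35</code> cycle from the example above:</p>
<pre><code>from collections import defaultdict

def moves(sol_a, sol_b):
    """sol_a, sol_b: dicts family_id -> assigned day; returns (day_from, day_to) -> families."""
    out = defaultdict(list)
    for f, day_a in sol_a.items():
        if sol_b[f] != day_a:
            out[(day_a, sol_b[f])].append(f)
    return out

def chain(mv, start_day, max_len=6):
    """Follow day_from -> day_to edges to expose a path or cycle starting at start_day."""
    succ = {a: b for (a, b) in mv}          # one outgoing edge per day in this toy example
    path, day = [start_day], start_day
    while day in succ and len(path) <= max_len:
        day = succ[day]
        path.append(day)
        if day == start_day:
            break
    return path

sol_a = {261: 83, 779: 67, 798: 35, 2926: 1, 3215: 25, 4716: 45}   # the example above
sol_b = {261: 67, 779: 7, 798: 45, 2926: 35, 3215: 83, 4716: 1}
print(chain(moves(sol_a, sol_b), 35))       # -> [35, 45, 1, 35]
</code></pre>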
<p><strong>data/solution analysis</strong></p>
<p>After our first few solutions we found that the distribution of choice ranks was highly skewed. Most families had one of their first 4 choices. A first consequence was to limit model complexity by only considering choices up to 4, or 6, depending on the run. Unless I am mistaken, all families got one of their top 6 choices in our optimal solution. We relaxed this at the end when we proved optimality. </p>
<p>Another example of data analysis was to look at <code>gap(d)</code>, which is the absolute difference in occupancy for a day d. Here is a plot of gap(d) as a function of the occupancy of the day for a solution of cost 69158.xxx</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F75976%2F6081dcbb4d5e819d0978db28c2bd5a32%2Fimage.png?generation=1579180045541292&alt=media" alt=""></p>
<p>We see there is a simplex convex hull, which can suggest additional conditional constraints. For instance:
if (number(d) >= 126), then (number(d)-number(d-1)) <= a-b*number(d) </p>
<p>where <code>number(d)</code> is the occupancy on day <code>d</code>, and <code>a</code> and <code>b</code> are two parameters we set for each run.</p>
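<p>Such a conditional constraint can be given to a MIP solver either as an indicator constraint (CPLEX and Gurobi accept those directly) or through a big-M linearization. A sketch of the big-M version, in PuLP notation just for readability (we used CPLEX); <code>prob</code> and the affine occupancy expressions <code>number[d]</code> are assumed to exist, and the values of <code>a</code>, <code>b</code>, <code>M</code> are purely illustrative:</p>
<pre><code>a, b, M = 180, 0.5, 1000                    # illustrative run parameters, not the ones we used
for d in range(2, 101):
    is_high = pulp.LpVariable(f"high_{d}", cat="Binary")
    prob += number[d] - 125 <= 175 * is_high                          # occupancy >= 126 forces is_high = 1
    prob += number[d] - number[d - 1] - (a - b * number[d]) <= M * (1 - is_high)
</code></pre>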
<p><strong>Cost approximation</strong></p>
<p>The accounting cost function is non-convex, which makes it tricky to optimize. Here is a log plot of it, capped by a high value (100,000 I think).</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F75976%2F021ea4250ad15952825847a03721168c%2Fcost%20(1).png?generation=1579180264664895&alt=media" alt=""></p>
<p>Even if non-convex, it has some clear properties: it increases more than linearly with the gap. This led to the idea of minimizing the sum of the squared gaps. Unfortunately this was not very effective.</p>
<p>Other approximations were based on conditional constraints of the form:</p>
<pre><code>if number(d) &gt;= a, then cost(d) &gt;= piecewise(gap(d))
</code></pre>
<p>i.e. approximating the cost by piecewise functions that minor the actual cost.</p>
<p>When we approached the end of our odyssey we switched to an exact representation of the cost via the now well-known 3M-variable model, first shared by <a href="/hengck23">@hengck23</a>. We actually tried this model the first day we entered the competition, but solving it from scratch did not seem feasible at the time.</p>
<p><strong>Large Neighborhood Search</strong></p>
<p>Local search is powerful, but it does not exploit the flow structure present in the model and it does not allow for massive changes in family assignments. We used another form of search that could lead to large changes in family assignment. The idea was to start from a solution, keep its occupancy per day, then constrain occupancy to be close to that initial occupancy, and solve the problem as a MIP. This is very effective for exploring a large neighborhood of the initial solution, hence its name. It led us to find the deep local optimum at 68910.94. But this could not lead us to an optimal solution. The reason for that is clear when we look at occupancy per day for our optimal solution and for that deep local optimum:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F75976%2Fd6f10bd83c5ca4ac0e3f5dcf86169954%2Fimage%20(10).png?generation=1579181603445237&alt=media" alt=""></p>
<p>We see that the optimal solution has one extra dip to 125 compared to the other solution. There is no way our local search or our large neighborhood search would have found it.</p>
<p><strong>Exact model</strong></p>
<p>After a few days we switched to an exact model (see description below) and used various subsets of it. Subsets can mean limiting family choices to the top K (K = 4 or 6 in practice), or limiting the max value of the gap. These limits were implemented by setting variables to 0 before solving the problem. The model was always initialized with a solution (MIP start). Same for the variables used to represent the cost: those with a large coefficient were set to 0.</p>
<p>Then all the tricks above were used to get new solutions quickly from known solutions. Another trick was to fix some days' occupancy to 125. At one point, starting from a solution of cost 68914.2801, limiting to the best 6 choices, and fixing 4 days' occupancy to 125, we found an optimal solution overnight.</p>
<p><strong>Optimality Proof</strong></p>
<p>Finding a solution of optimal cost is not the same as proving there is no better solution. We had a slight hope that none of the teams in front of us on the LB found the actual optimum. Our run showed they did find the optimum. The model is very similar to the 3M model shared publicly.</p>
<p>A binary variable x for each pair (family,day)
A binary variable y for each pair (day,occupancy of the day)
A binary variable z for each triple (day, occupancy of the day, occupancy of next day)</p>
<p>The constraints are straightforward except one that was disclosed by <a href="/hengck23">@hengck23</a>: summing the variables z along one of the last two axes equals one of the y variables. </p>
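<p>In solver notation that coupling looks roughly as follows (a sketch only; the indexing conventions are assumptions, with <code>y[d][i]</code> meaning “day <code>d</code> has occupancy index <code>i</code>” and <code>z[d][i][j]</code> meaning “day <code>d</code> has occupancy index <code>i</code> and day <code>d+1</code> has occupancy index <code>j</code>”; the full model is in the repository linked below):</p>
<pre><code>OCC = range(176)                         # occupancy - 125, i.e. values 125..300
for d in range(1, 100):                  # days 1..99 have a "next day"
    for i in OCC:
        prob += pulp.lpSum(z[d][i][j] for j in OCC) == y[d][i]       # sum over next-day axis
    for j in OCC:
        prob += pulp.lpSum(z[d][i][j] for i in OCC) == y[d + 1][j]   # sum over current-day axis
</code></pre>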
<p>Code for a cplex implementation of the full model is available at <a href="https://github.com/jfpuget/Kaggle_Santa_2019">https://github.com/jfpuget/Kaggle_Santa_2019</a>. We ran this on a machine with 20 cores at 2.6GHz . It uses 16 GB. It proves optimality in less than 3 hours, when mip start is our optimal solution.</p>
<p>Before that run we tried to add symmetry-breaking constraints with the hope of speeding up the proof. Indeed, once families start not getting their preferred choices, a lot of family assignments yield the same cost. Symmetry breaking was effective in a way, as it halved the number of nodes for the proof, but the running time was a bit larger.</p>
<p>Edit: Our full model is exactly the same as the one described by <a href="/frankfisk">@frankfisk</a> : <a href="https://www.kaggle.com/c/santa-2019-revenge-of-the-accountants/discussion/126380">https://www.kaggle.com/c/santa-2019-revenge-of-the-accountants/discussion/126380</a></p>
<p>Thanks!</p>
<p><a href="/sheriytm">@sheriytm</a> this was worth a separate topic.</p>
<blockquote>
<p>I only got <71.000 without Gurobi</p>
</blockquote>
<p>We found and prove optimality without Gurobi ;)</p>
<blockquote>
<p>Does your team have a fast and effective way of finding the optimal solution here without using a commercial tool?</p>
</blockquote>
<p>No. Unlike machine learning, open source is not competitive with commercial software for combinatorial optimization.</p> |
Santa's Workshop Tour 2019 | MIP formulation on Gurobi, CPLEX and CBC+PuLP | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa's Workshop Tour 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats winners! I was a little bit late...</p>
<p>I've shared a simple MIP formulation on Gurobi, CPLEX, and CBC+PuLP.</p>
<p><a href="https://github.com/tkm2261/kaggle_santa2019_youtube">https://github.com/tkm2261/kaggle_santa2019_youtube</a></p>
<p>This code is a little bit different from the one I used but simple. It might be better for learning how to formulate and solve mixed-integer programming problems.</p>
<p>The difference is that I added several redundant constraints to lift the bound.</p>
<p>I've also uploaded a tutorial video and its slide. Although It's Japanese, maybe you can understand some parts. I'd be happy to answer any questions in English in this thread.</p>
<p><a href="https://youtu.be/0AdaTRU--YE">https://youtu.be/0AdaTRU--YE</a>
<a href="https://www.slideshare.net/tkm2261/kaggle-santa-2019mip">https://www.slideshare.net/tkm2261/kaggle-santa-2019mip</a></p> |
Santa's Workshop Tour 2019 | My optimal solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa's Workshop Tour 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Before I got the license for Gurobi, I just got that band of occupancies for 68910.94 by luck; I don't have any scientific proof. Running the LP, I fixed the occupancy for days [36, 43, 50, 57, 64, 71, 78, 85, 92] at 125 and set minimums for some other days to avoid the gap in the relaxation, like:
<code>
min_occ[21] = 126+30
min_occ[29] = 126+30
min_occ[35] = 126
min_occ[56] = 126
min_occ[68] = 126
</code></p>
<blockquote>
<p>with opensource tools only in reasonable time</p>
</blockquote>
<p>I think not. It took me so much time. </p>
<p>Thanks to Ernee from <a href="https://www.kaggle.com/c/santa-workshop-tour-2019/discussion/122660">https://www.kaggle.com/c/santa-workshop-tour-2019/discussion/122660</a>; among the 100x176x176 variables, I also set a variable to 0 if account_matrix_cost[u, v] >= 500. </p>
<p>My SCIP run to get 68910.94 searches within +-11 around another band of occupancies. The occupancy I provided above is for the 68888.04 score, with day 22 at 125. The 68910.94 score does not have occupancy 125 at day 22.
<code>
MIP solver result: OPTIMAL
Objective value = 68910.94216205172
Problem solved in 26989972.000000 milliseconds
Problem solved in 18706051 iterations
Problem solved in 33180 branch-and-bound nodes
CPU times: user 2d 8h 55min 4s, sys: 41min 48s, total: 2d 9h 36min 52s
Wall time: 7h 29min 5s
</code>
You can provide an initial solution in SCIP; you just need to call model.SetHint(variables, values). Mine was a 68912.83046267704 solution. You can run multiple threads by calling model.SetNumThreads(n_thread). </p>
<p>I meant 67xxx solution in LP model. No doubt you can always get it.</p>
<p>As far as I know, Gurobi does not solve the continuous QP; it will show an error message if we try to do that. The constraints I used above are binary QP. How did you get to optimal without commercial software? Maybe the notebook prize will show how to do that. </p>
<p>I used Gurobi 9.0 with a non-limited evaluation license. Thanks to Gurobi for giving me an evaluation license. SCIP in ortools could only get me to 68910.94.
- I fixed the occupancy of days at this array and ran the MIP model within +/-5 of each day's value.
<code>estimate_occupancies = np.array([300, 287, 300, 300, 286, 262, 250, 250, 271, 296, 300, 300, 279,
264, 256, 273, 294, 282, 259, 228, 196, 164, 125, 300, 300, 295,
278, 263, 257, 253, 279, 276, 252, 219, 188, 156, 125, 283, 271,
248, 216, 184, 159, 125, 300, 282, 257, 226, 194, 161, 125, 286,
264, 236, 201, 168, 137, 125, 266, 241, 207, 166, 125, 125, 125,
253, 225, 190, 147, 125, 125, 125, 227, 207, 175, 129, 125, 125,
125, 235, 220, 189, 147, 125, 125, 125, 256, 234, 202, 161, 125,
125, 125, 234, 214, 181, 136, 125, 125, 125])</code>
With this scope, the MIP model runs in under 10 minutes on my MacBook.
- Where did I get those estimated occupancies? I ran an LP model on 5000x10 preference cost CONTINUOUS variables (from 0 to 1) + 100x176x176 accounting cost BINARY variables, just like the guide from <a href="https://www.kaggle.com/c/santa-workshop-tour-2019/discussion/120764">https://www.kaggle.com/c/santa-workshop-tour-2019/discussion/120764</a>, and added these quad constraints; zs are my 100x176x176 accounting cost variables:
<code>
for i in range(0, N_DAYS, 2): ## only need this for every two days
for u in range(0, 176, 2): # every two occupancies
for v in range(0, 176): <br>
two_vars = zs[i][u, v] + zs[i][u+1, v] <br>
m.addConstr(two_vars*two_vars==two_vars)
</code>
These quad constraints are needed for the LP only; in the MIP model we don't need them. In the LP model, they force the relaxation to use two adjacent occupancies for a day. This way, the LP model is not optimistic about the accounting cost in the relaxation. You don't need to run the LP model to the end; just stop it when a 67xxx solution has been found, about 25 minutes on my MacBook.
- The solution sounds easy and fast, but it took me months to figure out. Congratulations and thank you everyone; I couldn't have gotten this score without referring to the discussions in this great competition. </p>
<p>> do not come out as binary, but rather continuous</p>
<p>Yes, this is as expected because this is LP model-you are using GLOP_LINEAR_PROGRAMMING. </p>
<p>>because I could not use your quadratic constraints</p>
<p>Yes, the LP model is optimistic about the accounting without the quad constraints I mentioned. </p>
<p>> Hence it must be either 0 or 1</p>
<p>Yes, two_vars can only be 0 or 1. Like you said it is already implied by another constraint that sum 176x176 variables of each days must be 1. </p>
<p>> So it seems redundant</p>
<p>It seems so, but it is not. We need it in the LP model to prevent a very high accounting cost. For example, without it the LP model can output zs[iii][0, xxx]=0.5, zs[iii][100, xxx]=0.5. So the cost for day iii becomes accounting_cost_matrix[0.5x0 + 0.5x100, xxx]. Remember the accounting cost function is an exponential function; you had a high accounting cost because:
<code>
accounting_cost_matrix[50, xxx] &gt; 0.5*accounting_cost_matrix[0, xxx] + 0.5*accounting_cost_matrix[100, xxx]
</code>
We don't want that, we want the LP model to output two adjacent values for a day, like
zs[iii][0, xxx]=0.5, zs[iii][1, xxx]=0.5. So
<code>
accounting_cost_matrix[0.5, xxx] == 0.5*accounting_cost_matrix[0, xxx] + 0.5*accounting_cost_matrix[1, xxx]
</code></p>
<p>Ortools does not support quad constraints. Gurobi supports them, but I believe that internally Gurobi converts those quad constraints to linear constraints too. As <a href="/cpmpml">@cpmpml</a> said in <a href="https://www.kaggle.com/c/santa-workshop-tour-2019/discussion/124665#713303">https://www.kaggle.com/c/santa-workshop-tour-2019/discussion/124665#713303</a>, there is a way to do that, but I have not tried it yet:</p>
<blockquote>
<p>Exactly!</p>
<p>For those interested, the linearization of the objective term a * x1 * x2 where x1 and x2 are binary variables and a positive is:</p>
<p>Replacing x1 * x2 in the objective with a new binary variable y and adding a constraint</p>
<p>y >= x1 + x2 - 1</p>
<p>if x1*x2 appears in a constraint, in the general case we can replace it with a new variable y and few constraints:</p>
<p>y >= x1 + x2 - 1</p>
<p>y <= x1</p>
<p>y <= x2</p>
</blockquote>
<p>Please let me know if it works when you try it.</p> |
Santa's Workshop Tour 2019 | Our trick | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa's Workshop Tour 2019
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>1) Use the 67xyz LP solution
2) Core trick in that LP is that <code>sum_i M_d[i][j] == sum_k M_{d+1}[j][k]</code> (which is not needed but pushes LP higher)
3) Our trick for easier cutting and branching is to have <code>sum_i M_d[i][j]</code> as separate variable, so solver can branch on it.</p>
<p>Side note:
We spent too much time on a formulation with 175 variables per day and 175*175 constraints. We had tricks like convex hulls, lazy constraints, ... but that was not that great.</p>
<p>It is just an additional constraint to limit the shenanigans the LP relaxation can do.
It is obviously redundant for the integer-only formulation.</p>
Santander Customer Satisfaction | 13th place... Good or Bad ??? | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Satisfaction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Just obtained my best LB position, 13 out of 5000+, but I still don't know how to feel about it!!!!</p>
<p>My solution is a linear combination of a) mean of xgb models and b)mean lasagne nn models with corrections applied at the end by this function:</p>
<pre><code>settozerofromvars_v4<- function(vector,df){
vector[df$num_var13_corto_0 >= 6]<- min(vector) - 0.00001
vector[df$num_var12 >= 6]<- min(vector) - 0.00001
vector[df$ind_var33_0 >= 1]<- min(vector) - 0.00001
vector[df$num_var13_largo_0 >= 6]<- min(vector) - 0.00001
vector[df$num_aport_var13_hace3 >= 6]<- min(vector) - 0.00001
vector[df$num_meses_var13_largo_ult3 >= 1]<- min(vector) - 0.00001
vector[df$var15 < 23]<- min(vector) - 0.00001
vector[df$var36==0]<- min(vector) - 0.00001
return(vector)
}
</code></pre>
<p>The above ordering is based on counts in the train data set.
If I hadn't applied any correction function at the end I would have finished at 7th position and I would have become a master.</p>
<p>If I had applied an even more conservative function like:</p>
<pre><code>settozerofromvars_v4_4var<- function(vector,df){
vector[df$num_aport_var13_hace3 >= 6]<- min(vector) - 0.00001
vector[df$num_meses_var13_largo_ult3 >= 1]<- min(vector) - 0.00001
vector[df$var15 < 23]<- min(vector) - 0.00001
vector[df$var36==0]<- min(vector) - 0.00001
return(vector)
}
</code></pre>
<p>I would have finished at 3rd position, and in the money.</p>
<p>Should I feel happy or unhappy??? </p>
<p>Thank you all for your replies.</p>
<p>Sorry for my late response, but I didn't feel like doing much since the competition ended.</p>
<p>I was confused at first, but now I am happy about my LB position and unhappy about my mistake.</p>
Santander Customer Satisfaction | 34th place - code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Satisfaction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://github.com/pjpan/Practice/tree/master/Kaggle-SantanderCustomerSatisfaction">https://github.com/pjpan/Practice/tree/master/Kaggle-SantanderCustomerSatisfaction</a></p>
<p>There is a big gap between the private and public leaderboard scores. I tried stacking with RF, GLM, and XGBoost. I think the Python code is the right one for the 34th place.</p>
Santander Customer Satisfaction | 3rd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Satisfaction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Our solution with the documentation is uploaded here:</p>
<p><a href="https://github.com/diefimov/santander_2016">https://github.com/diefimov/santander_2016</a></p>
<p>Despite the many problems we had during this competition, the collaboration in our team was wonderful. I hope we will write about it in the Kaggle blog. Only during the last 6 hours of the competition did we make the correct decision about the final submission, which helped us to stay on the top.</p>
<p>Actually, I did not check the individual score on the LB, but on CV it was a little bit worse than xgboost: on CV xgboost gave 0.841, and rgf gave 0.837. I think on the LB the difference is the same.</p>
Santander Customer Satisfaction | 44th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Satisfaction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi Rolf,</p>
<p>Thank you for your comment.<br>
I used lua with <a href="http://torch.ch/">Torch7</a> for Neural Network and K-Nearest Neighbor.</p>
<p>Hi CJCJ,</p>
<p>This is one of the ensemble techniques.<br>
I create 200 different models and use top 10 models for averaging, see <a href="https://github.com/toshi-k/kaggle-santander-customer-satisfaction/blob/master/source/01_nn_averaging.r">01_nn_averaging.r</a>.</p>
<p>Hi Everyone,</p>
<p>Ending final validation, I share my solution.<br>
My solution is based on Neural Network, Gradient Boosting and K-Nearest Neighbor.<br>
Code: <a href="https://github.com/toshi-k/kaggle-santander-customer-satisfaction">https://github.com/toshi-k/kaggle-santander-customer-satisfaction</a></p>
<p><img src="https://raw.githubusercontent.com/toshi-k/kaggle-santander-customer-satisfaction/master/img/solution.png" alt="conceptual diagram" title=""></p> |
Santander Customer Satisfaction | 7th place - post competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Satisfaction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hello everyone,</p>
<p>I built a model that obtained an AUC = 0.828314 on the private leaderboard.</p>
<p>This model is an ensemble of ensembles of several XGBOOST models.</p>
<p>Best XGBOOST models:</p>
<p>1) TomekLink + ENN + XGB (max_depth = 5, n_rounds = 500, eta = 0.020204812)</p>
<p>2) TomekLink + ENN + XGB (max_depth = 5, n_rounds = 500, eta = 0.0202048125)</p>
<p>3) ENN + XGB (max_depth = 5, n_rounds = 500, eta = 0.020204813)</p>
<p>4) ENN + TomekLink + XGB (max_depth = 4, n_rounds = 560, eta = 0.020204812)</p>
<p>5) undersampling (30) + XGB (max_depth = 5, n_rounds = 300, eta = 0.020204812)</p>
<p>6) TomekLink + ENN + feature elimination + XGB (max_depth = 5, n_rounds = 600, eta = 0.020204812)</p>
<p>7) undersampling (30) + XGB (max_depth = 5, n_rounds = 300, eta = 0.0202048123)</p>
<p>Ensemble: arithmetic mean of the predictions of each XGBOOST model.</p>
<p>E1) (1 + 2 + 3 + 4 + 5 + 6 + 7) / 7</p>
<p>E2) (1 + 3 + 4) / 3</p>
<p>Final ensemble: (E1 + E2) / 2</p>
Santander Customer Satisfaction | Curse of Overfitting (From Place 14th to Place 2484th) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Satisfaction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><pre><code>import pandas as pd
import numpy as np
import xgboost as xgb
from sklearn.cross_validation import StratifiedKFold
from sklearn.cross_validation import cross_val_score
from sklearn.metrics import roc_auc_score
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectFromModel, VarianceThreshold
import xgboost as xgb
from sklearn.cross_validation import KFold
print('Load data...')
train = pd.read_csv("../../data/train.csv")
train_id = train['ID'].values
target = train['TARGET'].values
train = train.drop(['ID','TARGET'],axis=1)
test = pd.read_csv("../../data/test.csv")
test_id = test['ID'].values
test = test.drop(['ID'],axis=1)
train = train.replace(-999999,2)
test = test.replace(-999999,2)
# adding zero counts
train["zeroes"] = (train == 0).astype(int).sum(axis=1)
test["zeroes"] = (test == 0).astype(int).sum(axis=1)
# remove constant columns (std = 0)
remove = []
for col in train.columns:
if train[col].std() == 0:
remove.append(col)
train.drop(remove, axis=1, inplace=True)
test.drop(remove, axis=1, inplace=True)
print(train.shape, test.shape)
# remove duplicated columns
remove = []
cols = train.columns
for i in range(len(cols)-1):
v = train[cols[i]].values
for j in range(i+1,len(cols)):
if np.array_equal(v,train[cols[j]].values):
remove.append(cols[j])
train.drop(remove, axis=1, inplace=True)
test.drop(remove, axis=1, inplace=True)
# Feature selection
#ROUND ONE
Cols = train.columns.values.tolist()
clf = GradientBoostingClassifier(random_state=1729)
selector = clf.fit(train, target)
importances = selector.feature_importances_
fs = SelectFromModel(selector, prefit=True)
train = fs.transform(train)
test = fs.transform(test)
print(train.shape, test.shape)
selectedCols = train.shape[1]
sortedCols = [col for importance, col in sorted(zip(importances, Cols))]
sortedCols = sortedCols[0:selectedCols]
train = pd.DataFrame(train)
test = pd.DataFrame(test)
train.columns = sortedCols
test.columns = sortedCols
for i in range(len(sortedCols)):
for j in range(len(sortedCols)):
colName = sortedCols[i]+"_SUBTRACT_"+sortedCols[j]+"_DIVIDE_"+sortedCols[i]
train[colName] = (train[sortedCols[i]]-train[sortedCols[j]])/train[sortedCols[i]]
test[colName] = (test[sortedCols[i]]-test[sortedCols[j]])/test[sortedCols[i]]
train = train.replace(np.inf, 999999)
train = train.replace(-np.inf, -999999)
train = train.replace(np.nan, -1)
test = test.replace(np.inf, 999999)
test = test.replace(-np.inf, -999999)
test = test.replace(np.nan, -1)
print "Second Round Started"
#ROUND TWO
Cols = train.columns.values.tolist()
clf = GradientBoostingClassifier(random_state=1729)
selector = clf.fit(train, target)
importances = selector.feature_importances_
fs = SelectFromModel(selector, prefit=True)
train = fs.transform(train)
test = fs.transform(test)
print(train.shape, test.shape)
selectedCols = train.shape[1]
sortedCols = [col for importance, col in sorted(zip(importances, Cols))]
sortedCols = sortedCols[0:selectedCols]
print sortedCols
predictedResult = np.zeros(train.shape[0])
kf = KFold(train.shape[0], n_folds=10)
testPred = []
for trainIndex, testIndex in kf:
trainFold, testFold = train[trainIndex], train[testIndex]
trainFoldTarget, testFoldTarget = target[trainIndex], target[testIndex]
xgbc = xgb.XGBClassifier(n_estimators = 560,learning_rate = 0.0202047,max_depth = 5,subsample = 0.6815,colsample_bytree = 0.701)
xgbc.fit(trainFold,trainFoldTarget)
xgbpred =xgbc.predict_proba(testFold)[:,1]
testPred.append(xgbc.predict_proba(test)[:,1])
predictedResult[testIndex] = xgbpred
print roc_auc_score(testFoldTarget, xgbpred)
print roc_auc_score(target, predictedResult)
testPred = np.average(np.array(testPred), axis =0)
pd.DataFrame({"ID": train_id, "TARGET": predictedResult}).to_csv('csv/model5TrainProb.csv',index=False)
pd.DataFrame({"ID": test_id, "TARGET": testPred}).to_csv('csv/model5TestProb.csv',index=False)
</code></pre>
<p>Yes. The number of features would be around (336 - constant features - duplicated features)^2, which is around 45000. I think it would score even better if we could encode the NaNs and infinities generated during the feature generation phase in some different way, because many of those generated features were giving good linear correlation with the target before I replaced NaNs and infinities with a simple outlying value. Also, I tried to keep only one feature between (a-b)/a and (b-a)/b, but it was giving a very bad result.</p>
<p>I am pasting the original code below. Since I was stacking different models, it will generate both test and train CSVs. You can use the test CSV for 14th position. I have attached the final CSV that I got from this model so that you can cross-check it with yours. There may be some differences based on platforms and floating point precision.</p>
<p>My CV Scores were as follows:</p>
<p>0.835565346362
0.838014152227
0.828437305991
0.832669246434
0.844827604474
0.845186121769
0.843317732128
0.866952353959
0.853440530264
0.831788359156</p>
<p>Accumulated CV Score
0.84144568446</p>
<p>Can someone try it after removing the first round of gradient boosting feature selection? I don't have enough memory to try that...</p>
<p>Sorry @EnvyMyFieldsMedal ... In the actual script it was an XGB classifier in place of logistic regression... I was trying different ideas after that, so it got changed...</p>
<p><a href="https://www.kaggle.com/shahnawazakhtar/santander-customer-satisfaction/14th-place-private-lb-script">https://www.kaggle.com/shahnawazakhtar/santander-customer-satisfaction/14th-place-private-lb-script</a></p>
<p>It might even have gotten into the top 10 or top 5 if there had been enough memory to bypass the first round of feature selection and only use feature selection after the feature generation stage</p>
<p>Here is the Script that would have got 0.828063 ...</p>
<p><a href="https://www.kaggle.com/shahnawazakhtar/santander-customer-satisfaction/14th-place-private-lb-script">https://www.kaggle.com/shahnawazakhtar/santander-customer-satisfaction/14th-place-private-lb-script</a></p> |
Santander Customer Satisfaction | My 12th place simple solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Satisfaction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>What kind of undersampling have you applied?</p>
<p>Thanks for your advices, I will try different balances!</p>
<p>Hello, I would like to share one of my models that gets Rank 12, but I selected 2 other different submissions... :(</p>
<p>Like the majority of us, I have learned a hard lesson about overfitting.</p>
<p>The solution has two parts: the first involves making a noise prediction and the second part involves making an ensemble of 24 partitions of the dataset in order to make undersampling without losing information.</p>
<p><strong>1. Noise prediction</strong></p>
<p>This dataset has some repeated instances <strong>but</strong> with both class labels, so those instances are noise. What I did was extract those noisy instances from the complete dataset. After that, I split the resulting complete dataset into 5 partitions and ran xgboost on each one. Then, I took a majority vote of the five resulting predictions for each noisy instance that I had extracted at the beginning. Finally, I inserted those instances back with the real class label.</p>
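<p>For illustration, the noisy instances can be located with a short pandas snippet like the one below (a sketch of the detection step only, not my original code; the 5-partition xgboost voting happens afterwards as described):</p>
<pre><code>import pandas as pd

train = pd.read_csv("train.csv")
feat_cols = [c for c in train.columns if c not in ("ID", "TARGET")]

# rows whose exact feature vector appears with more than one TARGET value are "noise"
n_labels = train.groupby(feat_cols)["TARGET"].transform("nunique")
noisy = train[n_labels > 1]
clean = train[n_labels == 1]
print(len(noisy), "noisy instances extracted,", len(clean), "clean instances kept")
</code></pre>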
<p><strong>2. Undersampling</strong></p>
<p>This dataset is highly unbalanced, so using the new dataset without noise, I did undersampling as follows: first, I split the dataset into the instances with class label 0 and the instances with class label 1. Then, I split the class-0 instances into number_of_class0/number_of_class1 partitions (24 partitions) and joined each partition with the set of class-1 instances made before. Finally, I ran xgboost on each new balanced partition and averaged the predictions of the 24 models.</p>
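<p>A minimal sketch of this partition-and-average scheme (with X, y, X_test as numpy arrays and illustrative hyperparameters, not my tuned ones):</p>
<pre><code>import numpy as np
import xgboost as xgb

def partition_ensemble(X, y, X_test, seed=0):
    rng = np.random.RandomState(seed)
    pos_idx = np.where(y == 1)[0]
    neg_idx = rng.permutation(np.where(y == 0)[0])
    n_parts = len(neg_idx) // len(pos_idx)            # roughly 24 for this dataset
    preds = []
    for chunk in np.array_split(neg_idx, n_parts):
        idx = np.concatenate([pos_idx, chunk])        # one balanced partition
        clf = xgb.XGBClassifier(n_estimators=300, max_depth=5, learning_rate=0.05)
        clf.fit(X[idx], y[idx])
        preds.append(clf.predict_proba(X_test)[:, 1])
    return np.mean(preds, axis=0)                     # average of the ~24 models
</code></pre>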
<p>This simple model gets 0.840336 in the public leader-board and 0.828114 in the private leader-board. If you want, you can check the code on my GitHub account.</p>
<p>Regards,</p>
<p>Antonio.</p>
<p>Well, what I copied is the basic feature selection (remove duplicates, constant features, etc.). I am new to machine learning and R, but with this competition I have learned and enjoyed a lot.</p>
Santander Customer Satisfaction | My solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Satisfaction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Here is an outline of my method which achieved:</p>
<p>Public: 2925th AUC: 0.837722 / Private 1436th AUC: 0.825816</p>
<p>1) I grouped the features by var. For most var numbers there are several features with that number. For each group I fed the features into a simple classifier to find the most important groups.</p>
<p>2) I hand engineered features within each group. Creating binary features; local zero count; simple interaction features etc. </p>
<p>3) I also used PCA and the global zero count feature. </p>
<p>4) I used extra trees to select features - I was super happy when a number of my hand-engineered features survived the cut. This was the first time I had hand engineered so many features. </p>
<p>5) I used a greedy approach to remove and add features manually (ended up removing one feature and adding another feature).</p>
<p>6) I trained 3 xgboost models. I chose a max_depth for each (5, 9 and 12). I figured that this approach would allow my model to be more robust even if it did not improve my score that much. </p>
<p>7) I used a weighted average of the three models (0.6, 0.3, 0.1).</p>
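<p>To make the grouping and blending steps concrete, here is a rough sketch (hypothetical column handling, not my exact code) of grouping the columns by their varNN token, building one simple within-group feature, and applying the 0.6/0.3/0.1 blend:</p>
<pre><code>import re
from collections import defaultdict
import pandas as pd

train = pd.read_csv("train.csv")
groups = defaultdict(list)
for col in train.columns.drop(["ID", "TARGET"]):
    m = re.search(r"var(\d+)", col)
    groups[m.group(1) if m else "other"].append(col)

# example of a simple within-group feature: local zero count
for var, cols in groups.items():
    train[f"zeros_var{var}"] = (train[cols] == 0).sum(axis=1)

# pred_depth5/9/12: test predictions of the three xgboost models (assumed to exist)
final_pred = 0.6 * pred_depth5 + 0.3 * pred_depth9 + 0.1 * pred_depth12
</code></pre>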
<p>I was experimenting with most of the ideas for the first time so I am quite happy with how everything worked out. I look forward to participating in more competitions.</p>
<p>Thanks to both for sharing. There are some really interesting ideas here. I can't wait to get started with another competition, but, for now I have so many things to investigate!</p> |
Santander Customer Transaction Prediction | # 21 Magics | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Transaction Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to everybody who participated in this competition, it was really exciting.
I want to briefly describe my magics.</p>
<p>1) Calculated, for each column, whether the value is duplicated. Then I calculated the number of duplicates for each row and noticed 100k rows in test with all 200 duplicated. This was before the "fake" kernel, so I thought it was the private part of the dataset. This was weird and I paid more attention to counts. I calculated "is the value duplicated" based on TRAIN data only and then extrapolated this to test. This was a boost to .901
2) Then I tried multiplying the raw values by the 0/1 duplicated flag and filling the 0s with the column mean. This was about .907
3) 922+ appeared when I started calculating statistics on the 300k rows. So there were the following parts: raw columns, 0/1 duplicates, value_counts, and MinMaxScaler(value_counts)*raw + MMS(vc)*(1-row_mean).
Actually MinMaxScaler(value_counts)*raw + MMS(vc)*(1-row_mean) was a big boost. It somehow shifts values towards the center and transforms the data. The histogram looks like the following (in attachments).
...this was all followed by TONS of unsuccessful experiments...</p>
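<p>A rough sketch of those count-based parts (assuming DataFrames <code>train</code> and <code>test_real</code> with the fake test rows already removed; the (1-row_mean) term is left out of the sketch because it is described only loosely above):</p>
<pre><code>import pandas as pd
from sklearn.preprocessing import MinMaxScaler

feat_cols = [f"var_{i}" for i in range(200)]
real = pd.concat([train[feat_cols], test_real[feat_cols]])     # the ~300k real rows

for c in feat_cols:
    vc = real[c].value_counts()
    train[c + "_count"] = train[c].map(vc)                          # value_counts feature
    train[c + "_has_dup"] = (train[c + "_count"] > 1).astype(int)   # 0/1 duplicated flag (steps 1-2)
    mms = MinMaxScaler().fit_transform(train[[c + "_count"]])[:, 0]
    train[c + "_mmscnt_x_raw"] = mms * train[c]                     # MinMaxScaler(value_counts) * raw
</code></pre>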
<p>The boost to 923-924 came from blending lgb+ctb and augmenting the train part x2 and x3.
Maybe I'll write my story in more detail and publish a gist, but not now - I need some sleep.</p>
Santander Customer Transaction Prediction | #1 Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Transaction Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I made public our "create_data" script. Url at the end of the post.</p>
<p>Within columns. Using data + true samples from test > data alone</p>
<p>Positive were the most important points to add. It might be feature calculation with more data which makes the difference</p>
<p>To prevent leaks we don't use the target of the sample itself. We look at <strong>other</strong> samples that have the same feature value.</p>
<p>I tweaked the lgbm source code so that once it begins a tree with a feature it can only use features derived from the first one for the next splits (for example, var 81 first split -> only var 81 "has one" and var 81 "not unique" available for the next splits), thinking it would limit noisy interactions, but it didn't improve our model... it was easier to build this kind of architecture with a NN</p>
<p>@CPMP, @Silogram is being very humble but we won as a team :)</p>
<p>I will post the kernel after running it</p>
<p>First, congratulations to every team that participated and fought hard to find this so called "magic", and especially to The Zoo for smoothing the way to .92x and all the top teams that scared us up to the last minute ... !
This solution write-up will contain technical parts as well as, as many persons ask, some details about my journey that led to our solution. Feel free to read what interests you :)</p>
<p><strong>TLDR:</strong>
made 400 features,
LGBM, 600feats, that scores 0.92522 public /0.92332 private using pseudo-label and data augmentation
<strong>Winning ticket</strong> : NN, 600feats, with custom structure that scores .92687 public / 0.92546 private using pseudo-label and data augmentation
blending them (2.1NN, 1LGBM) gave us our .927 public</p>
<h1>Feature engineering:</h1>
<h2>Technical part:</h2>
<p>The "magic" is about count of values, especially the fact that some are unique.
We created 200 (one per raw feature) categorical features, let's call them "has one feat", with 5 categories that corresponds (for train data) to:
- This value appears at least another time in data with target==1 and no 0;
- This value appears at least another time in data with target==0 and no 1;
- This value appears at least two more time in data with target==0 & 1;
- This value is unique in data;
- This value is unique in data + test (only including real test samples);</p>
<p>The other 200 (one per raw feature) features are numerical, let's call them "not unique feat", and correspond to the raw feature replacing values that are unique in data + test with the mean of the feature.</p>
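<p>As a rough illustration only (this is <em>not</em> our create_data script, which is linked in the edits at the end of this post), the simpler "not unique feat" family can be built like this, assuming DataFrames <code>train</code> and <code>real_test</code> with columns var_0..var_199; the 5-category "has one feat" needs the extra target-aware bookkeeping described above:</p>
<pre><code>import pandas as pd

def not_unique_feats(train, real_test):
    out = pd.DataFrame(index=train.index)
    for c in [f"var_{i}" for i in range(200)]:
        all_vals = pd.concat([train[c], real_test[c]])
        counts = train[c].map(all_vals.value_counts())      # occurrences in train + real test
        out[c + "_not_unique"] = train[c].where(counts > 1, train[c].mean())  # unique -> column mean
    return out
</code></pre>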
<h2>My journey to the findings:</h2>
<p>After some EDA where key insight was that number of different values in train and test was not the same, I started with LGBM, because it is fast and powerful and easy to use.
Like many I began to see CV/LB improvements with count encoding of features.
I looked at my LGBM trees (with only 3 leaves that's easy to do) and noticed the trees were using the uniqueness information.
After this insight, I started to build features around uniqueness. Using only training data and the "has one feat", I could reach .910 LB. Adding the other 200 "not unique feat", .914LB.
The next move was to use data + test to spot unique values. It worked really well on CV, giving >.92x results but didn't apply to test as is!
As many people noticed, the count of unique values per feature in data and test is very different! So I knew that there was a subset of samples in test that I couldn't identify yet that would bring >.92x LB. I teamed with Silogram at this moment. The day after he sent me a link to the beautiful and very important kernel of @YaG320 (<strong>rick and morty's fans are the best!</strong>) "List of Fake Samples and Public/Private LB split". I immediately understood that this was the key to spot values that are unique in data + test!
We got LB .921 using LGBM at this time, and these are the features we used at the end.</p>
<h1>Modelisation:</h1>
<h2>Technical part:</h2>
<p>We used standard 10 fold Stratified cross validation with multiple seeds for final blend.
We made an LGBM using the shuffle augmentation (duplicate and shuffle samples with target == 1 sixteen times, and four times for target == 0) and added pseudo labels (the 2700 highest predicted test points as 1 and the 2000 lowest as 0). Our LGBM scores .92522 public, .92332 private.</p>
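<p>For readers who have not seen the shuffle augmentation, here is a minimal sketch in the spirit of the public "augment" kernels (raw features only; with the derived per-var features one would shuffle each var's column group with the same permutation). The 16x/4x factors and the pseudo-label cut-offs are the numbers quoted above; everything else, including the assumed numpy arrays, is illustrative:</p>
<pre><code>import numpy as np

def shuffle_within_class(X_cls, times, rng):
    out = []
    for _ in range(times):
        Xs = X_cls.copy()
        for c in range(Xs.shape[1]):
            rng.shuffle(Xs[:, c])                 # columns are independent, so this stays in-class
        out.append(Xs)
    return np.vstack(out)

def augment(X, y, t1=16, t0=4, seed=0):
    rng = np.random.RandomState(seed)
    X1 = shuffle_within_class(X[y == 1], t1, rng)
    X0 = shuffle_within_class(X[y == 0], t0, rng)
    return np.vstack([X, X1, X0]), np.concatenate([y, np.ones(len(X1)), np.zeros(len(X0))])

# pseudo-labels: most confident test predictions from an earlier model (test_pred, X_test assumed)
order = np.argsort(test_pred)
pseudo_X = X_test[np.concatenate([order[-2700:], order[:2000]])]
pseudo_y = np.concatenate([np.ones(2700), np.zeros(2000)])
</code></pre>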
<p>Our second model was a NN with a particular structure:
The idea, as many did, was to process all the features belonging to the same group (raw / has one / not unique) independently and in the same way (i.e. using the same set of weights). That creates a sort of embedding of this feature value. What differentiated us is the next step: we did a weighted average of those 200 embeddings, which we then fed to a dense layer for the final output. This ensures that every feature is treated in the same way. The weights were generated by another NN. The idea is very similar to what attention networks do. Everything was of course optimized end to end.
We added on-the-fly augmentation (for every batch, shuffle the feature values that belong to target == 1 / target == 0) and it scored .92497 private. Adding pseudo labels (5000 highest and 3000 lowest) increased private to .92546.</p>
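<p>For those asking what the structure looks like in code, here is a very rough PyTorch reading of it (we actually built it with fast.ai's tabular model plus a callback, so this is only an approximation: the 3-value input per group, the softmax weighting and all layer sizes are assumptions):</p>
<pre><code>import torch
import torch.nn as nn

class GroupAttentionNet(nn.Module):
    def __init__(self, n_groups=200, in_per_group=3, emb=16):
        super().__init__()
        # the SAME small MLP embeds every feature group
        self.embed = nn.Sequential(nn.Linear(in_per_group, emb), nn.ReLU(), nn.Linear(emb, emb))
        # a second small net produces one weight per group (attention-like)
        self.weigher = nn.Sequential(nn.Linear(in_per_group, emb), nn.ReLU(), nn.Linear(emb, 1))
        self.head = nn.Linear(emb, 1)

    def forward(self, x):                                  # x: (batch, n_groups, in_per_group)
        e = self.embed(x)                                  # (batch, n_groups, emb)
        w = torch.softmax(self.weigher(x), dim=1)          # (batch, n_groups, 1)
        pooled = (w * e).sum(dim=1)                        # weighted average over the 200 groups
        return self.head(pooled).squeeze(-1)               # one logit per sample

model = GroupAttentionNet()
logits = model(torch.randn(32, 200, 3))                    # toy forward pass
</code></pre>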
<p>Our final submission is a blend of these 2 models with weight 2.1 NN / 1 LGBM.</p>
<h2>My journey to the findings:</h2>
<p>Fast.ai made my NN design very easy! I customized the tabular model for the architecture and implemented on the fly augmentation with a callback quite simply.
I really recommend it to everyone ... For the training of the neural network it also made things easy: I added batch norm and small dropouts almost everywhere, and then the fit-one-cycle method with 15 epochs at 0.01 learning rate (nothing fancy) was enough to achieve those results!</p>
<h1>Final thought:</h1>
<p>At the beginning, I wanted to use this competition to try feature selection algorithms (what a disappointment ... ^^) but rapidly got hooked by the puzzle!
It was a nice experience to team up with such an experienced kaggler as @Silogram, and I'm very happy and proud that we won this!</p>
<p>EDIT1: here is our code for the NN <a href="https://www.kaggle.com/fl2ooo/nn-wo-pseudo-1-fold-seed">https://www.kaggle.com/fl2ooo/nn-wo-pseudo-1-fold-seed</a>
EDIT2: <a href="https://www.kaggle.com/fl2ooo/create-data">here is our script to create data</a> (no refactoring has been done sorry!)</p>
<p>The link to the kernel is now in the solution</p>
<p>Thank you and congratulations to your team too. Your early achievements were truly amazing! I'm not worried at all about your future results :)</p>
<p>There are <a href="https://www.kaggle.com/c/santander-customer-transaction-prediction/discussion/89003#513569">other comments talking about it</a>. What do you mean by statistical data generation? By adding more data in your training, if there is not too much noise in it, you can expect to boost a little bit your model performances (the more data the better). Shuffle augmentation is based on train data while these are test points. Diversity is a good thing :)</p>
<p>Chris made some <strong>very great kernels</strong> showing how to do this. I was using simpler things, like a combination of <code>print(model.model_to_string())</code> to spot which trees use the feature that interests you, and then plotting it using <code>lgb.plot_tree(model, tree_index=..., figsize=(.., ..))</code></p>
<p>We tried to leave it in [0,1], using 'xentropy' in LGBM, but in the end transformed it to 0/1 so that we could recalculate the "has one" feature using more data.</p>
Santander Customer Transaction Prediction | #19 solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Transaction Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Since no interactions among different vars helped, I tried lots of interactions between feature 1 and feature 2 from the same var, including var * count and var / count.</p>
<p>I plotted some var distributions, using only count==1 or count==2 ... and I guessed that low-frequency values have somewhat different properties from the high-frequency ones.
Then, feature 3 (ignore low-frequency values), feature 5 (use a log transform to emphasize low-frequency values) and some other features were created.</p>
<p>Sure, I have edited the topic.
I just noticed I said almost the same thing as him in Data Insight 1.
Unfortunately, I missed that kernel because it uses R (I am not familiar with R).
Thanks for commenting.</p>
<p>Congrats to all winners.
So happy to get my first gold medal.
It was an interesting competition.
The most exciting moment was the first time my LB score went above .901.</p>
<p>Top teams share so fast... like they need no sleep.</p>
<h1>Data Insights</h1>
<p>Insight into the data is the key to getting a good result.
Thanks to the kind and mighty kagglers' sharing, it is not really hard to discover most of them.</p>
<ol>
<li>Features are totally unrelated with each other. (Thanks to <a href="https://www.kaggle.com/cdeotte">@Chris Deotte</a> ‘s <a href="https://www.kaggle.com/cdeotte/modified-naive-bayes-santander-0-899">share </a>, and <a href="https://www.kaggle.com/jiweiliu">@Jiwei Liu</a> 's <a href="https://www.kaggle.com/jiweiliu/lgb-2-leaves-augment">share</a>. and @<a href="https://www.kaggle.com/brandenkmurray">Branden Murray</a> 's <a href="https://www.kaggle.com/brandenkmurray/randomly-shuffled-data-also-works">share</a>)</li>
<li>Some similar property(target distribution) among many features. (same with 1)</li>
<li>Real and fake data in test set. (Thanks to <a href="https://www.kaggle.com/yag320">@YaG320</a> 's <a href="https://www.kaggle.com/yag320/list-of-fake-samples-and-public-private-lb-split">share </a>.)</li>
<li>Fake data can be perfectly generated by shuffling features. (same with 1)</li>
<li>Each feature brings a different part of the information. (Since all the feature selection methods I tried did not work well.)</li>
</ol>
<p>So, I got almost all the ideas from reading the public kernel and discussion.</p>
<h1>What I am thinking when competing</h1>
<p>Some ideas below may be wrong.</p>
<ol>
<li>Independence (totally unrelated) -> any common interaction between two or more features should be avoided
<ol><li>Do feature engineering with one original feature, and repeat 200 times.</li>
<li>Set feature_fraction == 0.005 in lgb.</li>
<li>Use small leaf_num in lgb.</li></ol></li>
<li>Similar property (rolling window target probability)
<ol><li>Group the features by similar "rolling window target probability" , adjust each feature to have similar distribution, and reshape it to have a (n_samples * group_size, 1) sample... bad result.</li>
<li>After reading <a href="https://www.kaggle.com/onodera">@ONODERA</a> 's <a href="https://www.kaggle.com/c/santander-customer-transaction-prediction/discussion/88939">topic</a>, I think the idea is right in some way.</li></ol></li>
<li>Fake list
<ol><li>Use part of test set in training.</li>
<li>Do count encoding with true samples.</li></ol></li>
<li>Shuffling -> shuffling "0"s gets a perfect "0" sample, and vice versa -> the unique values coming from "0" and "1" may be different -> the ratio of "0" and "1" is different -> count encoding. (Maybe something is wrong, but this is how I got the idea.)</li>
<li>Shuffling -> shuffling "0" together with "1" may produce something between "0" and "1", like an 80% "1" sample.</li>
</ol>
<h1>Features</h1>
<p>The final submission comes from a lgb model with 1200 features.</p>
<ol>
<li>origin values (X200)</li>
<li>count encoding (X200)</li>
<li>when count == 1 replace origin feature by np.NaN (X200)</li>
<li>var * count (X200)</li>
<li>StandardNormalized(var) * np.log1p(A * count + B) (X400 for different A and B)</li>
</ol>
<h1>Other</h1>
<ol>
<li>Stacking of lgb models can make CV +0.0005, but only makes LB +0.0001.</li>
<li>Adding the public test set to the train set makes CV +0.001, but only makes LB +0.0002.</li>
</ol> |
Santander Customer Transaction Prediction | #38 solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Transaction Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Oh, after checking with my teammate: we forgot to increase the NN weight in our final model.
Our NN: CV 9224, PB 92071, LB 92185</p>
<p>Congratulations, everyone!
Congratulations to all the winners.
Thanks to all my best teammates @chizhu @puckW <a href="/zhouqingsongct">@zhouqingsongct</a>.
Here is our simple solution,
ordered by time.</p>
<h1>904</h1>
<p>Just frequency-encode all vars using train, then for the test part add 1 to all freq vars.
You can achieve #904 on the LB
with a CatBoost model.</p>
<h1>914</h1>
<p>Just remove the fake test rows,
then use train + test to compute the frequency encoding.</p>
<h1>922</h1>
<p>Just the original features, plus the original features with values replaced by the median when the freq var == 1.</p>
<h1>923 and 924</h1>
<p>Just augmentation and stacking.</p>
<p>What a pity that we forgot to add the weight of our NN to our stacking/blending models.</p>
<p>Finally,
thanks to the organizers
and thanks for the awesome sharing in the discussions.</p>
<p>Thanks to the organizers and thanks to all the competitors for their awesome sharing.
(You can translate the Chinese to English, because my English is poor.)
Thanks anyway.
We will post our code later.
This kernel might be overfitting a little, but it shows our core method for this competition:
<a href="https://www.kaggle.com/chizhu2018/augment-catv2?scriptVersionId=12597845">https://www.kaggle.com/chizhu2018/augment-catv2?scriptVersionId=12597845</a></p>
<p>thanks </p> |
Santander Customer Transaction Prediction | #8 team solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Transaction Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>A short summary of our (and probably others') magic. It's about counts. Counts cannot be captured by LightGBM on the raw data.
With 400 features: [raw, isNotUnique*raw] + an upsampled LightGBM we got 0.924 on public.
"isNotUnique" filters out samples with a unique numeric value (cnt==1, calculated on the 300k rows [train, testReal]).</p>
<p>Then we added a few variations of lgbm/xgb models and linearly blended them - that's our 0.925 public score.
Variations on the data were (just minor improvements):
- added counts (+200f)
- filtered out values which appear 1 or 2 times (+200f)
- leaky features which calc isNotUnique*raw on target==1/target==0 train samples (+400f)</p>
<p>We tried tons of tricks; almost all failed. A few worth mentioning: other features, DAE pretraining, NN, CatBoost, more upsampling, pseudo-labeling, feature selection, target encoding, parameter tweaking.
The weirdest thing is the "shallow"/"independent" property of the dataset; on our 400 features I got 0.921 with naive Bayes. It would be interesting to get more info about Santander's anonymization.
Let's see what the top teams did.</p>
<p>Michael</p> |
Santander Customer Transaction Prediction | 105th place submission :) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Transaction Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Public LB - 0.919 Private - 0.917</p>
<p><a href="https://www.kaggle.com/timon88/santander-catboost-finalscript-public-lb-0-919">https://www.kaggle.com/timon88/santander-catboost-finalscript-public-lb-0-919</a></p>
<p>Congrats everyone!!! We did it)</p>
<p>Thank you!</p>
<p>Thank you! It's all because of your motivating posts and kernels :)</p> |
Santander Customer Transaction Prediction | 10th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Transaction Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>The (200,2) input structure is well explained in <a href="https://www.kaggle.com/c/santander-customer-transaction-prediction/discussion/88929#latest-513419">5th place solution</a>, let me know if you have further questions.</p>
<p>Thanks, Chris, I am a big fan.
I read the lightgbm documentation several times. I guess it is not enough to release the full power of gradient boosting though. I'll hopefully get to know Catboost & XGB in the near future.</p>
<p>Thank kaggle & santander for this interesting competition, my solution is quite straightforward:</p>
<ol>
<li>a single NN model with input shape (200, 2) and in-class shuffle augmentation (similar to what is explained in the <a href="https://www.kaggle.com/c/santander-customer-transaction-prediction/discussion/88929#latest-513419">5th place solution</a>) reaches 0.9239/0.9232 on the public/private LB (a rough sketch follows this list).</li>
<li>The final solution of 0.92502/0.92421 LB is a simple ensemble of a few NN models trained with different ratios of positive/negative in data augmentation.</li>
<li>I believe some extra lightgbm models would significantly help in the ensemble due to low correlations with the NN models. Unfortunately I am not familiar with the methodology and my lightgbm model never broke 0.922.</li>
</ol>
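<p>The exact architecture is not given here, so the following is only a guessed sketch of the (200, 2) input idea from point 1: shared per-variable weights via Conv1D with kernel_size=1. The layer sizes, the optimizer, and the omission of the in-class shuffle augmentation are my own simplifications.</p>
<pre><code>import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def make_input(x_raw, x_cnt):
    """x_raw: (n, 200) scaled values, x_cnt: (n, 200) counts -> (n, 200, 2) tensor."""
    return np.stack([x_raw, x_cnt], axis=-1)

def build_model():
    inp = layers.Input(shape=(200, 2))
    # kernel_size=1 applies the same tiny dense block to every variable independently,
    # which matches the "features are independent" property of this dataset
    x = layers.Conv1D(16, kernel_size=1, activation='relu')(inp)
    x = layers.Conv1D(8, kernel_size=1, activation='relu')(x)
    x = layers.Flatten()(x)
    out = layers.Dense(1, activation='sigmoid')(x)
    model = keras.Model(inp, out)
    model.compile(optimizer='adam', loss='binary_crossentropy')
    return model
</code></pre>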
<p>Thank everyone for sharing on the forum, I have learned/enjoyed a lot reading them.</p> |
Santander Customer Transaction Prediction | 12th solution and magic | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Transaction Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to my teammates @THLUO <a href="/daishu">@daishu</a> <a href="/uuulearn">@uuulearn</a>. We teamed up on the last day of the team merge deadline, and thank you very much. They really carried me :-). </p>
<p>Some key parts of our solution:
0. delete the fake test samples before computing statistics.
1. some features for dealing with locally low-frequency values on the histogram. In summary, it is
count(var) multiplied or divided by some distance, <br>
with distances like: var - var.mean/mode/median/max/min, the cumulative probability density, and 1 - cumulative probability density
2. some target-separated k-fold processing of the features
3. data augmentation (thanks to @jiwei liu) </p>
<p>code about some feature is here
<a href="https://www.kaggle.com/baomengjiao/some-key-features-in-12th-solution-0-9239">https://www.kaggle.com/baomengjiao/some-key-features-in-12th-solution-0-9239</a></p> |
Santander Customer Transaction Prediction | 14th solution and magic feature(double peak) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Transaction Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to the winners. This competition is very interesting.
Here is my code.
<a href="https://github.com/qrfaction/Kaggle_SCTP_gold_14th_solution">https://github.com/qrfaction/Kaggle_SCTP_gold_14th_solution</a>
My summary will be updated soon</p>
<p>Update:
Because my English is not good, I don't know how to explain my features well in English.
By grouping on the value count, I found a double peak in the "count==1" group of many features.
Here is the main code.</p>
<p><code>
train[col + 'c1'] = (train[col + 'value_count'] == 1)
train[col + 'isP1'] = (train[col] &lt; train[col + 'm'])
train[col + 'p1dis'] = (train[col] - train[col+'p1']) * train[col + 'isP1'] * train[col + 'c1'] + (train[col] - train[col+'mean']) * (1-train[col + 'c1'])
train[col + 'p2dis'] = (train[col] - train[col+'p2']) * (1-train[col + 'isP1']) * train[col + 'c1'] + (train[col]-train[col+'mean']) * (1-train[col + 'c1'])
</code></p>
<p>train[col + 'isP1'] is the peak-1 mask,
train[col + 'p1'] is the peak-1 point, and
train[col + 'mean'] is the mean of each count group.</p>
<p>I created three groups of features for the ensemble in this way.
These are the main difference between my solution and the other teams'.
My model gained a lot from these features.
LGBM: 0.92475 public / 0.92367 private, using data augmentation.</p> |
Santander Customer Transaction Prediction | 1st competition -> 1st silver . . . almost :/ | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Transaction Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><em>This is my first competition at Kaggle, without counting the once for <a href="https://www.coursera.org/learn/competitive-data-science?specialization=aml">Kaggle Coursera course</a> and I would like to share my experience . . . i.e. mistake</em></p>
<p>Before the final date, my buddy and I were about 4000th, and then we dropped by more than 2000 places. This happened because I did not want to choose our <a href="https://www.kaggle.com/wlakinsson/santander-testing-pipeline?scriptVersionId=12266016">best submission</a> with <strong>Public=0.900</strong>.</p>
<p>Why did I do that? Because this kernel was a copy of many other LightGBM kernels in the competition, and I thought it could not improve our position since all of them were the same, so I took a risk with another submission which had scored <strong>Public>0.880</strong>.</p>
<p>But why did my LightGBM perform better than the majority? I had an additional 11 features computed in a <a href="https://www.kaggle.com/wlakinsson/santander-data-imbalance-fe?scriptVersionId=12138714">separate kernel</a>. It was "just some" additional statistical parameters and nearest neighbors.</p>
<p>Finally, the <strong>Private=0.90016</strong> score corresponds to just one entry on the LB:
- 346 / 16 / kiyomomo / kiyomomo / 0.90016 / 106</p>
<p>so basically I would have gotten a silver with the right choice.</p>
<p>I feel so stupid because of <a href="https://www.kaggle.com/kazanova">KazAnova</a>'s advice in one Coursera lecture that the 2 best choices should be:
- the best public score
- the best simple model (i.e. no stacking, good validation and Public scores) as it could generalize well</p>
<p>and our LightGBM was both . . .</p>
<p>It was so obvious that I got super-angry at myself :/</p> |
Santander Customer Transaction Prediction | 20th solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Transaction Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Sorry that I forgot to make it public. It should work now. It is based on OOF predictions.</p>
<p>The link is: <a href="https://www.kaggle.com/lucaskg/20th-solution-part-2-auc-weight-optimization">https://www.kaggle.com/lucaskg/20th-solution-part-2-auc-weight-optimization</a></p>
<p>The method combines probabilities of different models with weights <code>w_1</code>, <code>w_2</code>, ..., <code>w_n</code>. These weights are optimized by the Nelder-Mead method in scipy.</p>
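<p>The linked kernel has the exact implementation; a minimal sketch of the idea, with placeholder model names, might look like this:</p>
<pre><code>import numpy as np
from scipy.optimize import minimize
from sklearn.metrics import roc_auc_score

def best_auc_weights(oof_preds, y):
    """oof_preds: (n_samples, n_models) out-of-fold predictions; y: true labels."""
    def neg_auc(w):
        return -roc_auc_score(y, oof_preds @ np.asarray(w))
    init = np.ones(oof_preds.shape[1]) / oof_preds.shape[1]
    return minimize(neg_auc, init, method='Nelder-Mead').x

# e.g. weights = best_auc_weights(np.column_stack([lgb_oof, nn_oof]), y_train)
#      test_pred = weights[0] * lgb_test + weights[1] * nn_test
</code></pre>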
<p>Great thanks to our teammates <a href="https://www.kaggle.com/strideradu">@Strideradu</a>, <a href="https://www.kaggle.com/cczaixian">@KALE</a>, <a href="https://www.kaggle.com/autumnwmc1995">@Miaochao Wang</a>, and <a href="https://www.kaggle.com/lfcpeng17">@Daniel Bi</a>. Also Kaggle is such a great community to learn and share, I really love it!</p>
<h2>Models</h2>
<p>The solution is a blend of:
1. <a href="https://www.kaggle.com/lucaskg/20th-solution-part-1-lgb-no-aug-92153">LightGBM</a> whose CV is 0.9215, private LB 0.92033. We then use <a href="https://www.kaggle.com/jiweiliu/lgb-2-leaves-augment">Jiwei's great augmentation </a> method to boost the score up further.
2. NN, which is essential to blending and our overall success. My teammate <a href="https://www.kaggle.com/strideradu">@Strideradu</a> explains his <a href="https://www.kaggle.com/c/santander-customer-transaction-prediction/discussion/89070#514156">NN framework</a> in more details.
3. <a href="https://www.kaggle.com/c/santander-customer-transaction-prediction/discussion/88922#latest-513657">Here</a> is the detail of how we implemented the augmentation scheme.</p>
<p>Finally, the two models are stacked through <a href="https://www.kaggle.com/lucaskg/20th-solution-part-2-auc-weight-optimization">a weighted AUC optimization</a>. </p>
<h2>Features</h2>
<ol>
<li><p>Value counts: the frequency of unique values reveals a lot of information. For some variables like <code>var_108</code> and <code>var_12</code>, values with high counts tend to have higher probability of positive target.</p></li>
<li><p>Multiplication and division between raw feature and its corresponding value count. E.g., <code>df['var_0']/df['var_0_count']</code> and <code>df['var_0']*df['var_0_count']</code>, which disperse the distribution of different values with the same counts. </p></li>
<li><p>Features grouped by <code>count==1/!1</code>. We created features like rank ratio (<code>x.rank()/x.rank().max()</code>) and zscore (<code>(x-x.mean())/x.std()</code>) separately for values whose count equals 1 and for values whose count is greater than 1 (a small sketch follows this list). It turns out that <code>count==1</code> is very special. For simulated data (generated from some distribution and then rounded to 4 decimals), values with <code>count==1</code> should be very rare towards the mode of the overall distribution of the variable. However, this is not the case for many variables (will upload figures later). The zscore kind of normalizes the distribution so that we can compare the edges of the distributions for <code>count==1/!1</code> together.</p></li>
</ol>
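<p>A small sketch of the grouped rank-ratio and z-score features from point 3; <code>df</code>, <code>cols</code> and the per-column value counts <code>vc</code> are placeholders:</p>
<pre><code>import pandas as pd

# df: train (or train + real test); vc[col]: value counts for that column (placeholders)
for col in cols:
    cnt = df[col].map(vc[col])
    for name, mask in [('c1', cnt == 1), ('cg1', cnt > 1)]:
        x = df.loc[mask, col]
        df.loc[mask, f'{col}_rankratio_{name}'] = x.rank() / x.rank().max()
        df.loc[mask, f'{col}_zscore_{name}'] = (x - x.mean()) / x.std()
</code></pre>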
<p>Thanks!</p> |
Santander Customer Transaction Prediction | 29th place solution with code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Transaction Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>The <code>logit</code> function is <code>log(p/(1-p))</code> by definition (see, e.g., <a href="https://en.wikipedia.org/wiki/Logit">https://en.wikipedia.org/wiki/Logit</a>). But maybe your question is why do we use this function instead of just <code>log</code>? The reason is that it simplifies the calculation without affecting accuracy. As you point out, <code>logit</code> is a monotonically increasing function of <code>p</code>, so transforming all predictions by <code>logit</code> preserves their ordering. For the roc-auc metric, only the order of predictions matters, so we can predict <code>logit(p)</code> instead of <code>p</code> and accuracy will be unchanged. The advantage of this is that predicting <code>logit(p)</code> allows us to cancel some terms in the Bayesian calculation (as shown in the pdf), so it is much simpler than trying to calculate <code>p</code> (or <code>log(p)</code>) directly.</p>
<p>The CV scores of the models are:
lgb1: 0.92443
lgb2: 0.92429
modified logistic regression: 0.92077
comb: 0.92599 (35% each for lgb models, 30% for log reg)
So the 2 lightgbm models are very similar.</p>
<p>Initially I thought that the normalised version should be better, because if the positive values are upsampled, then what should matter is the observed frequency of the data relative to the expected frequency in that part of the distribution (i.e. in dense parts of the distribution we'd naturally expect lots of repeats since values are truncated at 4 decimal places, so having frequency > 1 isn't that relevant there). </p>
<p>But I think there might be more going on than just upsampling of positive rows. If positive rows have been upsampled, then we'd expect that the higher the frequency, the stronger the signal. But this isn't the case. I made a kernel to illustrate this here: <a href="https://www.kaggle.com/belindatrotta/target-vs-column-value-for-different-frequencies">https://www.kaggle.com/belindatrotta/target-vs-column-value-for-different-frequencies</a>
If you look at, e.g. var_5, the pattern for freq = 3 is actually less extreme than for freq = 2, which isn't what I expected. I'm not sure what's going on here.</p>
<p>I tested out the naive bayes approach in your kernel (multiplying probabilities, or equivalently, taking mean of logs), and it performs only a little worse than summing the logits (cv 0.92569 vs 0.92599). Your kernel was very helpful for me; I read it properly only very late in the competition and it made me realise I'd been thinking about the problem completely wrongly. Until then I was just thinking of the columns as independent predictors of the target, like an ensemble of different models, but in fact they are related because the scrambling is done only within each target class.</p>
<p>Haha yeah, I like to be organised :)</p>
<p>Code and a more detailed write-up are on GitHub: <a href="https://github.com/btrotta/kaggle-santander-2019">https://github.com/btrotta/kaggle-santander-2019</a></p>
<p>My solution is quite short, under 200 lines of code. Training and prediction takes around 3.5 hours. </p>
<p>I relied heavily on some excellent kernels and discussions by others, which I reference below. </p>
<h3>The magic</h3>
<p>There are 3 key observations, all related.</p>
<p><em>Feature independence:</em>
It was discovered in some excellent early kernels that the features seem to be independent given the target (see the kernel and discussion by Branden Murray (1), (2) and the kernel by Chris Deotte (4)). So it appears likely that each feature column has been independently "scrambled" within each target class (0 or 1). This means there's no point looking for feature interactions among the columns.</p>
<p><em>Synthetic test data:</em>
The kernel by YaG320 (3) shows that some of the test data contains no unique feature values in the whole row, suggesting that it has been synthesised from the rest of the data.</p>
<p><em>Repeated values:</em>
Repeated values seem to show a stronger signal, as shown in Figure 1.1. I wasn't able to figure out why this is, nor whether it's a real effect or an artifact of the way the data is prepared. But this is the key to the "magic" feature engineering. For each column, we add a feature counting the number of appearances of the value in the train and test sets. However, it's crucial to exclude the synthetic test data from this count. Again, I haven't really figured out why this works. I tried calculating the counts separately for train and test, and excluding the synthetic test data from the test counts, but although this gave good cross-validation results in training, it didn't seem to work on the test set. So that suggests that the combined train and test count leaks some information from test to train.</p>
<h3>Modelling approach</h3>
<p>We model each column separately then combine the predictions. Each column prediction is a blend of 2 lightgbm models and one modified logistic regression.</p>
<p><em>Lightgbm 1:</em>
The model has 2 features, the column value and the frequency count. Maximum depth is 2. We choose the number of iterations separately for each column by cross-validation (since some columns have much more signal than others).</p>
<p><em>Lightgbm 2:</em>
The same as the first model, but instead of the raw frequency count, we used the normalised count, defined as the count divided by the average count of neighbouring points.</p>
<p><em>Modified logistic regression:</em>
We separate the data into 2 sets, frequency 1 and frequency > 1 and fit a separate model for each. In each model, we sort the training data by the value of the column, and divide it into (overlapping) blocks of 20000 samples with a step size of 500. For each block we fit a logistic regression with the single variable c, the column's value, and evaluate it at the midpoint of the column's values. This gives us a prediction of the target for a subset of evenly spaced values of c. Then we linearly interpolate between these. The idea is similar to the Savitzky-Golay filter, but fitting a logistic function instead of a polynomial. Surprisingly, this gives results almost as good as the lightgbm models.</p>
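<p>A rough sketch of this windowed logistic fit (block size and step are taken from the text; the midpoint definition and the guard for single-class blocks are my assumptions):</p>
<pre><code>import numpy as np
from sklearn.linear_model import LogisticRegression

def windowed_logit_curve(x, y, block=20000, step=500):
    """x, y: numpy arrays for one column and the target. Fit a logistic regression on
    overlapping sorted blocks and return the curve (block midpoints, probabilities)."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    mids, probs = [], []
    for start in range(0, len(xs) - block + 1, step):
        xb = xs[start:start + block].reshape(-1, 1)
        yb = ys[start:start + block]
        if yb.min() == yb.max():          # a block needs both classes to fit
            continue
        lr = LogisticRegression().fit(xb, yb)
        mid = float(np.median(xb))        # "midpoint" of the block's values (my reading)
        mids.append(mid)
        probs.append(lr.predict_proba([[mid]])[0, 1])
    return np.array(mids), np.array(probs)

# per-column prediction for new values via linear interpolation between midpoints:
# p_new = np.interp(x_new, mids, probs)
</code></pre>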
<h3>Combining the predictions from different columns</h3>
<p>Chris Deotte's naive Bayes kernel (4) shows that the features are independent within each target class. The approach in the kernel is to multiply all individual column probabilities. However, Julian points out in the comments that this makes the additional assumption that the individual columns are completely independent, not just conditionally independent given the target class. I used the mean of the logits of the individual column predictions, which doesn't require this assumption (more detailed explanation in the pdf on my github page).</p>
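<p>In code, this combination step is just the mean of the per-column logits (the clipping constant is my addition to avoid infinities):</p>
<pre><code>import numpy as np

def combine_by_logit(col_preds):
    """col_preds: (n_samples, 200) per-column probabilities; the returned score has the
    same ordering as the Bayesian combination, which is all roc-auc needs."""
    p = np.clip(col_preds, 1e-6, 1 - 1e-6)
    return np.mean(np.log(p / (1 - p)), axis=1)
</code></pre>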
<h3>References</h3>
<ol>
<li><a href="https://www.kaggle.com/brandenkmurray/randomly-shuffled-data-also-works?scriptVersionId=11467087">https://www.kaggle.com/brandenkmurray/randomly-shuffled-data-also-works?scriptVersionId=11467087</a></li>
<li><a href="https://www.kaggle.com/c/santander-customer-transaction-prediction/discussion/83882">https://www.kaggle.com/c/santander-customer-transaction-prediction/discussion/83882</a></li>
<li><a href="https://www.kaggle.com/yag320/list-of-fake-samples-and-public-private-lb-split?scriptVersionId=11948999">https://www.kaggle.com/yag320/list-of-fake-samples-and-public-private-lb-split?scriptVersionId=11948999</a></li>
<li><a href="https://www.kaggle.com/cdeotte/modified-naive-bayes-santander-0-899?scriptVersionId=11969430">https://www.kaggle.com/cdeotte/modified-naive-bayes-santander-0-899?scriptVersionId=11969430</a></li>
</ol>
<p>This term is constant, i.e. the same for every row, since it doesn't depend on the features. The roc-auc metric depends only on the ordering of the rows, so adding a constant to all rows doesn't affect the score. (In fact this term can be calculated: P(y=1) is just the prior probability that target is true, which is around 0.1, and P(y=0) is around 0.9. But we don't need it.)</p>
<p>Thanks for the congrats everyone, I'm disappointed to see that I still ended up one short of gold. But as always I learnt a lot so can't complain too much :)</p>
<p>Cool, good to have an independent confirmation of my thinking!</p> |
Santander Customer Transaction Prediction | 2nd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Transaction Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks, we were here 1 week ago
<img src="https://storage.googleapis.com/kaggle-forum-message-attachments/513073/12915/1st_of_April.png" alt=""></p>
<p>First, I don't want to write this title. I'm fed up with 2nd place.</p>
<p>But I will share our solution briefly (a short code sketch follows the list).</p>
<ol>
<li>remove fake from test</li>
<li>concat train and test, and then invert some of the features</li>
<li>standard scaling</li>
<li>count encoding</li>
<li>count <code>round</code> encoding</li>
<li>unpivot all vars (so we have 200k x 200 = 40M train samples)</li>
<li>train and predict</li>
<li>convert prediction(200k x 200) into odds. We used <code>(9 * p / (1 - p))</code></li>
<li>submit</li>
<li>press 'F' to pay respects</li>
</ol>
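<p>The full code is in the repository linked below; a condensed sketch of the unpivot step (6) and the odds combination (8), where the '_scaled' and '_count' columns are assumed to come from steps 3-4:</p>
<pre><code>import pandas as pd

cols = [f'var_{i}' for i in range(200)]

def unpivot(df):
    """One row per (sample, variable); the '_scaled' / '_count' columns are assumed to
    have been created by the scaling and count-encoding steps above."""
    parts = []
    for i, col in enumerate(cols):
        parts.append(pd.DataFrame({
            'sample_id': df.index,
            'var_id': i,                    # which variable this row came from (categorical)
            'value': df[col + '_scaled'].values,
            'count': df[col + '_count'].values,
        }))
    return pd.concat(parts, ignore_index=True)

def combine(long_df):
    """Fold the 200 per-variable probabilities p back into one score per sample
    via the product of prior-adjusted odds, 9 * p / (1 - p)."""
    odds = 9 * long_df['p'] / (1 - long_df['p'])
    return odds.groupby(long_df['sample_id']).prod()
</code></pre>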
<p>I will share codes later. Thanks,</p>
<p>====================================
Edit1: our best model is <code>NN: LGB 3:1</code></p>
<p>Edit2:
my github: <a href="https://github.com/KazukiOnodera/santander-customer-transaction-prediction">https://github.com/KazukiOnodera/santander-customer-transaction-prediction</a>
Golf: <a href="https://github.com/KazukiOnodera/santander-customer-transaction-prediction/blob/master/py/990_2nd_place_solution_golf.py">https://github.com/KazukiOnodera/santander-customer-transaction-prediction/blob/master/py/990_2nd_place_solution_golf.py</a></p>
<p>Edit3: confirmed code golf score
<img src="https://storage.googleapis.com/kaggle-forum-message-attachments/512983/13210/2019-05-14%2012.00.35.png" alt="pic"></p>
<blockquote>
<p>I didn't understand why have you converted prediction like applying formula into odds</p>
</blockquote>
<p><a href="https://www.kaggle.com/c/santander-customer-transaction-prediction/discussion/88939#514367">https://www.kaggle.com/c/santander-customer-transaction-prediction/discussion/88939#514367</a></p>
<blockquote>
<p>how it increases your accuracy</p>
</blockquote>
<p>If I remember correctly, 0.923 into 0.925.</p>
<p>Good question. It is used as a category.
<a href="https://github.com/KazukiOnodera/santander-customer-transaction-prediction/blob/master/py/990_2nd_place_solution_golf.py#L233">https://github.com/KazukiOnodera/santander-customer-transaction-prediction/blob/master/py/990_2nd_place_solution_golf.py#L233</a></p>
<p>I must say that the vars I dropped don't make much of a difference. </p>
<p>Regarding reverse, we believed this kernel <a href="https://www.kaggle.com/sibmike/are-vars-mixed-up-time-intervals">https://www.kaggle.com/sibmike/are-vars-mixed-up-time-intervals</a></p>
<ol>
<li>for groupKfold</li>
<li>it improves cv
Sorry, I'm using <code>pandas==0.24.0</code>. I will add requirements later</li>
</ol>
<p>treat <code>var_0</code>, <code>var_1</code>...... <code>var_199</code> as the same feature</p>
<p>We need to convert 200 predictions to 1 prediction. So we did math.
<img src="https://storage.googleapis.com/kaggle-forum-message-attachments/513175/12919/brba_science.png" alt=""></p>
<p>Thank you, but this is not a win for me 😭</p>
<p>These lines
<a href="https://github.com/KazukiOnodera/santander-customer-transaction-prediction/blob/master/py/990_2nd_place_solution_golf.py#L157-L163">https://github.com/KazukiOnodera/santander-customer-transaction-prediction/blob/master/py/990_2nd_place_solution_golf.py#L157-L163</a></p>
<p>Do you understand odds correctly? We tried some methods like linear combination, polynomial fitting, etc...
But the product was the best method.</p>
<p>done</p>
<p>Thanks, also I'm really looking forward to seeing it!</p>
<p>Just internet slang :)</p>
<p>teamed up with our teammates?</p>
<p>Actually, the order doesn't matter. We tested it. In order to unpivot, we need the scaling.</p>
<p>here it is
<a href="https://github.com/KazukiOnodera/santander-customer-transaction-prediction/blob/master/py/990_2nd_place_solution_golf.py">https://github.com/KazukiOnodera/santander-customer-transaction-prediction/blob/master/py/990_2nd_place_solution_golf.py</a></p>
<p>Happy to hear that! And congrats 1st place!</p>
<p>Finally...! Thank you!!!</p>
<p>Thank you for your compliment! I'm still learning : )</p> |
Santander Customer Transaction Prediction | 30th solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Transaction Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Appreciate your kind word :)</p>
<p>Thanks Pan. I saw your write up just now. Thanks for the sharing, too!</p>
<p>Thanks for saying so. Wish you good luck, too :D</p>
<p>Thanks for the compliments @Icstat. Now I realize why we call this subject data science: using the data, make a hypothesis, do an experiment, and test it.
And congratulations on your work, too!</p>
<p>Congratulate all the winners. And appreciate the Santander to host this competition. </p>
<p>First of all, it was a really nice experience to join this competition as a team with my teammates, <a href="/limerobot">@limerobot</a> and <a href="/mohamedjedidi">@mohamedjedidi</a>. Especially to <a href="/limerobot">@limerobot</a>: I was really impressed by your passion and logical thinking about the dataset. Without your help, we couldn’t have achieved this result.</p>
<p>I want to write down how our team approached the competition in chronological order. At the first stage, I thought de-normalization might help. However, all the values in the dataset have 4 decimal places, which means they were somehow transformed by the organizer (by rounding? just speculation). So it was not possible to retrieve the original values. </p>
<p>Then I looked back at a similar previous competition, Porto Seguro’s Safe Driver Prediction, in the sense that all the columns are anonymous and the objective is binary classification. As I read most of the kernels and discussions, it seemed like there were two important concepts that I could apply to this competition. One was 'feature_selection', and the other was ‘sandwich upsampling’ (duplicating the positive rows). So my next experiment was implementing four different feature-selection techniques (Boruta, feature selector, target permutation, recursive feature elimination). </p>
<p>As a result, I found out that removing these 9 features boosted any model (xgb, lgbm, cat): 'var_185','var_103','var_117','var_158','var_7','var_98','var_185','var_96','var_7'. Then 3 meaningless weeks passed. <a href="/mohamedjedidi">@mohamedjedidi</a> and I did a lot of experiments, but none of them helped the model. Having done those experiments, two things caught my eye. First, the number of unique values for every column is always bigger in the test set than in the training set. Second, there’s no NaN value in the whole dataset. I downloaded the past Santander competition’s dataset and discovered there were really lots of missing values. Moreover, I’ve heard it’s nearly impossible not to have missing values in a full dataset, especially in the banking industry. So I concluded that the organizer had somehow imputed missing values (I think this was the key to arriving at the so-called ‘magic’), but I didn’t know how to use this information.</p>
<p>At that time, I teamed up with <a href="/limerobot">@limerobot</a>. He had already implemented a similar idea and got 0.906. What he did was make two different unique-value lists for each column, for the training set and the test set separately, then create the first 200 features by replacing values in the training set’s unique list for each column with NaN, and another 200 features by replacing values in the test set’s unique list for each column with NaN. I simply modified the lists he made. I built the first column set from a list based on the “real_dataset” (training set + non-fake test set), and the second column set from the “fake_dataset” (fake test set). All of a sudden, the LB score skyrocketed from 0.906 to 0.922. The next thing that worked was making different lists by changing a threshold (for example, the original list was made with threshold ‘1’). We experimented with it more deeply and concluded that the features created from the fake test set were not helpful, so we deleted them. Our best single model used 601 features: 200 original features (a), 200 features with threshold 1 on the “real_dataset” (b), 200 features with threshold 2 on the “real_dataset” (c), and the null counts of the middle 200 features (b).</p>
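<p>A hedged sketch of those threshold features; <code>train</code> and <code>real_test</code> are placeholder frames, and treating "threshold 2" as counts of at most 2 is my assumption:</p>
<pre><code>import pandas as pd

cols = [f'var_{i}' for i in range(200)]
# "real_dataset" = training set + non-fake test set (placeholder frames)
real = pd.concat([train[cols], real_test[cols]])

def nan_by_threshold(df, threshold):
    """Replace values whose count in the real_dataset is at most `threshold` with NaN."""
    out = pd.DataFrame(index=df.index)
    for col in cols:
        counts = real[col].value_counts()
        rare = counts[counts.le(threshold)].index
        out[col + f'_thr{threshold}'] = df[col].where(~df[col].isin(rare))
    return out

feat_b = nan_by_threshold(train, 1)            # the 200 "threshold 1" features (b)
feat_c = nan_by_threshold(train, 2)            # the 200 "threshold 2" features (c)
null_count_b = feat_b.isnull().sum(axis=1)     # the single null-count feature over (b)
</code></pre>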
<p>We used 5-fold stratified k-fold as the validation strategy. For each fold, after duplicating all the positive training rows, we applied an augmentation strategy (200,000 new rows for each label). The final submission (LB 0.924) is a simple average of XGBoost and LightGBM. For each model, we used 20 different seeds to avoid overfitting and to generalize well. </p>
<p>It’s such a shame that our team missed the gold. However, I don’t regret participating in this competition, except for one thing, and I really enjoyed testing every new idea I wanted to delve into. I needed to be more careful when teaming up with someone. On the last day of the team merge, we got a request from a guy. We invited him because he had two silver medals from past competitions and said he would actively participate. Even though we found the so-called “magic” in the fake dataset, we couldn’t catch the fakeness in what he said. I was really surprised by the attitude he showed, and realized that the easiest way to get a medal is to be on a top-scoring team and not do anything. </p>
<p>Thanks for reading. After cleaning up the code, I’m going to upload a link to it. Happy Kaggling :D. </p>
<p>Totally agree with you, @E.Y. Without that period, it might have been impossible to take the next step. However, those 3 weeks really hurt... Good medicine tastes bitter!</p> |
Santander Customer Transaction Prediction | 3rd Place Solution Summary and Q&A | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Transaction Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, we would like to congratulate everyone who put in the time and effort. Kaggle isn't only about the LB, it's more about the learning; that's what I have learnt.</p>
<p>As requested in my <a href="https://www.kaggle.com/c/santander-customer-transaction-prediction/discussion/88861#512622">previous discussion</a>, I'm going to share how our team achieved our results. To be honest, I'm still flabbergasted by how we even achieved our ranking. </p>
<p>A detailed kernel and explanation will be posted by Nawid (he's the real wizard, and he's also looking for a job XD). The kernel should also contain our thought process, as well as the discussion that somehow took place in the kernel itself.</p>
<p>So I was asked
A) how you teamed up, did you reach out, where you approached, did you already know your peoples, etc.,</p>
<p>To be honest, I didn't know any of my teammates previously. I had this question of whether to merge or not, and I even posted <a href="https://www.kaggle.com/c/santander-customer-transaction-prediction/discussion/87193">this</a> to ask for opinions. In the end I took the leap of faith and merged with interneuron. Basically all the discussions were on the forum, and the only thing I did was validate that these were people who took the time and effort to figure out the data, not just people who forked kernels. This isn't much, but it's what I did.</p>
<p>B) what your thought process was like that led to your various incremental discoveries, i.e. not just what worked but how you guided yourself.
Initial breakthrough to 0.901: we figured out that counts improved CV; however, the LB did not change, and after rereading the forums we realised that we had to remove the fake test rows when computing them.
Initially we thought that this wasn't the magic and didn't pay much attention to it, and continued to try to find the magic. It was only after Nawid joined us that we realised that the actual magic of this dataset is that each column is independent and not correlated. Our hypothesis was that the data was already pre-shuffled within the 1s and 0s, which was why shuffling it again still works. We went back to the discussions again and found another interesting post about the fact that everyone jumped after the NB kernel was shared. So we tried training models on each feature independently, however without much improvement. This was where we started using LGBM instead of NB on each feature, which was where we broke the 0.92 mark.</p>
<p>Our last idea that gave us the push was to use an NN to blend instead of simple linear blending, as it can factor in feature importance as well. This worked so well that it made us jump to 5th on the first try; however, we found this too late and didn't have much time to improve it further.</p>
<p>C) What features did you use for the blend? also original features? or only OOF predictions? What architecture?
OOF, count, density, deviation and original input. 4 layers of Conv1D. Strides of first layer to match number of features. 4 in this case</p>
<p>Adding your question into the post</p>
<p>It over-fitted to some extent but it somehow provided us the jump to 0.925. We didn't have time to fully analyse it as we only had about 26 hours left. </p>
<p>I would consider the counts as magic. But training the features individually is more data science than magic, imo. It did give us a very significant improvement, though. </p>
<p><code>np.sqrt(prediction - prediction.mean())</code></p>
<p>This was what we initially did</p> |
Santander Customer Transaction Prediction | 52nd Solution: "Is Var_108 a Date?" | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Transaction Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yes, I oversampled too by adding more rows with <code>target=1</code>, which helped increase CV and LB.</p>
<p>Yeah I got lucky. I read a discussion post of someone saying how CatBoost performed the best. I tried it and it increased my LB.</p>
<p>I doubt it's a date too, but if you generate random data with mean = 14.2 and std = 0.171 and round to 4 decimal places, you will not see any values with frequency counts over 65. Therefore the presence of frequency counts over 300 is very strange. Below are the frequency counts of random data, the highest count is around 65. </p>
<p><img src="http://playagricola.com/Kaggle/counts41119.png" alt="image"></p>
<p>My entire LB 0.923 solution uses only generated data and no real data, lol. I generated 500,000,000 new rows of data, similar to Jiwei's kernel. I would create an entire copy of the 200,000 rows, and 4 more copies of the <code>target=1</code> rows, and then shuffle the columns. The important thing is to shuffle <code>var_0</code> and <code>var_0_FE</code> together. You must keep the original value and its frequency count in the same row. </p>
<p>By doing this you prevent your model from trying to find interactions between variables. But your model will still find the interaction between each original variable and its frequency count.</p>
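<p>A minimal sketch of this augmentation, assuming the 400 columns are laid out as 200 raw values followed by their 200 count features (the layout and the function name are my own):</p>
<pre><code>import numpy as np

def augment_class(X, copies):
    """X: rows of ONE target class, shaped (n, 400) = 200 raw values then their 200 counts.
    Returns `copies` shuffled copies where each (value, count) pair moves together."""
    out = []
    for _ in range(copies):
        Xc = np.empty_like(X)
        for j in range(200):
            idx = np.random.permutation(len(X))
            Xc[:, j] = X[idx, j]              # raw value of var_j
            Xc[:, 200 + j] = X[idx, 200 + j]  # its frequency count moves with it
        out.append(Xc)
    return np.vstack(out)

# Shuffling must stay within a target class, e.g. (placeholder names):
# X_aug = np.vstack([X_neg, X_pos, augment_class(X_neg, 1), augment_class(X_pos, 1 + 4)])
# with y_aug built to match.
</code></pre>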
<p>Haha, I don't think it is but I'm surprised that nobody posted that it might be. If you plot <code>var_108</code> histogram, you see a tall spike. </p>
<p><img src="http://playagricola.com/Kaggle/108spike.png" alt="image"></p>
<p>And if you display which values are in the spike, you see that it is the years 1990 thru 2010.</p>
<p><img src="http://playagricola.com/Kaggle/108countsB.png" alt="image"></p>
<p>I created 20 boolean features (20 new columns) like <code>is_var_108_equal_14.2000=1</code> when <code>var_108=14.2000</code>. I also did this for all the spikes in all the variables and my CV increased above 0.901. Then I realized you can capture all spikes at once by frequency encoding all the variables. That increased my CV to 0.904! (Note: at this point I didn't understand the significance of count=1)</p>
<ul>
<li>Next I changed LGBM's hyperparameters and got to 0.914</li>
<li>Next I used data augmentation and got to 0.921 (When you shuffle columns, keep <code>var_0</code> and <code>var_0_FE</code> in the same row.)</li>
<li>Next I created a variety of models, ensembled, and stacked and got to 0.923.</li>
</ul>
<p>My best single model uses 400 features (200 original and 200 counts) and scores Public LB 0.92275 (with data augmentation). It uses CatBoost with <code>max_depth=7</code>, <code>learning_rate=0.04</code>, <code>colsample_bylevel=1.0</code>, <code>objective='Logloss'</code>, <code>eval_metric = 'AUC'</code>, and <code>early_stopping_rounds=1000</code> and stratified <code>21-folds</code> :-) It completes in 2 hours. (Note: to use GPU, you must add <code>task_type = 'GPU'</code>) </p>
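<p>A rough transcription of that CatBoost setup with the stated hyperparameters (X and y are placeholders for the 400-feature matrix and target as numpy arrays; the data augmentation step is omitted here):</p>
<pre><code>import numpy as np
from catboost import CatBoostClassifier
from sklearn.model_selection import StratifiedKFold

params = dict(iterations=100000, max_depth=7, learning_rate=0.04,
              colsample_bylevel=1.0, loss_function='Logloss',
              eval_metric='AUC')            # add task_type='GPU' to train on GPU

folds = StratifiedKFold(n_splits=21, shuffle=True, random_state=42)
oof = np.zeros(len(X))
for trn_idx, val_idx in folds.split(X, y):
    model = CatBoostClassifier(**params)
    model.fit(X[trn_idx], y[trn_idx],
              eval_set=(X[val_idx], y[val_idx]),
              early_stopping_rounds=1000, verbose=1000)
    oof[val_idx] = model.predict_proba(X[val_idx])[:, 1]
</code></pre>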
<p>My favorite model builds 200 separate models, one for each variable with its frequency count feature, and then ensembles them using logistic regression. It scores Public LB 0.920 and helps my overall ensemble. By doing this we can display images of the classification decisions for each variable to understand the magic. (This is when I learned about the importance of count=1) I posted this kernel <a href="https://www.kaggle.com/cdeotte/200-magical-models-santander-0-920">here</a>. </p>
<p>EDIT: I did all this after Yag320 posted his kernel about fake data, so naturally when I frequency encoded, I combined train with the real test data. I just discovered now, that if you frequency encode with train combined with all test data, your LB is 0.900 even when your CV is 0.922. So excluding fake data is key to high LB.</p>
<p>I know it is significant because I see that it is significant in this kernel <a href="https://www.kaggle.com/cdeotte/200-magical-models-santander-0-920/notebook">here</a>.</p>
<p>My theory is that Santander up-sampled the data. They only had 0.1% of <code>target=1</code>, so they made more <code>target=1</code> rows by creating new rows and choosing variable values from existing rows. This increased the counts of the <code>target=1</code> variable values, allowing us to use data leakage to find them. </p>
<p>Thanks True_pk</p>
<p>Calculate counts before augmentation. Then add copies of train and shuffle columns. But keep <code>var_0</code> and <code>var_0_FE</code> in the same row. (i.e. shuffle them together.)</p>
<p>LGBM can locate bumps (that have width 0.01 or so on histogram) but LGBM can not locate spikes (that have width 0.0001 on histogram). Suppose there are 20 different spikes in one variable. How can LGBM find these spikes when <code>num_leaves=3</code>? To isolate 20 spikes in one variable, LGBM needs at least <code>num_leaves=40</code>. But LGBM uses 256 bins and it groups individual variable values together so it will never locate individual variable values, i.e. spikes, even with <code>num_leaves=100</code>.</p>
<p>My first attempts had very low AUC because I was just adding (or multiplying) the predictions together. I nearly gave up but then I used logistic regression to combine the models and the AUC jumped to 0.918. It was exciting.</p> |
Santander Customer Transaction Prediction | 5TH Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santander Customer Transaction Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to the winners and the hosts – this was a fun competition. I would also like to thank my teammates for doing a great job trying to decipher the data. </p>
<p>I would have preferred 6th rather than 5th - now we have to reproduce the results! </p>
<p>In the beginning of this competition I was relieved because I could not see any leakage – little did I know! I could not have been more wrong! I would guess that the top solutions don't add much value for this problem as everybody was exploiting the leakage - maybe in the future the organisers need to be more careful with the data. </p>
<p>On the other hand, if there was no leakage, it wouldn't be interesting!</p>
<p>To the matter at hand.</p>
<h2>Leakages</h2>
<ol>
<li><p><a href="https://www.kaggle.com/c/santander-customer-transaction-prediction/discussion/85125#latest-511175">Half of the test data is fake</a> </p></li>
<li><p>Count of unique values of all features, computed on train + real test data together. If you also include the fake data in the calculations, the uplift will <strong>NOT SHOW</strong> on LB. In other words, you need to create 200 extra features representing the count of unique values for all numerical features; they have some categorical properties. This can get a lightgbm model to the <code>0.914x</code> level.</p></li>
<li><p>Augmentation. Since the features are uncorrelated (also explained by Branden <a href="https://www.kaggle.com/c/santander-customer-transaction-prediction/discussion/83882#latest-499474">here</a>), you could generate multiple artificial samples. The augmentation we used was similar to <a href="https://www.kaggle.com/jiweiliu/lgb-2-leaves-augment">this</a>. What made all the difference was combining the previous points 1 and 2. E.g., when you do the augmentation and take random values from one column, you need to make certain that you also sample the unique counts of these values for that column (as computed from the combined train + real test). Trying different augmentation ratios (we found increasing the positives 20 times and the negatives 2 times to be around the best for maximum LB performance), you could get lightgbm to <code>0.920x</code>. </p></li>
</ol>
<h2>Magic</h2>
<ol>
<li><code>NN</code>s performed better than <code>lightgbm</code>s, but needed an input shape of <code>Input(shape=(200,2))</code>. The first dimension represents the features and the second the unique count associated with these features. They could easily reach <code>0.924</code> on the public LB with this transformation + augmentation. This NN's architecture and augmentation can be seen <a href="https://www.kaggle.com/c/santander-customer-transaction-prediction/discussion/88929#latest-514001">here</a>: </li>
<li>For <code>lightgbm</code> it helped using <code>minmaxscaler</code> of (-4,4) on the original features and then doing <code>Xn**countn</code> where the count is clipped between 1 and 3</li>
<li>Again for <code>lightgbm</code> it helped using <code>minmaxscaler</code> of (-4,4) on the original feature and then doing <code>countn** Xn</code> where the count is clipped between 1 and 3</li>
<li>Also, <code>xn**-3</code> helped a bit (these lightgbm transformations are sketched just after this list)</li>
<li>Also very important to get almost another +0.0005 was pseudo-labelling</li>
</ol>
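<p>A small sketch of the lightgbm transformations in points 2-4 (X_raw and C are placeholders for the raw value matrix and the matrix of unique-value counts):</p>
<pre><code>import numpy as np
from sklearn.preprocessing import MinMaxScaler

# X_raw: (n, 200) original values; C: (n, 200) unique-value counts on train + real test
scaler = MinMaxScaler(feature_range=(-4, 4))
Xs = scaler.fit_transform(X_raw)
Cc = np.clip(C, 1, 3)                 # counts clipped between 1 and 3

feat_x_pow_c = Xs ** Cc               # Xn**countn
feat_c_pow_x = Cc ** Xs               # countn**Xn
feat_x_pow_m3 = Xs ** -3.0            # xn**-3 (values exactly at 0 would need a small offset)
</code></pre>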
<p>We were trying to build a different model for each column, but it did not work out very well.</p>
<p>Stacking added around +0.001. NN was significantly better at stacking than any other method.</p>
<p>This is on OOF only. We had 60+ models.</p>
<p><code>batch_size=512</code>
<code>epochs=36</code></p>
<pre><code>from keras.models import Sequential
from keras.layers import Dense, Activation, BatchNormalization, Dropout
from keras.optimizers import Nadam

def build_model(input_dims):
    # two dense blocks (elu + batch norm + dropout) on top of the stacked OOF features
    model = Sequential()
    model.add(Dense(1000, input_dim=input_dims, kernel_initializer='glorot_uniform'))  # tried W_regularizer=l2(0.000001), kernel_regularizer=keras.regularizers.l2(0.00000001)
    model.add(Activation('elu'))
    model.add(BatchNormalization())
    model.add(Dropout(0.5))
    model.add(Dense(500, kernel_initializer='glorot_uniform'))  # tried W_regularizer=l2(0.000001)
    model.add(Activation('elu'))
    model.add(BatchNormalization())
    model.add(Dropout(0.4))
    opt = Nadam(lr=0.01)
    # single sigmoid output for the binary target
    model.add(Dense(1, kernel_initializer='glorot_uniform'))
    model.add(Activation('sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['binary_crossentropy'])
    return model
</code></pre>
<p>Indeed it was. </p>
<p>Yes. NN was also the best for stacking too.</p>
<p>OOF predictions only.</p> |