Title of Competition | Title of Writeup | User | Writeup
---|---|---|---|
Severstal: Steel Defect Detection | 1st Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yes</p>
<p>Around 10 more True Positives and 15 more True Negatives on public LB.</p>
<p>Yes, extreme imbalance between different classes. </p>
<p>Yes, ImageNet pre-trained weights are used.</p>
<p>Because I only have limited GPU memory. This can simulate the gradient calculation for large batch size. However, the BatchNorm layers are still different.</p>
<p>I had one RTX2080S, my teammates rent a GTX1080TI and RTX2080TI machine near the end of this competition.</p>
<p>In PyTorch, call loss.backward() for every batch but call optimizer.step() only after X images have been trained.</p>
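<p>A minimal sketch of this gradient-accumulation loop in PyTorch; the toy model, loader and accumulation size below are illustrative, not the authors' exact setup:</p>
<pre><code>import torch
import torch.nn as nn

# Toy setup purely for illustration; the real models and batch sizes in the write-up differ.
model = nn.Linear(10, 1)
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loader = [(torch.randn(8, 10), torch.randint(0, 2, (8, 1)).float()) for _ in range(8)]

accumulate_for = 32   # call optimizer.step() only after 32 images have contributed gradients
seen = 0
optimizer.zero_grad()
for images, targets in loader:
    loss = criterion(model(images), targets)
    # divide so the accumulated gradient approximates the average over the large batch
    (loss * images.size(0) / accumulate_for).backward()
    seen += images.size(0)
    if seen >= accumulate_for:
        optimizer.step()
        optimizer.zero_grad()
        seen = 0
</code></pre>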
<ol>
<li>We tried several values on the same fold, but actually their performances on local CV were really close. Compared with a single BCE loss, though, fine-tuning with soft Dice loss improves the Dice on local CV.</li>
<li>We tuned this strategy against the leaderboard score. Anyway, the effect of postprocessing becomes smaller as the score gets higher.</li>
</ol>
<p>For the public dataset, only around 850 images pass the classifiers and we only run segmentation networks on this partition. Using 1 dataloader instead of 3 and stacking images into a batch of 3 saves around 2-3 minutes on the public dataset. Benchmark mode of PyTorch is also turned on. And we are fairly sure that the number of defective images in the private dataset is similar, since the time spent on the private set is around 3 times as long.</p>
<p>Congratulations to all winners in this competition! This is my first competition in Kaggle and also the first competition for my teammates <a href="/zhuhongbo">@zhuhongbo</a> and <a href="/paffpaffyu">@paffpaffyu</a>. They were my former classmates in the university. We are all very surprised that we won the competition. </p>
<p>First of all, thanks to <a href="/hengck23">@hengck23</a> for sharing his insightful ideas. We followed some of his posts and got a huge improvement. And also thanks to <a href="/lightforever">@lightforever</a>, our best final solution used several models from him. And thanks to <a href="/pavel92">@pavel92</a> for his convenient segmentation library.</p>
<p><strong>Classification</strong>
Classification is an important part of this competition. Even though classifiers can only slightly improve your score beyond 0.915 on the public LB, they can work as a preliminary screening step by filtering out around half of the images with no defects. This enabled us to ensemble more models in the segmentation part.
We trained our classifiers with random crops of 224x1568 and do inference on full size. This random crop gives a slight improvement in accuracy.
Augmentations:
Randomcrop, Hflip, Vflip, RandomBrightnessContrast (from albumentations) and a customized defect blackout
Since this is a semantic segmentation task, we know exactly where the defects are. As a result, these defect components can be randomly blacked out, and the label for the image also changes from 1 to 0 if all defects are blacked out (a sketch of one possible implementation follows this section). This augmentation indeed works on local CV and public LB. Here are some graphs of the training process of a ResNet34 classifier.
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3601023%2Ffb6be779598553f5c9c1f02cecb252a4%2Floss_res22.png?generation=1571984109468690&alt=media" alt="">
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3601023%2F4c60680ddbb0c74b6d8d5ae6f86ae17f%2Faucv2.png?generation=1571984166024661&alt=media" alt="">
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3601023%2Fbe60f89444c0a3cea1a68c8be14283b6%2Faccv2.png?generation=1571984154002357&alt=media" alt="">
Batchsize: 8 for efficientnet-b1, 16 for resnet34 (both accumulate gradients for 32 samples)
Optimizer: SGD
Model Ensemble:
3 x efficientnet-b1+1 x resnet34
TTA: None, Hflip, Vflip
Threshold: 0.6,0.6,0.6,0.6</p>
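<p>A minimal sketch of the defect blackout augmentation described above, assuming the masks are available as an HxWx4 binary numpy array; the function name and blackout probability are assumptions, not the authors' code:</p>
<pre><code>import numpy as np
import cv2

def defect_blackout(image, mask, p=0.5):
    """Randomly black out the bounding box of each defect component.

    image: HxW(x3) uint8 array, mask: HxWx4 binary defect masks.
    Returns the augmented image, mask and the new per-class image labels
    (a label drops from 1 to 0 if all of its defects were blacked out).
    """
    image, mask = image.copy(), mask.copy()
    for c in range(mask.shape[-1]):
        num, comp = cv2.connectedComponents(mask[..., c].astype(np.uint8))
        for lbl in range(1, num):
            if np.random.rand() < p:
                ys, xs = np.where(comp == lbl)
                image[ys.min():ys.max() + 1, xs.min():xs.max() + 1] = 0
                mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1, c] = 0
    labels = (mask.sum(axis=(0, 1)) > 0).astype(np.float32)
    return image, mask, labels
</code></pre>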
<p><strong>Segmentation</strong>
We have to admit that we used models from <a href="/lightforever">@lightforever</a>; these models improved our score from 0.907 on the private LB to our current score.
Train data: 256x512 crop images
Augmentations: Hflip, Vflip, RandomBrightnessContrast (from albumentations)
Batchsize: 12 or 24 (both accumulate gradients for 24 samples)
Optimizer: Rectified Adam
Models: Unet (efficientnet-b3), FPN (efficientnet-b3) from <a href="/pavel92">@pavel92</a> segmentation_models_pytorch
Loss:
BCE (with pos_weight = (2.0,2.0,1.0,1.5))
0.75*BCE+0.25*DICE (with pos_weight = (2.0,2.0,1.0,1.5))
Model Ensemble:
1 x Unet (BCE loss) + 3 x FPN (first trained with BCE loss, then finetuned with BCE-Dice loss) + 2 x FPN (BCE loss) + 3 x Unet from mlcomp+catalyst infer
TTA: None, Hflip, Vflip
Label Thresholds: 0.7, 0.7, 0.6, 0.6
Pixel Thresholds: 0.55,0.55,0.55,0.55
Postprocessing:
Remove whole mask if total pixel < threshold (600,600,900,2000) + remove small components with size <150</p>
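<p>A minimal sketch of this postprocessing, assuming per-class probability maps as numpy arrays; the thresholds are the ones listed above:</p>
<pre><code>import numpy as np
import cv2

MASK_THRESHOLDS = (600, 600, 900, 2000)   # per-class minimum total mask size
MIN_COMPONENT = 150                       # minimum connected-component size
PIXEL_THRESHOLDS = (0.55, 0.55, 0.55, 0.55)

def postprocess(prob_mask):
    """prob_mask: 4xHxW predicted probabilities -> 4xHxW cleaned binary masks."""
    out = np.zeros_like(prob_mask, dtype=np.uint8)
    for c in range(4):
        binary = (prob_mask[c] > PIXEL_THRESHOLDS[c]).astype(np.uint8)
        # drop tiny connected components
        num, comp, stats, _ = cv2.connectedComponentsWithStats(binary)
        for lbl in range(1, num):
            if stats[lbl, cv2.CC_STAT_AREA] < MIN_COMPONENT:
                binary[comp == lbl] = 0
        # drop the whole mask if too few pixels remain
        if binary.sum() < MASK_THRESHOLDS[c]:
            binary[:] = 0
        out[c] = binary
    return out
</code></pre>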
<p><strong>Pseudo Label</strong>
We did 2 rounds of pseudo labeling in this competition. The first round was generated from a submission with 0.916 public LB, maybe that was too early? The second round was done several days before the end of this competition, generated from a submission with 0.91985 public LB. With pseudo labels and public models, we finally improved from 0.91985 to 0.92124 on public LB and from 0.90663 to 0.90883 on private LB.
The pseudo labels are chosen if the classifiers and segmentation networks make the same decisions. We got this idea from Heng. An image is only chosen if the probabilities from the classifiers are all over 0.95 or below 0.05 and it gets the same result from the segmentation part. According to this rule, 1135 images were chosen and added to the training set.</p>
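<p>A minimal sketch of this pseudo-label selection rule; the array shapes and the 0.5 classifier decision boundary are assumptions:</p>
<pre><code>import numpy as np

def select_pseudo_labels(cls_probs, seg_has_defect):
    """cls_probs: Nx4 classifier probabilities; seg_has_defect: Nx4 booleans
    (True if the segmentation networks predict a non-empty mask for that class).

    An image is kept only if every class probability is confidently high (>0.95)
    or confidently low (<0.05) and the segmentation decision agrees with it."""
    confident = np.all((cls_probs > 0.95) | (cls_probs < 0.05), axis=1)
    agree = np.all((cls_probs > 0.5) == seg_has_defect, axis=1)
    return confident & agree
</code></pre>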
<p><strong>Predictions on public LB:</strong>
Defect 1: 97(128)
Defect 2: 2(43)
Defect 3: 611(741)
Defect 4: 110(120)
Sum Pos: 820(1032)</p>
<p>Thank you for your ideas. I will give it a try.</p>
<ol>
<li>We tried this augmentation and it slightly improved the accuracy.</li>
<li>Set pixel values of the train images to zero. We first find bounding boxes for all defects and then randomly set these areas to zero. I guess it may be better to fill them with non-defective region values, like some tools in Photoshop do? But that may be really hard to implement.</li>
</ol>
<p>We were lucky. We did not have enough submissions left to play with the thresholds in the last several days.</p>
<p>It is only used during training. And in validation, only normalization is used. </p>
<p><a href="https://www.kaggle.com/c/severstal-steel-defect-detection/discussion/111457#latest-654845">https://www.kaggle.com/c/severstal-steel-defect-detection/discussion/111457#latest-654845</a>
Like what Heng did here, find the maximum probability in a single mask and use it to filter out FPs.</p>
<p>Yes, you are right. But we still put all cropped images into the training set. Negative samples are very important in this task.</p>
<p>Yes, the crops are fixed. One image is cropped into 4 images of 256x512. And I used weighted BCE to balance the samples. </p>
<p>SynchronizedBatchNorm works for multiple GPUs. I did not make any changes to the BatchNorm layers. On my machine, the BatchNorm layers work on 12 images, and on another machine they work on 24 images.</p>
<p>I guess so. This works especially on class 1, maybe because the sizes of class 1 defects are small.</p>
<p>There are adaptive layers in these networks, they will output the same size.</p>
<p>This is a multilabel problem. 4 classes, each label can either be 0 or 1.</p>
<p>Kaggle provides free GPU but you can only use 30 hours per week</p> |
Severstal: Steel Defect Detection | 24th place training framework using pytorch | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We got 24th place in Steel Defect Detection. When we played in this competition, we found that there are few good semantic segmentation training frameworks in pytorch, which made it hard to run experiments, so we developed a library which mimicked <a href="https://github.com/open-mmlab/mmdetection">mmdetection</a>.</p>
<p>Our implementation is named <a href="https://github.com/Media-Smart/vedaseg">vedaseg</a>, it contains fpn, unet, deeplabv3plus, deeplabv3, pspnet, etc. This implementation is a modular semantic segmentation library which is flexible and extensible. We can implement a new semantic segmentation model like building a lego toy.</p>
<p>the repo has moved to <a href="https://github.com/Media-Smart/vedaseg">https://github.com/Media-Smart/vedaseg</a></p> |
Severstal: Steel Defect Detection | 27th place solution (8 classification and 8 segmentation models) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Better late than never, I would like to share a short write up of my solution.</p>
<p>My solution is just a combination of 16 models:</p>
<p><strong>Segmentation</strong>
4 x PSPNets with resnet152 backbones + 1 x FPN with resnet50 backbone, trained on full-sized images
3 models from the notebook of @lightforever</p>
<p><strong>Classification</strong>
2 x efficientnet-b4 + 2 x efficientnet-b5, trained on 64 x 400 images
1 x resnext101_32x4d + 1 x densenet201, trained on 224 x 224 resized images
1 x resnet152 + 1 x resnet50, trained on full-sized images</p>
<p><strong>Training</strong>
All my models were trained on a random 80% subsample of the training data each, using Catalyst and strong augmentations from Albumentations (it was my first serious experience with both libraries, so I experimented with a lot of different augmentations that seemed to produce reasonable results):</p>
<pre><code>A.VerticalFlip(p=0.5),
A.HorizontalFlip(p=0.5),
A.ShiftScaleRotate(rotate_limit=15, shift_limit=0.01, p=0.3),
A.GridDistortion(p=0.2),
A.OpticalDistortion(p=0.2),
A.Blur(blur_limit=1, p=0.2),
A.CLAHE(clip_limit=2, tile_grid_size=(16, 16), p=0.2),
A.HueSaturationValue(p=0.1),
A.JpegCompression(quality_lower=50, p=0.2),
A.RandomBrightnessContrast(brightness_limit=0.3, contrast_limit=0.3, p=0.2),
A.RandomGamma(p=0.2),
# plus A.RandomSizedCrop or A.Resize
</code></pre>
<p>For segmentation models I have used:
1) dice + pixelwise BCE loss if the class mask is not empty
2) pixelwise BCE + overall BCE (based on max predicted pixel value) if the class mask is empty</p>
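<p>A minimal sketch of this loss switching, assuming BxCxHxW logits and float binary target masks; the Dice formulation and per-channel loop are generic illustration choices, not necessarily the author's exact implementation:</p>
<pre><code>import torch
import torch.nn.functional as F

def dice_loss(logits, targets, eps=1.0):
    probs = torch.sigmoid(logits)
    inter = (probs * targets).sum()
    return 1 - (2 * inter + eps) / (probs.sum() + targets.sum() + eps)

def combined_loss(logits, targets):
    """logits, targets: BxCxHxW tensors; the loss is assembled per image-class channel."""
    total, count = logits.new_zeros(()), 0
    for b in range(logits.size(0)):
        for c in range(logits.size(1)):
            lg, tg = logits[b, c], targets[b, c]
            pixel_bce = F.binary_cross_entropy_with_logits(lg, tg)
            if tg.sum() > 0:
                # non-empty mask: Dice + pixelwise BCE
                loss = pixel_bce + dice_loss(lg, tg)
            else:
                # empty mask: pixelwise BCE + "overall" BCE on the max predicted pixel
                overall = F.binary_cross_entropy_with_logits(
                    lg.max().unsqueeze(0), torch.zeros(1, device=lg.device))
                loss = pixel_bce + overall
            total, count = total + loss, count + 1
    return total / count
</code></pre>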
<p>For classification models - BCE loss</p>
<p><strong>Inference</strong>
I have made simultaneous inference for all models: I have read small batches of test images, applied different sets of transformations, and computed predictions for all models. That means, I have read every image only once and it made inference quite fast.</p>
<p>The prediction of different classification and segmentation models were averaged. More complex blending schemes worsened my scores.</p>
<p><strong>TTA</strong>
Classification, TTA2: original image + horizontal or vertical or both flips (different for different models)
Segmentation, TTA2: original image + both horizontal and vertical flip simultaneously</p>
<p><strong>Postprocessing</strong>
Classification thresholds [0.5, 0.5, 0.5, 0.5]
Segmentation-classification thresholds [0.5, 0.5, 0.5, 0.5]
The sum of both classification approaches thresholds [1.05, 1.05, 1.05, 1.05]
Segmentation thresholds [0.3, 0.3, 0.3, 0.3]
Min sizes [400, 400, 1400, 4000]</p>
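<p>One possible reading of these thresholds as code; the write-up does not spell out the exact rule for combining the two classification scores, so requiring all three conditions below is an assumption:</p>
<pre><code>import numpy as np

CLS_T = [0.5, 0.5, 0.5, 0.5]       # classifier thresholds
SEG_CLS_T = [0.5, 0.5, 0.5, 0.5]   # segmentation-as-classifier thresholds (max pixel probability)
SUM_T = [1.05, 1.05, 1.05, 1.05]   # threshold on the sum of both classification scores
PIX_T = [0.3, 0.3, 0.3, 0.3]       # pixel thresholds
MIN_SIZE = [400, 400, 1400, 4000]  # minimum mask sizes

def apply_thresholds(cls_probs, seg_probs):
    """cls_probs: 4 classifier probabilities; seg_probs: 4xHxW segmentation probabilities."""
    masks = np.zeros_like(seg_probs, dtype=np.uint8)
    seg_cls = seg_probs.reshape(4, -1).max(axis=1)   # image-level score from the segmentation output
    for c in range(4):
        keep = (cls_probs[c] > CLS_T[c]
                and seg_cls[c] > SEG_CLS_T[c]
                and cls_probs[c] + seg_cls[c] > SUM_T[c])
        if keep:
            m = (seg_probs[c] > PIX_T[c]).astype(np.uint8)
            if m.sum() >= MIN_SIZE[c]:
                masks[c] = m
    return masks
</code></pre>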
<p>My second chosen solution had different thresholds for different classes, and better public LB score – but it turned out to be overfitted to public LB. </p>
<p>It is not necessary to do input.reshape(-1) - BCE can take tensors with any shape as an input.</p>
<p>In my case, "pixelwise BCE" is just ordinary BCE used for segmentation.
I used "pixelwise" to separate it from the channel-level "classification" BCE. It is not a broad term; I just wanted to stress that I used two different BCE losses.</p>
<p>"Pixelwise BCE":
<code>BCE_loss = F.binary_cross_entropy_with_logits(inputs, targets, reduction='none')</code>
Computed for all predictions. Just a BCE for segmentation.</p>
<p>"overall BCE (based on max predicted pixel value)":
<code>F.binary_cross_entropy_with_logits(inputs.max(), torch.tensor(0., device=inputs.device))</code>
Computed for mask predictions of each class separately, and only in case where the target mask of this particular class is empty.</p> |
Severstal: Steel Defect Detection | 30th place solution with code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Wow, I hadn't expected such a big shake!
These last 2~3 weeks I almost gave up on this competition because my public LB score was so poor, but it seems it was better not to throw it away, and I'm so lucky!</p>
<hr>
<p>My solution was not so special, but I want to share it for the case it's useful for beginner or someone who wants to start segmentation competition.</p>
<ul>
<li>apply 5 class classification including a background class, and then 4 class segmentation.</li>
<li>For classification task, I trained resnet50, efficientnet-b3 and se-resnext50.</li>
<li>For segmentation task, Unet with resnet18, PSPNet with resnet18 and FPN with resnet50.</li>
</ul>
<p>What worked:
- random crop by 256x800 or full 256x1600
- 5 class classification (including background)
- 4 class segmentation
- BCE + Dice loss</p>
<p>What didn't work:
- random crop by 256*400
- pseudo labeling
- softmax and cross entropy loss
- bigger models (encoders) were just a waste of inference time
- mixup
- cutmix
- thresholds higher than 0.5</p>
<hr>
<p>My code for the competition-specific part was mainly based on <a href="https://www.kaggle.com/rishabhiitbhu/unet-starter-kernel-pytorch-lb-0-88">this great starter kernel</a> by <a href="/rishabhiitbhu">@rishabhiitbhu</a>.
I also borrowed many, many ideas from <a href="/hengck23">@hengck23</a>'s discussions.</p>
<p>Thank you for all competitors and congrats for all finished in gold zone!</p>
<hr>
<p>Here is my code:)
<a href="https://github.com/bamps53/kaggle-severstal.git">https://github.com/bamps53/kaggle-severstal.git</a></p>
<p>Thank you so much!</p>
<p>Thanks!</p>
<p>Yes, and of course it depends on the task. In this competition, the steel surface is relatively simple compared to other object detection tasks, so a small model was enough, I believe.
Anyway, thanks a lot! I learned a lot from you!</p>
<p>Thanks, I'm glad to share with you!</p>
<p>Thanks!</p>
<p><a href="/harshahat02">@harshahat02</a>
Yes, you're right.
You can do binary classification, but to eliminate false positives, it's more accurate to judge every image-class pair.</p>
Severstal: Steel Defect Detection | 31st place solution with github code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>Team - [ods.ai] stainless</strong>
Insaf Ashrapov, Igor Krashenyi, Pavel Pleskov, Anton Zakharenkov, Nikolai Popov</p>
<p><strong>Models</strong>
We tried almost every type of model from qubvel's segmentation models library - unet, fpn, pspnet with different encoders from resnet to senet152. FPN with se-resnext50 outperformed the other models. Lighter models like resnet34 didn't perform well enough on their own but were useful in the final blend. Se-resnext101 could possibly perform much better with more training time, but we didn't test that.</p>
<p><strong>Augmentations and Preprocessing</strong>
From <strong>Albumentations</strong> library:
Hflip, VFlip, RandomBrightnessContrast – training speed was not too fast, so these basic augmentations performed well enough. In addition, we used big crops for training or/and finetuning on the full image size, because attention blocks in image tasks rely on the same input size for the training and inference phases.</p>
<p><strong>Training</strong>
- We used both pure pytorch and Catalyst framework for training.
- Losses: BCE and BCE with Dice performed quite well, but Lovasz loss dramatically outperformed them in terms of validation and public score. However, when combined with a classification model, BCE with Dice gave a better result; that could be because Lovasz helped the model filter out false-positive masks. Focal loss performed quite poorly due to the not-very-good labeling.
- Optimizer: Adam and RAdam. LookAhead and Over900 didn't work well.
- Crops with a mask, BalanceClassSampler with upsampler mode from catalyst significantly increased training speed. Gradient accumulation as well.</p>
<ul>
<li><p>We tried our own classification model (resnet34 with CBAM), setting the goal of improving F1 for each class. The optimal threshold was disappointingly unstable, but we reached an averaged F1 of 95.1+. As a result, Cheng's classification was used.</p></li>
<li><p>Validation: kfold with 10 folds. Despite the shake-up, local, public and private scores correlated surprisingly well.</p></li>
<li><p>Pseudolabeling: we did two rounds of pseudo labeling by training on the best public submit and validating on the out-of-fold data. It didn't work the third time but gave us a huge improvement.</p></li>
<li><p>Postprocessing: filling holes, removing small masks by a threshold. We tried to remove small objects by connected components with no improvements.</p></li>
<li><p>Hardware: bunch of nvidia cards</p></li>
</ul>
<p><strong>Ensembling</strong>
Simple averaging of segmentation models with different encoders, both FPN and Unet, applied to images classified as having a mask. One of the unchosen submissions would have given us 16th place.</p>
<p>Partial solution: <a href="https://github.com/Diyago/Severstal-Steel-Defect-Detection">https://github.com/Diyago/Severstal-Steel-Defect-Detection</a> </p> |
Severstal: Steel Defect Detection | 3rd Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>It didn't help at all. I added it to the ensemble only because I thought it might bring more diversity to the ensemble. </p>
<p>12 for efficientnet-b5
16 for efficientnet-b4</p>
<p>Sorry for sharing so late. First of all, thanks to the organizer Severstal and Kaggle for a very interesting and challenging competition. I hope all the solutions can bring some help and ideas for steel defect detection. Thanks to everyone who shared their ideas in this competition. Without your help, I couldn't have got this gold. I have to thank <a href="/hengck23">@hengck23</a>! After trying a lot of model pipelines, I found his solution to be the most concise one, and my final solution is based on it. </p>
<h2>Score</h2>
<p>Winning submission:
| Public LB | Private LB |
| ------------- |:-------------:|
| 0.91863 | 0.90765 | </p>
<p>Best submission
| Public LB | Private LB |
| ------------- |:-------------:|
| 0.91824 | 0.90934 | </p>
<h2>Summary</h2>
<p><strong>Basic Model:</strong> Unet, Feature Pyramid Network (FPN)
<strong>Encoder:</strong> efficientnet-b3, efficientnet-b4, efficientnet-b5, se-resnext50
<strong>Loss:</strong> Focal Loss
<strong>Optimizer:</strong> Adam, init lr = 0.0005
<strong>Learning Rate Scheduler:</strong> ReduceLROnPlateau (factor=0.5, patience=3,cooldown=3, min_lr=1e-8)
<strong>Image Size:</strong> 256x800 for training, 256x1600 for inference
<strong>Image Augmentation:</strong> horizontal flip, vertical flip
<strong>Sampler:</strong> Weighted Sampler
<strong>Ensemble Model</strong>:
I simply average the output probabilities of the 9 models below to obtain the final mask probability, without TTA
1. FPN + efficientnet-b5 + concatenation of feature maps
2. FPN + efficientnet-b4
3. Unet + efficientnet-b4 , add pseudo labeling data in training data
4. Unet + efficientnet-b4, training with heavy image augmentation
5. Unet + efficientnet-b4 +SCSE layer
6. Unet + efficientnet-b4 +SCSE layer, add pseudo labeling data in training data
7. Unet + efficientnet-b4 + Mish layer
8. Unet + efficientnet-b3
9. Unet + se-resnext50</p>
<p><strong>Threshold</strong>
Label Thresholds: 0.7, 0.7, 0.6, 0.6
Pixel Thresholds: 0.4, 0.4, 0.4, 0.4</p>
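<p>A minimal sketch of how these thresholds could be applied to the averaged ensemble output; treating the "label threshold" as a cut on the max pixel probability (as in Heng's one-stage pipeline linked below) is an assumption:</p>
<pre><code>import numpy as np

LABEL_T = [0.7, 0.7, 0.6, 0.6]   # image-level thresholds on the max pixel probability
PIXEL_T = [0.4, 0.4, 0.4, 0.4]   # pixel-level binarization thresholds

def masks_from_ensemble(prob_maps):
    """prob_maps: list of 4xHxW probability maps, one per model (no TTA)."""
    avg = np.mean(prob_maps, axis=0)
    masks = np.zeros_like(avg, dtype=np.uint8)
    for c in range(4):
        if avg[c].max() > LABEL_T[c]:            # keep the class only if the image-level score passes
            masks[c] = (avg[c] > PIXEL_T[c]).astype(np.uint8)
    return masks
</code></pre>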
<h2>Model Pipeline</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F263239%2Ff7bf0093fee5274cf1aa07efc29b0c7a%2Fmodel.png?generation=1573784178983559&alt=media" alt=""></p>
<p>You can check this <a href="https://www.kaggle.com/c/severstal-steel-defect-detection/discussion/111457#latest-654845">Link</a> for more detail and code, thank to HengCherKeng again !</p>
<h2>The key point I think that help me win a gold</h2>
<ul>
<li>use the code pipeline <a href="https://github.com/PavelOstyakov/pipeline">https://github.com/PavelOstyakov/pipeline</a></li>
<li>make sure of the diversity of models in the ensemble</li>
<li>use a one-stage model pipeline; this saved me from training a classification model individually and tuning too many hyper-parameters. Simple is best.</li>
</ul> |
Severstal: Steel Defect Detection | 4th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi everyone, and congratulation to all the participants!</p>
<p>We all entered this competition a little less than a month ago, right after the end of the APTOS competition.
We were solving it separately and merged only about a day before the Merger deadline because of the emptying feeling of despair.
It was the first competition, where the 'fit_predict' could not get into the bronze zone. Besides, heavy encoders were virtually as successful as simple resnet34.
At first, here are the things that we tried, and that did not work at all:</p>
<p><strong>I tried:</strong>
- Adding hard negative mining by resampling dataset every epoch. Weights were chosen as the inverse of the Dice score per image. Absolutely no difference;
- Adding ArcFace to embeddings. It should help to distinguish classes better. One more nope;
- Training multi-stage network (inspired by pose detectors). It should help mimic bad markings. Performed worse than single-stage;
- Adding label smoothing. Nope again.</p>
<p><a href="/bloodaxe">@bloodaxe</a> <strong>tried:</strong>
- Adding mixup and Poisson blending to increase the number of images with defects;
- Training on double-sized crops: 256x1600 -> crop(256x512) -> resize(512x1024). Nope;
- Adding result of anisotropic segmentation to the input of the neural network, no gain;
- Training HRNetV2 with full resolution. 22 hours with 8xV100 and no better than ResNet34.</p>
<p><strong>Now, things that actually worked:</strong>
- Training two-headed NN for segmentation and classification.
- Combining heads at inference time with soft gating (mask.sigmoid() * classifier.sigmoid()); see the sketch after this list
- Focal loss / BCE + Focal loss
- Training with grayscale instead of gray-RGB
- FP16 with usage of Catalyst and Apex</p>
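<p>A toy sketch of the two-headed network with soft gating; the tiny encoder/decoder here are placeholders, not the actual densenet/efficientnet/resnet/seresnext models used:</p>
<pre><code>import torch
import torch.nn as nn

class TwoHeadNet(nn.Module):
    """Toy two-headed network: shared encoder, per-pixel decoder and per-image classifier."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.decoder = nn.Conv2d(16, n_classes, 1)
        self.classifier = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                        nn.Linear(16, n_classes))

    def forward(self, x):
        feats = self.encoder(x)
        return self.decoder(feats), self.classifier(feats)

model = TwoHeadNet()
mask_logits, cls_logits = model(torch.randn(1, 1, 256, 1600))   # grayscale input, as in the write-up
# soft gating at inference: per-pixel probabilities scaled by the image-level probability
gated = mask_logits.sigmoid() * cls_logits.sigmoid()[..., None, None]
</code></pre>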
<p>I trained only single-fold models and <a href="/bloodaxe">@bloodaxe</a> trained 5-fold CV.</p>
<p>Our individual solutions were no more than at the end of the silver zone of the Public LB, but then we teamed up... and got a bit higher in the silver zone!</p>
<p>Our best (and final) ensemble consisted of 9 models with densenet201, efficientnetb5, resnet34, seresnext50 encoders, some of them with FPN decoders, and some with UNet.
We added 3-flip TTA, averaged the logits of the models, and applied soft gating. We binarized masks with a 0.55 threshold and zeroed out masks smaller than 256 pixels.
Our total runtime for private+public is ~30 min.</p>
<p><strong>Hardware</strong>
We used servers from FastGPU.net with 8xV100 and 4xV100, that greatly reduced our experiment cycle length.</p>
<p><strong>Some speculation about our rise on private LB:</strong>
- We did not overfit to the public LB;
- We chose our models by the Dice score without empty masks. We relied on our classifiers and soft gating for empty masks.
- We had different seeds/folds/models. Our private scores before the merge were not that great (0.89-0.90). But they are much stronger when combined.</p>
<p><strong>Some moral by</strong> <a href="/bloodaxe">@bloodaxe</a>
- Keep going. Even if it seems that everything is lost and there is no hope, you still can push a little bit forward and learn something useful along the way;
- Team up! Exchanging ideas is really beneficial!</p>
<p>I have personally never used fastai but looked through its code. To me, code base of Catalyst is far more clean and easy to understand, so I picked it up.
I switched from Keras and have no regret.
Also, Catalyst has great ecosystem (mlcomp, config api, etc) but I have not tried it yet.</p>
<p>Actually, threshold for the final submit was selected like "these masks look fatty with t=0.5, let's try a bit higher" :D</p> |
Severstal: Steel Defect Detection | 5th place summary | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Apologize for the late sharing. I failed miserably in the efficiency competition. haha
My solution is briefly presented below.</p>
<p><strong>Input Preprocessing and Input Size:</strong>
Since I used Imagenet-pretrained models which require three-channel color input, each grayscale input image in this competition was replicated across three channels to form a color image. There was no other preprocessing performed.
The input image size for model training was fixed to the original size of the images: 256x1600. During the early stage of model development, I roughly tried cropping to a smaller size, e.g., 224x768 for training and the original size for testing, but the public leaderboard result with cropped input turned out to be slightly worse. Another reason for not using a cropped version for training was that there would be no benefit for inference speed.</p>
<p><strong>Image Augmentation:</strong>
Image augmentation is one of the key factors for the successful training of deep neural nets. This was particularly true for this competition because of the label imbalance. There are a lot more negative samples compared with positive ones. When using random input sampling to form training batches, the model would tend to predict the positive samples with less confidence after the first dozens of training epochs, which means we would need smaller thresholds (rather than 0.5) to segment the positive regions, and these thresholds would be extra hyperparameters we would want to avoid. This problem would gradually disappear after the model was trained long enough, but the risk of overfitting (due to overtraining) would increase, which could be solved with the help of heavy augmentation.
The augmentations I used include crop and resize back, contrast and brightness, gamma correction, blurring and sharpening, scale and shift (no rotation and shearing), and horizontal and vertical mirroring. The crop and resize back augmentation (the random_crop function in any of the train.py) first crops a random portion horizontally and vertically, for example, 90% horizontally and 80% vertically, and then resizes back to the original size to introduce some distortion effect. The random shift augmentation is a little unusual: instead of using a small portion of shift (typically at most 20%), I used a 100% random shift and made the border filling circular (cv2.BORDER_WRAP). I expected this circular shift augmentation to simulate the manufacturing pipeline.
During test time, I used either horizontal or vertical flipping or both for augmentation. Totally there were four variants including the original version. </p>
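<p>A minimal sketch of the circular shift augmentation described above; np.roll gives the same wrap-around effect as shifting with cv2.BORDER_WRAP:</p>
<pre><code>import numpy as np

def circular_shift(image, mask):
    """Random shift of up to 100% with wrap-around borders,
    meant to mimic the continuous manufacturing line."""
    h, w = image.shape[:2]
    dy, dx = np.random.randint(0, h), np.random.randint(0, w)
    image = np.roll(image, (dy, dx), axis=(0, 1))
    mask = np.roll(mask, (dy, dx), axis=(0, 1))
    return image, mask
</code></pre>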
<p><strong>Model Types:</strong>
I trained eight Unet models in total and averaged their results. All of them were based on qubvel's github: <a href="https://github.com/qubvel/segmentation_models.pytorch">https://github.com/qubvel/segmentation_models.pytorch</a>. The encoders used were inception_resnet_v2, efficientnet_b4, se_resnext50, se_resnext101, all pretrained on Imagenet. I trained two models for each type of encoder. Due to the nature of the evaluation metric, the benefit of correctly detecting a negative sample is larger than detecting and segmenting a positive sample. Therefore, removing false positives is the key to a high score. To achieve this, I used the max of all the predicted pixel probabilities as something like a classification score, performing thresholding on the (averaged across eight models) max probabilities to rule out false positives. I also noticed that a lot of participants trained classifiers to specifically perform this false positive removal; I didn't do that because of the extra cost of training the classifiers and the extra time to run them during testing. Note that to further improve the performance, we can use the top K (K&gt;1) pixel probabilities instead of a single max probability for a more reliable estimate, but I did not go that far during the competition.</p>
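<p>A minimal sketch of turning pixel probabilities into an image-level classification score; k=1 is the max-probability score used here, while k&gt;1 is the top-K refinement mentioned but not pursued:</p>
<pre><code>import torch

def image_level_scores(prob_mask, k=1):
    """prob_mask: CxHxW predicted probabilities for one image (C defect classes).

    k=1 gives the max-pixel-probability score used as the classification score;
    k>1 averages the top-k pixel probabilities for a more reliable estimate."""
    flat = prob_mask.flatten(start_dim=1)       # C x (H*W)
    return flat.topk(k, dim=1).values.mean(dim=1)
</code></pre>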
<p><strong>Training:</strong>
Every model was first trained for 40-50 epochs using the traditional BCE loss. After the model weights were trained to a good state, I reduced the learning rate and shifted to the lovasz-hinge loss (<a href="https://github.com/bermanmaxim/LovaszSoftmax">https://github.com/bermanmaxim/LovaszSoftmax</a>) due to its robustness on imbalanced datasets. The models were trained until convergence, and further tuned using pseudo-labeling on the public test data for 10 epochs. This also helped improve the performance a little.</p>
<p><strong>Testing and Postprocessing:</strong>
In the early stage when I started this competition, I noticed that my cross-validation results were much better than the public leaderboard. Later, some participants posted in the Kaggle forum that there are some duplicate or near duplicate images in the training set, possibly generated by the same manufacturing pipelines, and the labels are very likely correlated between the duplicates. On the one hand, this breaks the i.i.d. assumption of data when applying machine learning models, resulting in over-optimistic validation estimates due to the label leakage; on the other hand, we could take advantage of the duplicates by jointly predicting clusters of duplicated images instead of independently predicting the single images. However, the fact that the cross-validation performance on the training set was much better than the public test set indicated that the duplicates in the training and public test set may not be perfectly overlapped, which implied new manufacturing pipelines in the public test set. Apparently, the difference between the train and public test set could not be a coincidence, it was more like intentionally done to mimic some real world scenario that required the models to generalize to new data rather than taking advantage of the label leakage. So I asked myself if the private test set would be another new set. My guess was, yes, because if the public and private test set are very much alike the participants could still take advantage of the label leakage by probing the public leaderboard. So some method to take advantage of the duplicates in the private set was necessary. For this, I used the efficientnet_b4’s bottleneck features (because of the lower dimensions) generated from the last output of the encoder as features for each image. I then grouped the duplicated images into clusters by kmeans clustering the features of the test data. In each cluster, I averaged the max pixel probabilities mentioned earlier. So, for each of the test image, the averaged max probabilities of its cluster were compared with pre-defined thresholds ([0.6,0.6,0.5,0.5] for the four classes), and this image would be determined to be negative if smaller than the pre-defined thresholds, if otherwise, the total number of positive pixels (binarized with the threshold 0.5) would be compared with pre-defined thresholds ([547,715,1145,2959] corresponding to the 2% quantile thresholds found on the train data) to further remove false positives. Only the test images that survived these two rounds of false positive removal went to the run-length encoding process to generate final positive masks.</p> |
Severstal: Steel Defect Detection | 7-th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Our PCs with GPUs are configured as gitlab runners. We push the code to gitlab -> training starts. After finishing the training, the trained model is uploaded as an artifact.</p>
<p>yes</p>
<p>cross entropy loss. only on the training set (image with masks).</p>
<p>Our team member <a href="https://www.kaggle.com/xuyuan">xuyuan</a> and I had already participated in a segmentation challenge (TGS salt) and we thought this time would be a little easier, so we joined this competition really late, 4-5 weeks ago. But these few weeks were not enough to try out many things. We didn't even have time to train a classifier properly, so we stuck to a pure segmentation ensemble.</p>
<p>Our final ensemble contained 3 efficientnet unet models:
* 2 folds of efficientnet-b7 unet, trained on 256x384 with mixup and label smoothing. The best public score was 0.91288. We don't know the private score because only the csv file was submitted. In the final ensemble, these b7 models were fine-tuned on the full resolution.
* 1 fold of efficientnet-b4 unet, trained on 256x384 and fine-tuned on the full resolution.
* 2 folds of efficientnet-b0 unet, trained on 256x384 and fine-tuned on the full resolution. The best one got a public score of 0.91284 and a private score of 0.89719.</p>
<p>For the ensemble we tried many thresholds. For the final submission we used [0.8, 0.99, 0.75, 0.75], which produced 87, 0, 602, 111 class images. Another submission which used [0.8, 0.99, 0.78, 0.75] produced 87, 0, 577, 111 class images. That one would have had a private score of 0.90819 and a public score of 0.92016, which would have been enough for second place. So predicting class 3 less often seems to be a trick.</p>
<p>The min area was 100 and pixel threshold 0.3.</p>
<p>What worked for us so far:
* AdamW with the Noam scheduler
* gitlab pipeline to manage training experiments
* mixup and label smoothing
* fine tuning on the full resolution
* random sampler
* cross entropy loss</p>
<p>What didn't work:
* SGD
* SWA worked really well for the salt competition, but for this competition it didn't work at all
* pseudo labeling (trained on last 2, 3 days)
* training a classifier
* balanced sampler</p> |
Severstal: Steel Defect Detection | 8th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://www.kaggle.com/c/siim-acr-pneumothorax-segmentation/discussion/107824#latest-650999">https://www.kaggle.com/c/siim-acr-pneumothorax-segmentation/discussion/107824#latest-650999</a> - Have a look over here for the triple threshold. Unfreezing the BN layers gave us a fair boost in convergence. We thought that unfreezing the BN layers is better than freezing the complete encoder.</p>
<p>Yes. No Classifier 😊 </p>
<p>Congratatulations to the winners, and thanks to kaggle, competition sponsor and kernel contributors who shared their insights, it helped us a lot.</p>
<p>Models
We ensembled FPN-B0,B1,B2,B3,B4,Seresnext50. UNET-Seresnext50,Resnet34. Custom Attention Unet-B0,B1</p>
<p>Augmentations
Only Flipping and Random Brightness, Random Gamma, Random Contrast</p>
<p>Training
Progressive learning was our primary approach, starting from 256x256 up to the complete size.
The 256x256 models were trained with the encoder frozen for faster convergence - only the batchnorm layers of the encoder were unfrozen. In the end, some of the models were over-fitted on the complete train data (stopped early). </p>
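<p>A minimal sketch of freezing the encoder while keeping its BatchNorm layers trainable (generic PyTorch, not the authors' exact code):</p>
<pre><code>import torch.nn as nn

BN_TYPES = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)

def freeze_encoder_except_bn(encoder: nn.Module):
    """Freeze all encoder parameters except those belonging to BatchNorm layers."""
    for module in encoder.modules():
        trainable = isinstance(module, BN_TYPES)
        for p in module.parameters(recurse=False):
            p.requires_grad = trainable
</code></pre>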
<p>Post - Training
Used Triple Thresholding ( Magic element) and only flipping as TTA.</p> |
Severstal: Steel Defect Detection | Efficiency Prize Solution Summary | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Efficiency prize solution is EfficientNet. It is a single EfficientNet-B0 with FPN architecture.
I am using <a href="https://github.com/qubvel/segmentation_models.pytorch">https://github.com/qubvel/segmentation_models.pytorch</a> implementation. Thanks to @pavel92 for the great library.</p>
<p>The main points are:</p>
<ul>
<li><p>additional head having encoder features as input is used for classification. It predicts probabilities for each of the four defect classes. No separate classification network is used.</p></li>
<li><p>training was performed on a single 80/20 split in three stages:</p>
<ol><li><p>train with a loss combining a classification binary cross-entropy loss and a segmentation cross-entropy loss</p></li>
<li><p>added test images and predicted masks from my best submission to the train data. Added Dice loss as a component to the loss and continue training on the checkpoint from stage 1</p></li>
<li><p>added validation images and masks to the train data and continue training for a few more epochs. Again the loss consists of 3 components: classification BCE loss, segmentation CE loss and Dice loss</p></li></ol></li>
<li><p>for the final model averaged the weights of 5 checkpoints from stage 3</p></li>
<li><p>training on 256x512 crops</p></li>
<li><p>no test-time augmentation is applied</p></li>
<li><p>if the classification head predicts no defect then no decoding is performed. No threshold tuning for classification has been done – just using 0.5</p></li>
<li><p>TorchScript tracing is applied</p></li>
</ul>
<p>Speed improvements are coming from not using a separate classifier network and not running the decoder when the classification head predicts no defect.</p>
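<p>A minimal sketch of this gated inference; the attribute names below are hypothetical stand-ins for the model's encoder, classification head and decoder, not the actual library API:</p>
<pre><code>import torch

@torch.no_grad()
def predict(model, image, threshold=0.5):
    """Run the encoder and the classification head first; only run the decoder
    when at least one defect class is predicted."""
    features = model.encoder(image)
    cls_probs = torch.sigmoid(model.classification_head(features))
    if (cls_probs < threshold).all():
        return None, cls_probs                # no defect predicted: skip decoding entirely
    masks = torch.sigmoid(model.decode(features))
    # zero out any class whose probability is below the threshold
    return masks * (cls_probs > threshold).float()[..., None, None], cls_probs
</code></pre>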
<p>Many thanks to @lightforever for the Catalyst framework - it is awesome and helped me to experiment quickly in this competition.</p> |
Severstal: Steel Defect Detection | First silver(3x kaggle expert) 36th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>thank you <a href="/manojprabhaakr">@manojprabhaakr</a> </p>
<p>congratulations to you also <a href="/hamishdickson">@hamishdickson</a> </p>
<p>with my first silver today i became "Kaggle Competition Expert".
congratulations to all the participants of this competition,
First of all, thanks to <a href="/hengck23">@hengck23</a> for sharing his insightful ideas And thanks to <a href="/pavel92">@pavel92</a> for his convenient segmentation library.
i am thankful to all my talented team mates <a href="/pukkinming">@pukkinming</a> <a href="/aleksandradeis">@aleksandradeis</a> <a href="/frtgnn">@frtgnn</a> <a href="/freestander">@freestander</a> for their cooperation.
but i am very sad after seeing heng couldn't get his solo gold medal,i learnt plenty
from his work and we implemented a lot of his ideas in this competition,his discussion posts helped us a lot.
can't wait to see him becoming kaggle competition grand-master,he richly deserve that.
he is my inspiration and motivation for open source community.
he taught me one thing "if you share your work,your wisdom won't be reduced in fact your wisdom will be increased exponentially".kaggle is alive for people like them</p>
<h1>Our solution :</h1>
<p>Ensemble of these algorithms :
unet_resnet34,
unet_se_resnext50_32x4d,
unet_mobilenet2,
fpnb0,
fpnb1,
fpnb2,
fpn_se_resnext50_32x4d,
fpn_resnet34_v2</p>
<p>with popular kaggler heng's resnet34 classifier </p>
<p>github : <a href="https://github.com/mobassir94/Severstal-Steel-Defect-Detection-Kaggle">https://github.com/mobassir94/Severstal-Steel-Defect-Detection-Kaggle</a></p>
<p>final inference kernel : (private now)</p>
<p>thanks to rishab, heng, bibek, abhishek and a lot of other kagglers who provided code and ideas publicly; we used their resources in this competition</p>
<p>same here mate <a href="/frtgnn">@frtgnn</a> :)</p>
<p>My pleasure mate</p>
<p>Thank you mate</p>
<p>Thank you <a href="/dataraj">@dataraj</a> </p>
<p>thanks for the wish mate <a href="/muhakabartay">@muhakabartay</a>
i also wish you the same</p>
<p>You are welcome <a href="/phunghieu">@phunghieu</a> </p>
<p>Thank you</p>
<p>glad to be of help mate :) </p>
<p>thank you sir :) <a href="/andrew">@andrew</a></p>
<p>thank you <a href="/cdeotte">@cdeotte</a>
we regularly learn plenty from kagglers like you,raddar,heng,andrew etc etc
kaggle is great place for learning data science as we have lots of people like you here. god will bless you all :)</p>
<p>you did way better than me with your EDA kernel's mate, those kernels are always helpful,thank you a ton and congratulations again</p>
<p>thank you for your well wishes and prayers :)</p> |
Severstal: Steel Defect Detection | Lessons Learned - The Good and the Bad! | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Oh, I didn't know that Kaggle had a standard image uploader. (Some public forums do not). I will try it in the future.</p>
<p><a href="/raghaw">@raghaw</a> Wow, that's unfortunate losing Silver like that Raghawendra. Sorry to hear that. I remember that poisonous kernel. </p>
<p>I think the public kernel I used here was trustworthy. The problem is that I can't use CV to properly combine it with my model. So instead I just maximize public LB. This Steel Competition is a perfect example of why you can't ensemble models by maximizing public LB.</p>
<p>Thanks. My original model was only segmentation and post-classification filter and it maintained 2nd or 3rd place on public leaderboard for the first month. So I decided to keep this idea even though there are more efficient designs.</p>
<p>Note that my final model uses these scan classification predictions for both pre-classification and post-classification. In pre-classification, I identify any image that has less than 0.6 in <code>np.max(scan_lines[k,:,:])</code> for all defect types <code>k</code>, and then avoid segmenting that image. After segmenting the other images, I use the <code>scan_lines</code> that I already calculated as a filter. In pre-classification, I also add one more classification model trained with efficientnetb4 on full images to help. All this is my pre- and post-classification, and it's probably twice as much work as others do, but I believe it does a good job of removing false positives, as shown by a final private LB score of 0.905.</p>
<p>Thanks for letting me know. I think it is because I host my images at a "game" website and many company's internet blocks the site. I will try to start using the Kaggle image uploader.</p>
<p>Congratulations to all the winners. Thank you Kaggle and Severstal for a fun competition. </p>
<h1>Lessons Learned</h1>
<p>I learned two important lessons this competition. One was frustrating, one was exciting.
* Always submit one final submission that does not overfit public LB
* You can train segmentation on crops and predict on full images</p>
<h1>Don't Overfit Public LB - (The Bad Lesson)</h1>
<p>Like many others, I believed that the private test images would be more similar to the public test images than similar to the training images. Therefore I knowingly used bad practices to increase my public LB. In hindsight, one should always make at least one final submission that is not overfit to the public LB (even if you have reason to overfit public LB). One of my models scores public LB 0.913 and private LB 0.905. The other scores public LB 0.915 and private LB 0.900. I chose two versions of the second one :-(</p>
<h1>Train on Crop, Predict on Full - (The Good Lesson)</h1>
<p>This was my first time making segmentation models. Thank you everyone that shared their knowledge. The most exciting thing that I learned is that you can train on crops and predict on full. </p>
<h2>Train on Crop</h2>
<p>My model was trained on 256x256 crops where 20% of crops have no defects, 20% of crops have defect one, 20% of crops have defect two, 20% of crops have defect three, and 20% of crops have defect four. My custom dataloader outputs an equal ratio of defect crops (by oversampling from training images): <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2F5b7b45cb70b9b478e4c8981c82e45d1d%2Ftrain102519.png?generation=1572523450102792&alt=media" alt=""></p>
<p>Then I built the following Keras model:</p>
<pre><code>from segmentation_models import FPN
from segmentation_models.losses import bce_jaccard_loss
# dice_coef is a custom Dice-coefficient metric defined elsewhere in the notebook
model = FPN('inceptionv3', input_shape=(256, 256, 3), classes=4, activation='sigmoid')
model.compile(optimizer='adam', loss=bce_jaccard_loss, metrics=[dice_coef])
</code></pre>
<p>This model achieved validation of <code>Dice = 0.73</code>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2F6f3a4f20292dbfae351c63d8325d9b78%2Ftrain2102519.png?generation=1572523497768713&alt=media" alt=""></p>
<p>Then you can use the trained model to predict crops
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2Fcdfcc3dcca7ac76331db167c46f2bf89%2Fcrops102519.png?generation=1572523540768947&alt=media" alt=""></p>
<h1>The Magic - Predict on Full</h1>
<p>Now here's the magic. You can transfer these learned weights to a model that predicts on full images as follows: </p>
<pre><code>model2 = FPN('inceptionv3', input_shape=(256, 1600, 3), classes=4, activation='sigmoid')
model2.compile(optimizer='adam', loss=bce_jaccard_loss, metrics=[dice_coef])
model2.set_weights(model.get_weights())
</code></pre>
<p>Now you can predict on full images. And your trained model has seen lots of defect 1, 2, 3, and 4 types so it has no problem segmenting all types: </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2F2394a213f911f70ce695a21ff0fb04e7%2Ffull102519.png?generation=1572523584795279&alt=media" alt=""></p>
<h1>Examples</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2F616dc87c1868b955e76d92b21ab92dbe%2Fd1102519.png?generation=1572523664901311&alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2Fee390d0b1bf3bacf0121c497a065dbdf%2Fd2102519.png?generation=1572523707429032&alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2F9755ee4c3529fb05b042ac7d4071bfd9%2Fd4102519.png?generation=1572523751935264&alt=media" alt=""></p>
<h1>Pre-Classification Stage</h1>
<p>To achieve public LB 0.913 and private LB 0.905 with this model, I added a standard pre-classification stage before segmentation to remove false positives. And I added an innovative post-classification described below.</p>
<h1>Post-Classification Stage</h1>
<p>Here's a cool trick. I trained a classifier on 256x256 crops where the crops were just random crops from images. (The proportions were not altered to 20%, 20%, 20%, 20%, 20%). Then after predicting a mask on a test image, I would "scan" each image by moving the crop window from left to right and computing a moving average of the classification prediction. In any area where the probability is less than 0.6, I would remove the mask within that 256x256!</p>
<h1>Only Predict Private Test Images</h1>
<p>To maximize the one hour inference time (and ensemble more models), you can skip predicting the public test images and only predict the private test images. You will notice that the LB score below doesn't contain a public LB score, it only contains a private LB score. Here is the code to get a list of private test images:</p>
<pre><code>all = pd.read_csv('../input/severstal-steel-defect-detection/sample_submission.csv')
all = all['ImageId_ClassId'].map(lambda x: x.split('.')[0]).values[::4]
public = pd.read_csv('../input/dataset/submission.csv')
public = public['ImageId_ClassId'].map(lambda x: x.split('.')[0]).values[::4]
private = np.setdiff1d(all,public)
</code></pre>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2F427300bf00ec486c535c4e262548ab38%2Fprivate2102419.png?generation=1572523789973493&alt=media" alt=""></p>
<h1>Model Details</h1>
<p>Pre-Classification - 3 models
* xception 256x256 crop (then scan image)
* xception 256x1600
* efficientNetb4 256x400 crop (then scan image)</p>
<p>Segmentation - 3 models
* Unet resnet34 256x256 crop
* FPN inceptionv3 256x256 crop
* FPN resnet34 224x384 crops</p>
<p>Post-Classification - 2 models
* xception 256x256 crop (then scan image)
* efficientNetb4 256x400 crop (then scan image)</p>
<p>Thank you student. I'm tempted to jump into Cloud Comp. But I've done a few competitions consecutively and should probably take a break first.</p>
<p>You just need a dataloader that moves the crop window across images. Here is an example using 256x400 crops with moving <code>step_size=200</code>:</p>
<pre><code>valid_batches = DataGenerator(images, shape=(256,400), step_size=200, crops=7)
classification_preds = classification_model.predict_generator(valid_batches)
</code></pre>
<p>This datagenerator gives out 7 consecutive crops for each image where the first crop has top left corner <code>(0,0)</code>, the second has <code>(0,200)</code>, the third has <code>(0,400)</code> etc. If <code>images</code> is a list of 1801 image names, then <code>classification_preds</code> is an array of <code>12607 = 1801x7</code> rows by 5 cols (four defect plus non-defect) where every 7 rows are the moving crop of <code>256x400</code> taking step size 200 across the 1600 width image. Then you make <code>scan_lines</code>:</p>
<pre><code>scan_lines = process_classification(classification_preds)
</code></pre>
<p>The variable <code>scan_lines</code> is an array of 1801 rows by 1600 cols by 5 channels where you combined the 7 crop predctions into a single row of length 1600 which is image pixel width. Finally you just multiply your segmentation predictions by the scan_lines: </p>
<pre><code>segment_preds = segment_model.predict_generator(valid_batches)
segment_preds = process_segment(segment_preds, scan_lines)
</code></pre>
<p>The basic operation inside <code>process_segment()</code> is to multiply <code>segment_preds[k,:,j] * (scan_lines[k,:,j]&gt;0.6).astype(int)</code> where 0.6 is your threshold, <code>k</code> is the image number and <code>j</code> is the defect type. One of these days, I'll clean and post my notebook.</p>
<p>Wow, that's a great trick. Then I can train with different crop sizes during one epoch. And I can validate on full size images after each epoch.</p>
<p>Yes exactly. I control the overlap with the <code>step_size</code>. For my 256x256 crops (Xception network), I use a step size of 103. This uses 14 crops and most of the image will be covered by an average of 3 overlapping crops. I also use 256x400 crops (Efficientnetb4 Network) with step size 200. This uses 7 crops and most of the image will be covered by 2 overlapping crops. I then average the <code>scan_lines</code> of my 256x256 crops with my <code>scan_lines</code> of the 256x400 crops.</p>
<p>In post-classification, I multiply these <code>scan_lines</code> by the segmentation mask. I also use <code>np.max()</code> of the <code>scan_lines</code> for decisions in my pre-classification.</p>
<p>.1. A typical classification prediction is 5 numbers; i.e prob of no defect, prob of defect 1, prob of defect 2, etc. Row one of <code>classification_preds</code> is crop 1 on image 1. Row two is crop 2 on image 1. ... Row seven is crop 7 on image 1. Row eight is crop 1 on image 2, row nine is crop 2 on image 2, etc etc. Each row has 5 numbers corresponding to the five probabilities. And there are 7 times 1801 equals 12607 rows if there are 1801 images.</p>
<p>Next <code>scan_lines</code> takes the first 7 rows of predictions (where each row has 5 columns) and converts it into 1 row with 1600 columns and 5 channels. Hence (for the first image) a 2D array of 7x5 is converted into a 3D array of 1x1600x5. An image is 256x1600 pixels. In <code>scan_line</code> which is 1x1600x5, the number at [0,300,2] is the probability that the pixels with y coordinates 300 have defect 2.</p>
<p>.2. The moving average is computed as I describe in my answer 1 above.</p>
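<p>In code, answers 1 and 2 above amount to something like the following rough NumPy sketch. The function names and the 7-crops-with-step-200 layout follow the description in this thread; the channel ordering (four defect classes first, "no defect" last) is an assumption, not the actual notebook:</p>
<pre><code>import numpy as np

def process_classification(classification_preds, n_images=1801, n_crops=7,
                           crop_width=400, step_size=200, image_width=1600):
    # classification_preds: (n_images * n_crops, 5) crop-level probabilities
    # returns scan_lines: (n_images, image_width, 5), averaging overlapping crops
    preds = classification_preds.reshape(n_images, n_crops, 5)
    scan_lines = np.zeros((n_images, image_width, 5))
    counts = np.zeros((n_images, image_width, 1))
    for c in range(n_crops):
        left = c * step_size
        right = min(left + crop_width, image_width)
        scan_lines[:, left:right, :] += preds[:, c, None, :]
        counts[:, left:right, :] += 1
    return scan_lines / counts  # mean over the crops covering each column

def process_segment(segment_preds, scan_lines, threshold=0.6):
    # segment_preds: (n_images, 256, 1600, 4) pixel probabilities for defects 1-4
    # assumes scan_lines channels 0-3 are defects 1-4 and channel 4 is "no defect"
    for j in range(4):
        gate = (scan_lines[:, :, j] > threshold).astype(segment_preds.dtype)
        segment_preds[:, :, :, j] *= gate[:, None, :]
    return segment_preds
</code></pre>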
<p>.3. My segmentation dataloader isn't like a usual dataloader (a usual loader takes a list of 12000 images and shows each image to your network 1 by 1). Instead my segmentation dataloader knows which images have defect 1, which images have defect 2, etc. When the dataloader needs to output a crop it first picks a random number from 1 to 5. If it picks 3 then it randomly picks an image with defect 3. (Note that it may pick an image that it picked previously during this epoch). Next it finds a crop on that image that contains all or part of a defect 3. Each "epoch" it repeats this procedure 12000 times.</p>
<p>My classification dataloader is more like a typical dataloader. It goes through a list of images 1 by 1 and for each image it just randomly picks a crop from that image. (It avoids any crop that is more than 95% all black).</p>
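<p>A rough sketch of what the segmentation half of such a dataloader could look like as a Keras <code>Sequence</code>. The <code>load_image</code>/<code>load_mask</code> helpers and the <code>defect_images</code>/<code>empty_images</code> indices are placeholders, not the actual code:</p>
<pre><code>import numpy as np
from tensorflow.keras.utils import Sequence

class BalancedCropGenerator(Sequence):
    """Yields 256x256 crops: 20% from no-defect images, 20% for each defect type 1-4,
    with defect crops centred on part of a defect."""
    def __init__(self, defect_images, empty_images, load_image, load_mask,
                 batch_size=16, crops_per_epoch=12000, crop=256):
        self.defect_images = defect_images    # dict: defect class -> list of image ids
        self.empty_images = empty_images      # list of image ids with no defects
        self.load_image, self.load_mask = load_image, load_mask
        self.batch_size, self.crop = batch_size, crop
        self.crops_per_epoch = crops_per_epoch

    def __len__(self):
        return self.crops_per_epoch // self.batch_size

    def _random_crop(self, cls):
        if cls == 0:                                      # "no defect" branch
            img_id = np.random.choice(self.empty_images)
            x = np.random.randint(0, 1600 - self.crop)
        else:                                             # pick an image containing this defect
            img_id = np.random.choice(self.defect_images[cls])
            mask = self.load_mask(img_id, cls)            # (256, 1600) binary mask
            cols = np.where(mask.any(axis=0))[0]
            center = np.random.choice(cols)               # crop around part of the defect
            x = int(np.clip(center - self.crop // 2, 0, 1600 - self.crop))
        image = self.load_image(img_id)[:, x:x + self.crop]
        masks = np.stack([self.load_mask(img_id, c)[:, x:x + self.crop]
                          for c in range(1, 5)], axis=-1)
        return image, masks

    def __getitem__(self, idx):
        xs, ys = [], []
        for _ in range(self.batch_size):
            cls = np.random.randint(0, 5)                 # 0 = no defect, 1-4 = defect type
            x, y = self._random_crop(cls)
            xs.append(x)
            ys.append(y)
        return np.array(xs), np.array(ys)
</code></pre>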
<p>Thanks Karl. You can use my images. </p>
<p>I just read your writeup. I'm sorry to hear that you had many private LB with a winning score and didn't choose them. I am surprised by how much private LB can drop with small changes that improve public LB.</p>
<p>Using just my models, my public LB is 0.913 and private LB is 0.905. When I ensemble my model with the public "mlcomp+catalyst infer" the public LB raises to 0.915 but the private LB drops to 0.900. That's crazy! I would never think that such a small increase in public could result in such a large decrease in private.</p>
<p>Ah, yes, I wanted to do that but couldn't figure out how to do it in Keras using <a href="/pavel92">@pavel92</a> segmentation models GitHub repository.</p>
<p>Do you mean, change the input shape to full images and train on full images (with over sampling)?</p>
<p>Even if every full image had a defect (over sampling), your model would not "see" as many defects (compared to every crop having a defect). Here's why. The first layer is a convolutional layer. If it is 5x5 then we are computing only 25 weights. That 5x5 convolution moves over the entire training image or crop. Roughly speaking, if the full image is 256 x 1600 then the 5x5 convolution "sees" roughly <code>(256/5)*(1600/5) ≈ 51*320 = 16320</code> little windows. On a full image most of those "little windows" have no defect. On a crop, most of those "little windows" have a defect. </p>
<p>So using crops increases how many times your 5x5 convolution "sees" defects during training. Thus it finds better values for those 25 weights.</p>
<p>When I get time, I can clean my code and post my dataloader.</p>
<p>Oh. That is the metric used by Keras while my net is training. After my net is finished training, I would predict masks for every image in the hold out fold. I would then compute the competition metric exactly as it is written in the competition rules section.</p>
<p>In each epoch the data loader provides 12000 crops but it doesn't care which images it uses. The data loader has a list of all images with defect 2 and all images with defect 3, etc. When the dataloader needs to provide a crop, the dataloader picks a random number between 1 and 5. If it picks 2, then it randomly chooses a defect 2 image (it may be the same one that it previously chose this epoch) and then it makes a random crop that contains all or part of that defect 2. (If it picks 5 then it picks an image without defect and makes a crop without defect).</p>
<p>No. 20% do not have a defect and 80% have a defect (equal proportion 1, 2, 3, or 4).</p>
<p>Yes, I know that it is wrong but it is just a metric (not a loss). It's purpose is to give an indication of model quality for tuning hyperparameters. (The loss function is what trains a model and alters weights).</p>
<p>Great question. In my segmentation model dataloader 80% of crops must have a defect and therefore cannot be all black. In the other 20% (which must not have a defect), I do not remove all black crops when they occur. My classification model dataloader (described in "post-classification" above) removes any crop that has more than 95% black. </p>
<p>Pre-Classification - 3 models
* xception 256x256 crop (then scan image)
* xception 256x1600
* efficientNetb4 256x400 crop (then scan image)</p>
<p>Segmentation - 3 models
* Unet resnet34 256x256 crop
* FPN inceptionv3 256x256 crop
* FPN resnet34 224x384 crops</p>
<p>Post-Classification - 2 models
* xception 256x256 crop (then scan image)
* efficientNetb4 256x400 crop (then scan image)</p>
<p>It's the one posted in public kernels. Note that I am validating on 256x256 crops which are equal proportions of defect 1, 2, 3, 4, no defect.</p>
<pre><code># https://www.kaggle.com/xhlulu/severstal-simple-keras-u-net-boilerplate
from keras import backend as K
def dice_coef(y_true, y_pred, smooth=1):
y_true_f = K.flatten(y_true)
y_pred_f = K.flatten(y_pred)
intersection = K.sum(y_true_f * y_pred_f)
return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
</code></pre>
<p>Thanks</p>
<p>I did not fine-tune, I just used <code>model2</code> as is.</p>
<p>No, I'm not</p>
<p>Here is the code. The dataframe <code>sub</code> has both the predicted masks and true masks:</p>
<pre><code>sub['dice'] = sub.apply(lambda x: dice_coef2( rle2mask(x['true']),
rle2mask(x['predicted']) ),axis=1)
print(sub['dice'].mean())
</code></pre>
<p>Here is Kaggle's competition metric:</p>
<pre><code>def dice_coef2(y_true, y_pred):
y_true_f = y_true.flatten()
y_pred_f = y_pred.flatten()
union = np.sum(y_true_f) + np.sum(y_pred_f)
if union==0: return 1
intersection = np.sum(y_true_f * y_pred_f)
return 2. * intersection / union
</code></pre>
<p>Yes. Good point. I added that to my original post above to make it more clear.</p>
<p>Thank you. I fell into the temptation of overfitting too. For the first month of this competition, my model was 2nd place (and/or 3rd) at LB 0.905 and Phalanx was 1st place at LB 0.910. Then I went away for a month. When I came back, to my surprise three hundred people had LB 0.906 or higher! I got frustrated and just focused on climbing back up the public LB instead of building a correct CV based model.</p>
<p>My dataloader returns 20% crops with <strong>no</strong> defect (and then 20% with defect 1, 2, 3, 4 each). Therefore my segmentation model learns about textures. Furthermore, my pre-classification and post-classification remove false positives created by textures too. </p>
<p>What a wonderful gift. Nice job. If I stayed calm and submitted my best CV, I too would have been rewarded here.</p>
<p>I kept looking at my rank in the public LB and was tweaking my model to get into Silver. But that isn't good. I need to just build the best model possible using my local CV and not worry about public LB.</p>
<p>I changed the images in this post to use Kaggle's uploader. You should be able to see them now. Thanks again for this tip. It works well.</p>
<p><a href="/returnofsputnik">@returnofsputnik</a> Jacek explained it correctly. I posted a starter kernel <a href="https://www.kaggle.com/cdeotte/train-with-crops-cv-0-60">here</a>. If you create your model using Qubvel's Segmentation models and <code>(None,None,3)</code> </p>
<pre><code>from segmentation_models import FPN
model = FPN('efficientnetb2', input_shape=(None, None, 3), classes=4, activation='sigmoid')
</code></pre>
<p>Then you don't need to transfer weights. And you can train one epoch with <code>128x128</code> crops then the next epoch with <code>256x256</code> crops, then <code>256x512</code>. And finally predict with <code>256x1600</code>. All using the same network. (Dimensions must be divisible by 32 to use this convenient method). It's pretty cool.</p>
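<p>For example (a sketch; <code>make_crop_generator</code> and <code>full_images_256x1600</code> are placeholders for your own crop generator and full-size test batch):</p>
<pre><code># one epoch per crop size, same (None, None, 3) model throughout
for size in [(128, 128), (256, 256), (256, 512)]:
    gen = make_crop_generator(crop_height=size[0], crop_width=size[1])  # assumed helper
    model.fit(gen, epochs=1, verbose=1)

# then predict on full resolution (dimensions divisible by 32)
preds = model.predict(full_images_256x1600, batch_size=4)
</code></pre>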
<p>Trust my CV. My best CV would have achieved gold. As a new Kaggler it was hard to trust my CV and select a model with lower LB, but now i do it more often and it pays off.</p>
<p><a href="https://www.kaggle.com/marlenepinzi" target="_blank">@marlenepinzi</a> The trick in this competition is to over sample defects during training. So you need to write a custom dataloader that takes random crops from the train data, make sure that each crop contains a type 1 defect in <strong>every crop</strong>. Therefore you will only train with images that have type 1 defects (and you will crop the image to focus on the defect). This way, your models sees lots of type 1 defects.</p>
<p>(In the competition, 25% of my crops had type 1, 25% type 2, 25% type 3, 25% type 4, but you just want type 1)</p>
<p>Now when you infer the test dataset, your model will predict type 1 defects on every image (because it was trained with over sample), so you need to build an additional classification model that says whether an image has a type 1 defect or not.</p>
<p>Train your classification model without upsample. Train your classification on all train images (both the ones with type 1 defect and without).</p> |
Severstal: Steel Defect Detection | Private LB 0.91 solution + code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>It looks like either the environment isn't activated, or <code>sever</code> hasn't been installed into it.</p>
<p>You can check what's installed in your active env using <code>conda list</code>. You can specifically check <code>sever</code> is installed using <code>sever --help</code>. </p>
<p>Anaconda version shouldn't matter but it's probably something to do with not activating the environment correctly.</p>
<p>Hope that helps!</p>
<p>Edit: looking at your screenshot I think the * next to base means the sever env is not active. Try
<code>
conda init
source ~/.bashrc
conda activate sever
sever --help
</code></p>
<p>Honestly I didn't do a lot of testing around the individual augmentations. I went from using only random crop + flips to the configuration above and found it worked better. It might have been that the mask-selection cropping and CLAHE helped while blur/sharpen hurt - I'm not sure. I did try using CLAHE TTA which hurt performance.</p>
<p>Since TTA is in post-processing I was using the public LB to see whether it helped or not - so it's possible it was actually helping more than it appeared. Using a classifier showed almost zero improvement on public LB, but huge private gain - so who knows?</p>
<p>Hi all,</p>
<p>Congrats to all the winners! I'd like to thank <a href="/hengck23">@hengck23</a> for all his public discussion and code - it was really useful for me to validate my EfficientNet + Unet implementation. Also to <a href="/lightforever">@lightforever</a> for his kernel - I got a lot of ideas from looking at his training configuration.</p>
<p>Write-up for my solution below.</p>
<p><a href="https://github.com/khornlund/severstal-steel-defect-detection">Github</a>
<a href="https://www.kaggle.com/khornlund/sever-ensemble-classification?scriptVersionId=22207424">0.91 Scoring Kernel</a></p>
<h1>Results</h1>
<p>Winning submission:</p>
<p>| Public LB | Private LB |
| --- | --- |
| 0.92124 | <strong>0.90883</strong> |</p>
<p>My best submission:</p>
<p>| Public LB | Private LB |
| --- | --- |
| 0.91817 | <strong>0.91023</strong> |</p>
<p>My chosen submission:</p>
<p>| Public LB | Private LB |
| --- | --- |
| 0.91844 | 0.90274 |</p>
<p>I chose my submission according to public LB score, and ended up rank 55/2436. Silly me!</p>
<h1>Models</h1>
<p>I used <a href="https://github.com/qubvel/segmentation_models.pytorch">segmentation_models.pytorch</a> (SMP)
as a framework for all of my models. It's a really nice package and easy to extend, so I implemented
a few of my own encoder and decoder modules.</p>
<p>I used an ensemble of models for my submissions, covered below.</p>
<h2>Encoders</h2>
<p>I ported <a href="https://github.com/lukemelas/EfficientNet-PyTorch">EfficientNet</a> to the above framework
and had great results. I was hoping this would be a competitive advantage, but during the
competition someone added an EfficientNet encoder to SMP and many others started using it. I used
the <code>b5</code> model for most of the competition, and found the smaller models didn't work as well.</p>
<p>I also ported <code>InceptionV4</code> late in the competition and had pretty good results.</p>
<p>I ported a few others that didn't yield good results:</p>
<ul>
<li><a href="https://github.com/gasvn/Res2Net">Res2Net</a></li>
<li><a href="https://github.com/wuhuikai/FastFCN/blob/master/encoding/dilated/resnet.py">Dilated ResNet</a></li>
</ul>
<p>I had good results using <code>se_resnext50_32x4d</code> too. I found that because it didn't consume as much
memory as the <code>efficientnet-b5</code>, I could use larger batch and image sizes which led to
improvements.</p>
<h2>Decoders</h2>
<p>I used Unet + FPN from SMP. I added Dropout to the Unet implementation.</p>
<p>I implemented <a href="https://github.com/bigmb/Unet-Segmentation-Pytorch-Nest-of-Unets/blob/master/Models.py">Nested Unet</a>
such that it could use pretrained encoders, but it didn't yield good results.</p>
<h2>Other</h2>
<p>I ported <a href="https://github.com/pytorch/vision/blob/master/torchvision/models/segmentation/deeplabv3.py">DeepLabV3</a>
to SMP but didn't get good results.</p>
<h2>Scores</h2>
<p>These are the highest (private) scoring single models of each architecture.</p>
<p>| Encoder | Decoder | Public LB | Private LB |
| --- | --- | --- | --- |
| efficientnet-b5 | FPN | 0.91631 | 0.90110 |
| efficientnet-b5 | Unet | 0.91665 | 0.89769 |
| se_resnext50_32x4d | FPN | 0.91744 | 0.90038 |
| se_resnext50_32x4d | Unet | 0.91685 | 0.89647 |
| inceptionv4 | FPN | 0.91667 | 0.89149 |</p>
<h1>Training</h1>
<h2>GPU</h2>
<p>Early on I used a 2080Ti at home. For the final stretch I rented some Tesla V100's in the cloud.
I found being able to increase the batch size using the V100 (16GB) gave a significant improvement
over the 2080Ti (11GB).</p>
<h2>Loss</h2>
<p>I used <code>(0.6 * BCE) + (0.4 * (1 - Dice))</code>.</p>
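<p>A minimal PyTorch sketch of that combination (assuming sigmoid outputs and a per-sample, per-class soft Dice; not necessarily the exact implementation used):</p>
<pre><code>import torch
import torch.nn.functional as F

def bce_dice_loss(logits, targets, bce_weight=0.6, smooth=1.0):
    # logits/targets: (N, 4, H, W); Dice is computed on sigmoid probabilities
    bce = F.binary_cross_entropy_with_logits(logits, targets)
    probs = torch.sigmoid(logits)
    intersection = (probs * targets).sum(dim=(2, 3))
    union = probs.sum(dim=(2, 3)) + targets.sum(dim=(2, 3))
    dice = ((2.0 * intersection + smooth) / (union + smooth)).mean()
    return bce_weight * bce + (1.0 - bce_weight) * (1.0 - dice)
</code></pre>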
<h2>Targets</h2>
<p>I treated this as 4-class classification (no background class). If a pixel was predicted to have
two kinds of defects, the lower confidence predictions were removed in post-processing.</p>
<h2>Optimizer</h2>
<pre><code>- RAdam
- Encoder
- learning rate 7e-5
- weight decay: 3e-5 (not applied to bias)
- Decoders
- learning rate 3e-3
- weight decay: 3e-4 (not applied to bias)
</code></pre>
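<p>In PyTorch this maps onto separate parameter groups, with bias parameters split out so weight decay is not applied to them (sketch; <code>torch.optim.RAdam</code> is used here for brevity, a standalone RAdam implementation works the same way, and the SMP segmentation head is omitted and could be grouped with the decoder):</p>
<pre><code>import torch

def param_groups(module, lr, wd):
    decay, no_decay = [], []
    for name, p in module.named_parameters():
        if not p.requires_grad:
            continue
        (no_decay if name.endswith('bias') else decay).append(p)
    return [{'params': decay, 'lr': lr, 'weight_decay': wd},
            {'params': no_decay, 'lr': lr, 'weight_decay': 0.0}]

optimizer = torch.optim.RAdam(
    param_groups(model.encoder, lr=7e-5, wd=3e-5) +
    param_groups(model.decoder, lr=3e-3, wd=3e-4)
)
</code></pre>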
<h2>LR Schedule</h2>
<p>Flat for 30 epochs, then cosine anneal over 220 epochs. Typically I stopped training around 150-200
epochs.</p>
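<p>One way to express that schedule as a <code>LambdaLR</code> multiplier, stepped once per epoch (a sketch, not the exact scheduler used):</p>
<pre><code>import math
from torch.optim.lr_scheduler import LambdaLR

FLAT, ANNEAL = 30, 220

def flat_then_cosine(epoch):
    if epoch < FLAT:
        return 1.0                                 # hold the base LR for 30 epochs
    t = min((epoch - FLAT) / ANNEAL, 1.0)
    return 0.5 * (1.0 + math.cos(math.pi * t))     # cosine anneal over 220 epochs

scheduler = LambdaLR(optimizer, lr_lambda=flat_then_cosine)  # scheduler.step() once per epoch
</code></pre>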
<h2>Image Sizes</h2>
<p>256x384, 256x416, 256x448, 256x480</p>
<p>Larger image sizes gave better results, but so did larger batch sizes. The <code>se_resnext50_32x4d</code>
encoders could use a batch size of 32-36, while the <code>efficientnet-b5</code> encoders typically used a
batch size of 16-20.</p>
<h2>Grayscale Input</h2>
<p>The images were provided as 3-channel duplicated grayscale. I modified the models to accept 1
channel input, by recycling pretrained weights. I did a bunch of testing around this as I was
worried it might hurt convergence, but using 3-channel input didn't give better results.</p>
<p>I parameterised the recycling of the weights so I could train models using the R, G, or B pretrained
weights for the first conv layer. My hope was that this would produce a more diverse model ensemble.</p>
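<p>A sketch of what that weight recycling can look like: either summing the pretrained RGB filters or keeping just the R, G or B filters via the <code>channel</code> argument (the attribute path to the first conv depends on the encoder):</p>
<pre><code>import torch
import torch.nn as nn

def to_single_channel(conv, channel=None):
    """Replace a pretrained 3-channel conv with a 1-channel one.
    channel=None sums the RGB filters; channel=0/1/2 keeps only the R/G/B weights."""
    new_conv = nn.Conv2d(1, conv.out_channels, kernel_size=conv.kernel_size,
                         stride=conv.stride, padding=conv.padding,
                         bias=conv.bias is not None)
    with torch.no_grad():
        w = conv.weight                       # (out_channels, 3, k, k)
        if channel is None:
            new_conv.weight.copy_(w.sum(dim=1, keepdim=True))
        else:
            new_conv.weight.copy_(w[:, channel:channel + 1])
        if conv.bias is not None:
            new_conv.bias.copy_(conv.bias)
    return new_conv

# e.g. for a ResNet-style encoder (the attribute name is encoder-specific):
# model.encoder.conv1 = to_single_channel(model.encoder.conv1, channel=0)
</code></pre>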
<h2>Augmentation</h2>
<p>I used the following <a href="https://github.com/albu/albumentations">Albumentations</a></p>
<p><code>
Compose([
OneOf([
CropNonEmptyMaskIfExists(self.height, self.width),
RandomCrop(self.height, self.width)
], p=1),
OneOf([
CLAHE(p=0.5), # modified source to get this to work with grayscale
GaussianBlur(3, p=0.3),
IAASharpen(alpha=(0.2, 0.3), p=0.3),
], p=1),
Flip(p=0.5),
Normalize(mean=[0.3439], std=[0.0383]),
ToTensor(),
])
</code></p>
<p>I found the <code>mean</code> and <code>std</code> from the training images.</p>
<p>It would have been nice to experiment with more of these, but it took so long to train the models
it was difficult. I found these augs worked better than simple crops/flips and stuck with them.</p>
<h2>Validation</h2>
<p>I used a random 20% of the training data for validation with each run.</p>
<p>Models were largely selected based on their Mean Dice Coefficient. Where a few models had similar
performance I would look at the Dice Coefficient for the most common class and the loss.</p>
<p>High scoring models I trained had a Mean Dice Coefficient around 0.951 - 0.952. Here's an example
validation score:</p>
<p><code>
val_dice_0 : 0.9680132865905762
val_dice_1 : 0.9881579875946045
val_dice_2 : 0.8649587631225586
val_dice_3 : 0.9835753440856934
val_dice_mean : 0.9511765241622925
</code></p>
<h2>Pseudo Labels</h2>
<p>I used the ensemble outputs of models as pseudo labels, which gave a huge performance boost. I
used a custom <a href="https://github.com/khornlund/pytorch-balanced-sampler">BatchSampler</a> to undersample
(sample rate ~60%) from the pseudo-labelled data, and fix the number of pseudo-labelled samples per
batch (each batch would contain 12% pseudo-labelled samples).</p>
<p>Some other people had poor results with pseudo-labels. Perhaps the technique above helped mitigate
whatever downsides they faced.</p>
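<p>The linked repo contains the real sampler; purely as an illustration of the idea (a fixed number of pseudo-labelled samples per batch, drawn from an undersampled pool), a from-scratch batch sampler could look like this. Note this is not the API of that package:</p>
<pre><code>import random
from torch.utils.data import Sampler

class FixedMixBatchSampler(Sampler):
    """Each batch holds `n_pseudo` pseudo-labelled indices (from a randomly
    undersampled pool) plus real labelled indices; indices refer to one combined dataset."""
    def __init__(self, real_idx, pseudo_idx, batch_size=16, n_pseudo=2, pseudo_rate=0.6):
        self.real_idx, self.pseudo_idx = list(real_idx), list(pseudo_idx)
        self.batch_size, self.n_pseudo, self.pseudo_rate = batch_size, n_pseudo, pseudo_rate

    def __iter__(self):
        real = random.sample(self.real_idx, len(self.real_idx))             # shuffle real data
        pool = random.sample(self.pseudo_idx,
                             int(len(self.pseudo_idx) * self.pseudo_rate))  # undersample pseudo
        n_real = self.batch_size - self.n_pseudo
        for i in range(len(self)):
            batch = real[i * n_real:(i + 1) * n_real]
            batch = batch + random.sample(pool, self.n_pseudo)              # fixed pseudo count
            random.shuffle(batch)
            yield batch

    def __len__(self):
        return len(self.real_idx) // (self.batch_size - self.n_pseudo)

# loader = DataLoader(dataset, batch_sampler=FixedMixBatchSampler(real_idx, pseudo_idx))
</code></pre>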
<h2><a href="https://github.com/NVIDIA/apex">Apex Mixed Precision</a></h2>
<p>I tried to get this to work for so long in order to take advantage of the larger batch sizes it
enables. However, no matter what I tried, I had worse convergence using it. Eventually I gave up.</p>
<p>It's possible I was doing something wrong - but I invested a lot of time into trying this, and from
talking to others at work it seems like they've had similar issues.</p>
<h1>Post Processing & Submission</h1>
<h2>TTA</h2>
<p>Only flip along dim 3 (W). I found TTA wasn't very useful in this competition, and consumed
valuable submission time.</p>
<h2>Prediction Thresholds</h2>
<p>I used 0.5 for each class ie. if the output was > 0.5, the output was positive for that defect.</p>
<p>I was worried that tweaking these would risk overfitting public LB.</p>
<h2>Defect Pixel Thresholds</h2>
<p>I used 600, 600, 1000, 2000. If an image had fewer than this number of defect pixels for a class,
all predictions for that class were set to zero.</p>
<p>I tested some different values but it actually didn't have much impact.</p>
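<p>In code those thresholds amount to something like this (NumPy sketch):</p>
<pre><code>import numpy as np

MIN_PIXELS = [600, 600, 1000, 2000]          # per-class minimum defect area

def apply_pixel_thresholds(masks, min_pixels=MIN_PIXELS):
    # masks: (4, 256, 1600) binary masks for one image
    for c in range(4):
        if masks[c].sum() < min_pixels[c]:   # too few defect pixels
            masks[c] = 0                     # predict "no defect" for this class
    return masks
</code></pre>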
<h2>Component Domination</h2>
<p>Since my models were set up to predict 4 classes, I was using <code>sigmoid</code> rather than <code>softmax</code>
on their outputs, which meant sometimes I got overlapping defect predictions. I had an idea to
look at the size of each component, and have the larger components "dominate" (remove) smaller
overlapping components. I got a tiny boost from this, but I think it may simply be because at that
stage I didn't have another way of ensuring there was only 1 defect prediction at each pixel.</p>
<p>I stopped using this technique in favour of simply taking the highest defect prediction for each
pixel.</p>
<h2>Dilation</h2>
<p>I tried varying amounts of dilation. Sometimes I got a small improvement, and sometimes got worse
results so I stopped using it.</p>
<h2>Ensemble Averaging</h2>
<p><em>Here is where I made the mistake that cost me 1st place.</em></p>
<p>I had been using mean averaging (eg. train 5 models, take the mean prediction for each class for
each pixel), and was struggling to break into the gold medal bracket. On the
last day, I was reading the discussion forums and started comparing the defect distributions of my
output with what others had probed to be the true defect distribution.</p>
<p>It looked like my models were overly conservative, as the number of defects I was detecting was
lower than other people and much lower than the probed LB distribution. So, I started thinking about
how I could increase the number of defect predictions. I had done some experimentation with
pixel thresholds, and found that changing them didn't have much of an effect. I knew that the score
was very sensitive to the prediction thresholds, so I was worried about fiddling with that and
potentially overfitting to the public LB. Then, I had an idea:</p>
<p>I'd noticed that sometimes I would add new, high-performing models to my ensemble, and my LB score
would decrease. I wondered if this might be explained by a majority of models <em>mean averaging</em> out
positive predictions too often. If we're detecting faults, maybe we should weight positive
predictions more than negative ones? I decided to try <em>Root Mean Square</em> averaging, as this would
hug the higher values. For example:</p>
<pre><code>input: [0.2 0.3 0.7]
Mean: 0.40
RMS: 0.45

input: [0.1 0.2 0.9]
Mean: 0.40
RMS: 0.54

input: [0.4 0.5 0.6]
Mean: 0.50
RMS: 0.51

input: [0.3 0.3 0.8]
Mean: 0.47
RMS: 0.52

input: [0.1 0.8 0.8]
Mean: 0.57
RMS: 0.66
</code></pre>
<p>This looks good. If one model prediction is a <code>.9</code>, and the others are <code>.1</code> and <code>.2</code>, shouldn't we consider that a defect? (No, no we shouldn't. I was wrong.)</p>
<p>But when I tried it, I got a significant improvement on the LB! I went from <code>0.91809</code> to
<code>0.91854</code>, which was my best (public) score yet. Unknown to me, my private LB score had just
dropped from <code>0.90876</code> (winning score) to <code>0.90259</code> (rank 55).</p>
<p>I'm pretty new to Kaggle, and while I'd heard about leaderboard "shakeup", I didn't know it could
be this severe. I should have selected a 2nd submission from before I started using RMS to average
the results - and if I'd picked any of my recent submissions, I would have taken 1st place.</p>
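<p>For reference, the two averaging schemes compared above are simply (NumPy sketch with dummy predictions):</p>
<pre><code>import numpy as np

preds = np.random.rand(5, 256, 1600, 4)        # e.g. 5 models' per-pixel probabilities
mean_avg = preds.mean(axis=0)
rms_avg = np.sqrt((preds ** 2).mean(axis=0))   # pulled towards the larger values
</code></pre>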
<h2>Classification Model</h2>
<p>Others on the discussion forums were advocating use of a two-step submission:</p>
<pre><code>1. Use a classifier to determine whether an image contains a each fault anywhere
2. Ignore segmentation predictions for those ruled out by the classifier
</code></pre>
<p>The rationale was that false positives were very expensive, due to the way the Dice metric is
calculated. By doing this, you could reduce FP.</p>
<p>I was pretty skeptical of this approach, and thought it would only be useful early in the
competition while the precision of people's convolutional models was poor. But, as the competition
progressed and I was struggling to climb the LB, I thought I'd better give it a go.</p>
<p>Since I'd spent so long tuning my fully convolutional segmentation ensemble, I was worried about
allowing an "untuned" classifier to veto my segmentation predictions (and tuning it takes time).
I decided on a strategy to use the classification prediction to amplify the defect pixel
thresholds:</p>
<pre><code>1. When the classifier output is high (fault), we leave the pixel thresholds at their normal level.
2. When the classifier output is low (no fault), we raise the pixel threshold by some factor.
</code></pre>
<p>The idea was that this would allow a false negative from the classifier to be overruled by a strong
segmentation prediction.</p>
<p><code>
def compute_threshold(t0, c_factor, classification_output):
"""
t0 : numeric
The original pixel threshold
c_factor : numeric
The amount a negative classification output will scale the pixel threshold.
classification_output : numeric
The output from a classifier in [0, 1]
"""
return (t0 * c_factor) - (t0 * (c_factor - 1) * classification_output)
</code>
Here's an example illustrating how the threshold is scaled with different factors. I tried values
5, 10, and 20.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2116899%2F1012293782d945641d142a392097f779%2Fclassifier-threshold-scaling.png?generation=1572057115647669&alt=media" alt=""></p>
<p>Here's a table comparing the results of my submissions with a classifier, to my previous ones. Note
I ran it twice with <code>c_scale = 5</code> and changed some weights in my ensemble.</p>
<p>| Config | Public LB | Private LB |
| --- | --- | --- |
| No classifier | 0.91817 | 0.90612 |
| c_scale = 5 | 0.91817 | 0.91023 |
| c_scale = 5 | 0.91832 | 0.90951 |
| c_scale = 10 | 0.91782 | 0.90952 |
| c_scale = 20 | 0.91763 | 0.90911 |</p>
<p>From looking at my public LB score, I got zero and tiny improvements using a classifier and
<code>c_scale=5</code>. When I tried increasing it, it looked like the results got much worse. Unknown to me,
this was actually taking my private LB score from rank 11 to significantly better than rank 1! The
first result, where my public LB score didn't increase at all, was actually the highest scoring
submission I made all competition. As far as I know, no one on the discussion board has reported
scoring this high on any of their submissions.</p>
<p>I gave up on using a classifier after this, and for the rest of my submissions I used only
fully convolutional models. I managed to get similar Private LB scores with a fully convolutional
ensemble, but using a classifier may have improved this further. Kaggle has disabled submissions
while the efficiency prize is running, so I won't be able to test this.</p>
<h2>Final Ensemble</h2>
<p>I used the following fully convolutional ensemble for my final submissions:</p>
<ul>
<li>Unet
<ul><li>2x se_resnext50_32x4d</li>
<li>1x efficientnet-b5</li></ul></li>
<li>FPN
<ul><li>3x se_resnext50_32x4d</li>
<li>1x efficientnet-b5</li>
<li>1x inceptionv4</li></ul></li>
</ul>
<p>| Averaging Technique | Public LB | Private LB |
| --- | --- | --- |
| RMS | 0.91844 | 0.90274 |
| Mean^ | 0.91699 | <strong>0.90975</strong> |</p>
<p>^I re-ran my final submission with mean-averaging after the deadline to check its performance.</p>
<h2>Submission Scores</h2>
<p>Visualisation of scores in the final week of the competition:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2116899%2Fc6e35f8812b58a09a427e5a22d86b5c0%2Ffinal-week-lb-scores.png?generation=1572057076948200&alt=media" alt=""></p>
<p>The dip at the end is when I started using RMS averaging.</p>
<h1>Submission Kernels</h1>
<p>Here are some public kernels showing the scores. There's a lot of copy-pasted code because of
the kernel requirement of this competition - no easy way around it!</p>
<ol>
<li><a href="https://www.kaggle.com/khornlund/sever-ensemble-classification?scriptVersionId=22207424">Private LB 0.91023 | Classification + Segmentation Ensemble </a></li>
<li><a href="https://www.kaggle.com/khornlund/fork-of-sever-ensemble-3?scriptVersionId=22527620">Private LB 0.90975 | Fully Convolutional Segmentation Ensemble</a></li>
</ol>
<h1>Discussion</h1>
<h2>Improvements</h2>
<p>Next time I would like to:</p>
<ul>
<li>Softmax w/ background class</li>
<li>Lovasz Loss</li>
<li>Inplace BatchNorm (potentially huge memory saving)</li>
</ul>
<p>And of course, <em>manually choose two submissions that are appropriately diverse</em>.</p>
<p>With se_resnext + FPN, I think you'll be able to fit a batch size of 20-24? There are some configs provided in the repo, so just modify the batch size. If you have any problems with the code raise an issue on github and I'll have a look.</p>
<p>You can use these pseudo labels, just put them in the same location as <code>train.csv</code>.</p>
<p>Thanks Chris. It didn't make sense to me that people weren't taking advantage of the continuous output of their classifiers!</p>
<p>The averaging was performed when I ran inference on the test data, so I didn't have a way to evaluate it other than by the change in public LB. I <em>could</em> have set up a local CV to evaluate the ensemble and different post-processing techniques, but then I'd have to have another hold-out set for a second validation phase, reducing the amount of training data.</p>
<p>I did do trial runs and count the number of defect images output by the ensemble with different techniques. It was something like:</p>
<p>| source | class 0 | class 1 | class 2 | class 3 |
| --- | --- | --- | --- | --- |
| public test set | 128 | 43 | 741 | 120 |
| mean avg | 92 | 1 | 579 | 110 |
| rms | 104 | 3 | 590 | 110 |</p>
<p>So I figured I was getting closer to the target distribution, and then my LB score improved significantly. I was so sure it was a good idea and was so wrong 👎 </p>
<p>The DataLoader undersamples the pseudo labelled images by creating a custom BatchSampler. It can figure out which is which by loading train.csv and pseudo.csv</p> |
Severstal: Steel Defect Detection | Silver frankenstein's monster (57th place) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This solution is actually a small modification of great kernel by @lightforever (<a href="https://www.kaggle.com/lightforever/severstal-mlcomp-catalyst-infer-0-90672">https://www.kaggle.com/lightforever/severstal-mlcomp-catalyst-infer-0-90672</a>) which used mlcomp/catalyst 🤘.</p>
<p>Things i added:
- Heng's resnet34 classifier (found here: <a href="https://www.kaggle.com/bibek777/heng-s-model-inference-kernel">https://www.kaggle.com/bibek777/heng-s-model-inference-kernel</a>); sorry, I didn't notice an appropriate model in the mlcomp kernel :/
- One more unet with resnext50_32x4d encoder (fold number 3)</p>
<p>What could have worked (it gave a higher private score, but I didn't select those solutions):
- Efficientnet-b5/vgg16 encoders
- More resnext50_32x4d encoders trained on different folds </p>
<p>What didn't work:
- Additional augmentations for classification (vertical and horizontal flip) and segmentation (vertical flip)
- Thresholds higher than 0.5</p>
<p>Here is my code:
<a href="https://www.kaggle.com/shiron8bit/silver-frankenstein">https://www.kaggle.com/shiron8bit/silver-frankenstein</a></p>
<p>Kudos to @hengck23 and @lightforever!</p> |
Severstal: Steel Defect Detection | Small Writeup - From 20th to 40th | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Split the data in 5 folds, train on 4 of them and validate on the last one.
You have 5 ways to select the 1 to validate, we only used one of the five ways.</p>
<p>I think you've got it. To be completely accurate, the only thing that changes is the crop_size parameter of my cropping augmentation in the dataset. Model stays the same.</p>
<p>Hi everybody! </p>
<p>First of all thanks to my team-mates @mnpinto @iafoss for those intense weeks of competition. When we merged 2 weeks ago we were around spot 300, but we managed to climb pretty well, only to eventually go down by 20 spots. </p>
<h3>Why we went down (probably)</h3>
<ul>
<li>Too low thresholds</li>
<li>Pseudo labeling : Models without it scored a bit better on private </li>
<li>Not robust enough validation, we did not have the time to do 5 folds</li>
<li>Not enough luck, still we're quite lucky to be 40th given that we could've fallen much more</li>
</ul>
<h2>Models</h2>
<h3>Classification</h3>
<p>Simple Resnet34, we used two folds out of 5.
TTA : None , hflip, vflip, both</p>
<h3>Segmentation</h3>
<p>We went for a voting ensemble of 7 models trained on the first of our 5 folds:
- UnetSeResNext50 x 4
- UnetDensenet169
- FPNEfficientnet x 2</p>
<p>SeResNexts performed better but we still wanted to add some diversity. The FPN is basically the one shared by Heng (huge thanks for sharing so much) with some personal modifications.</p>
<p>TTA : None, hflip, vflip</p>
<h3>Tricks for segmentation</h3>
<ul>
<li>Auxiliary classifier head (4 classes)</li>
<li>Lovasz Loss (5 classes + softmax) + BCE for the classification.</li>
<li>Training on progressive crop size : 256x256 -> 512x256 -> 1024x256 -> Full resolution</li>
</ul>
<h3>Augments</h3>
<ul>
<li>Crops</li>
<li>Flips</li>
<li>Brightness</li>
<li>Putting black bars on one of the sides to simulate the edge of the plate</li>
</ul>
<p><em>Thanks for reading!</em></p>
<p>We first use the 2 classifiers to select images that could have defects and only run the segmentation models on those images.</p>
<p>Overall it improved my dice scores during training. Plus, afterwards it can be used to remove some false positives. Used with good thresholds it can be really powerful.</p>
Severstal: Steel Defect Detection | Step by step (9th place solution) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Severstal: Steel Defect Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Well I was thinking to add third stage. Instead calculating number of pixels in mask it would be nicer to train model which takes image and mask and decide is it valid or false positive. But I had no time. Maybe in another competition :)</p>
<p>Problem: there is a small number of images with defects.
Solution 1: train the segmentation model only on images with defects.
It may work only if your classifier is so awesome it can predict everything.
Solution 2: train on all images with defects plus some empty images.
Which images to choose? The ones with a high probability from the classifier.
Where "high" means >0.01 :)</p>
<p>It will be a long story (just like I did with <a href="https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/discussion/44659#251185">Porto Seguro</a>) so grab yourself a coffee :)</p>
<h2>Premature optimization is the root of all evil</h2>
<p>What was the goal of this competition? In my opinion the goal was to predict a mask for a specific image and a specific defect type. That's the way the data is constructed - one row of data is a combination of image and defect type. </p>
<p>But for some reason (almost) everyone assumed that the goal was to build a model which predicts 4 masks from 1 image. You may think it's not really a difference; I will explain why I think it matters.</p>
<p><a href="https://www.kaggle.com/c/severstal-steel-defect-detection/discussion/106099#610085">Two months ago</a> I wanted to build 4 classification models and 4 segmentation models - one for each class. In this thread <a href="/phoenix9032">@phoenix9032</a> proposed three Pipeline Strategies. Please look at strategy 2. </p>
<p>We agree we need some kind of classifier. Why do we need it? What should it do? The classifier takes an image as input and predicts the probability that a class has a defect. And what should we do with that information? </p>
<p>We can use that information after segmentation to say that the segmentation was pointless. But in that case what was the reason to start segmentation at all? We have only 1 hour to predict everything.</p>
<p>So we need to use that information before segmentation. When the classifier predicts that there is a specific defect we can call segmentation; if not - we can skip segmentation and save time. But wait - if your segmentation model is predicting 4 masks at once you don't need information about the defect type, because you will call segmentation anyway if at least one defect is possible.</p>
<p>But what if you build a single classifier with 4 outputs and then use a separate segmentation model for each defect? Then you need to execute segmentation only for the defect types with high probability.</p>
<p>How do you select which model is the best one? The one with the best metrics, right? But how do you know which model has the best metrics if you average over 4 defects? What if I told you that predicting each mask is an independent task? When you split the competition into 4 different parts you can work on each of them independently - which makes the solution much more stable - and then you can even survive the shakeup.</p>
<p>And that's not all. How do you handle class imbalance? I read many posts saying that people can't predict class 1 and class 2. Some people dropped class 2. In my case each class was a separate training, on different training data.</p>
<p>You can assume otherwise - that predicting defects is not an independent task, since there are no pixels with two different defects, so you may say a model which predicts all defects at once is smarter. Like softmax with 5 classes (I had no time to try that). But what are you doing after predicting masks and before saving the submission? You are executing some kind of postprocessing - and then you can remove a defect entirely if it's too small. And again this is independent for each class.</p>
<h2>metrics</h2>
<p>Soon I realized that <a href="https://www.kaggle.com/c/severstal-steel-defect-detection/discussion/107182">something is wrong with the metrics</a>. I wasn't sure, maybe there was some mistake in my thinking - thanks to <a href="/zavareh89">@zavareh89</a> for confirmation.</p>
<p>"The leaderboard score is the mean of the Dice coefficients for each ImageId, ClassId pair in the test set." </p>
<p>It means that the score in this competition was calculated separately for each image and for each defect type. So if the ground truth is a mask with zero pixels and you predict zero pixels - you have score 1, but if you predict at least one pixel - your score is 0. It is crucial to filter out empty masks.</p>
<p>The default metric used in kernels (at least in Keras kernels) takes a whole batch of images, all four masks together, and flattens them. Then the intersection and union are calculated on a one-dimensional array. In which case is the ground truth a mask with zero pixels if you flatten all images and all masks? Only when all 4 masks of all images in the batch are empty - which is a rare case. This metric is totally wrong.</p>
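<p>Written out per (image, class) pair, with the empty-vs-empty case scoring 1, the competition metric looks like this (NumPy sketch):</p>
<pre><code>import numpy as np

def competition_dice(true_masks, pred_masks):
    # true_masks / pred_masks: (n_images, 4, H, W) binary arrays
    scores = []
    for t_img, p_img in zip(true_masks, pred_masks):
        for t, p in zip(t_img, p_img):               # one score per (image, class) pair
            union = t.sum() + p.sum()
            if union == 0:
                scores.append(1.0)                   # empty mask predicted as empty
            else:
                scores.append(2.0 * (t * p).sum() / union)
    return float(np.mean(scores))
</code></pre>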
<h2>loss</h2>
<p>If there is something I learned from the TGS Salt competition, it is the lovasz loss.</p>
<p>The problem with binary crossentropy is that it works in the beginning, but then the loss keeps decreasing while the metric stops improving. Lovasz loss is much better for segmentation, but it needs a good model to start from, because the training is slower.</p>
<p>I tested lovasz_hinge and lovasz_softmax. </p>
<p>So the correct workflow is:
- train with BCE
- take best model (using metrics not loss!)
- train with lovasz_hinge</p>
<p>It always works. </p>
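<p>As a sketch, that workflow is just two training runs with a checkpoint reload in between. Here <code>train(...)</code> and the checkpoint name are placeholders, and <code>lovasz_hinge</code> refers to the public PyTorch LovaszSoftmax implementation, applied per defect channel on the raw logits:</p>
<pre><code>import torch
import torch.nn.functional as F
# from lovasz_losses import lovasz_hinge   # public PyTorch implementation (assumed available)

def lovasz_loss(logits, targets):
    # one binary lovasz_hinge per defect channel, averaged
    return torch.stack([lovasz_hinge(logits[:, c], targets[:, c])
                        for c in range(logits.shape[1])]).mean()

# stage 1: warm up with BCE, keep the checkpoint with the best *metric* (placeholder train helper)
train(model, criterion=F.binary_cross_entropy_with_logits, lr=1e-3)

# stage 2: reload the best checkpoint and fine-tune with lovasz at a lower lr
model.load_state_dict(torch.load('best_bce_model.pth'))
train(model, criterion=lovasz_loss, lr=1e-4)
</code></pre>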
<h2>segmentation models</h2>
<p>Thanks to <a href="/cdeotte">@cdeotte</a> for <a href="https://www.kaggle.com/c/severstal-steel-defect-detection/discussion/103367#latest-649639">information how to use segmentaton models</a></p>
<p>Thanks to <a href="/bibek777">@bibek777</a> for <a href="https://www.kaggle.com/c/severstal-steel-defect-detection/discussion/111110#640168">info about EfficientNet.</a></p>
<p>And of course thanks to <a href="/pavel92">@pavel92</a> for creating segmentation models :)</p>
<p>In the beginning I was using unet with resnet34. Then I tried different networks - resnet50, densenet, etc... but performance was lower. Then I tried efficientnet and it finally was better.</p>
<p>I used the resnet34 classifier until the end of the competition, but for segmentation I switched to b0, then b1, then b2.</p>
<h2>PyTorch</h2>
<p>Everything was good, I was doing my deep learning, and then Kaggle announced a new feature - GPU usage is now limited to 30 hours per week. You may think 30 hours is plenty of time; the truth is I was able to use half of this time in a single day. <a href="https://www.kaggle.com/general/108481#624663">So I wrote this.</a> </p>
<p>I borrowed a GTX 1080 and to my surprise I was quickly able to make everything work locally, so a few days later I bought an RTX 2070. From that time on, all my training was performed on my GPU. Funny thing: the last competition where I used my home computer instead of Kaggle kernels was Porto Seguro (the one with the gold medal).</p>
<p>I had no issues with the speed of my local GPU - it was similar to a Kaggle kernel. But <a href="https://www.kaggle.com/c/severstal-steel-defect-detection/discussion/109185">there was another kind of problem.</a> A cheap GPU has less memory than the one used by Kaggle, so I wasn't able to fit the same batch size. And my model started to overfit. Thanks to <a href="/bfishh">@bfishh</a> for confirmation. Thanks to <a href="/ostamand">@ostamand</a> for info about gradient accumulation in PyTorch. Also thanks to <a href="/sandeepsign">@sandeepsign</a> and <a href="/ashwan1">@ashwan1</a> for info about float16.</p>
<p>I started to learn PyTorch. Spoiler: I was never able to try float16, and gradient accumulation didn't work correctly. But I realized PyTorch is much, much better than TensorFlow/Keras; I had much more control over everything. I know the guys at Google are smart and that's why they designed TensorFlow 2.0, but I had no time to check it, so I use PyTorch.</p>
<p>Funny thing: I could fit a larger batch in PyTorch than in Keras for the same architecture. Maybe it is related to the framework itself, maybe it is related to the segmentation models implementation.</p>
<h2>classifier and class imbalance</h2>
<p>On October 1st I trained a 4-fold resnet34 classifier. As I wrote before, the classifier is used to skip segmentation for some images. But in my case it is also used to handle class imbalance.</p>
<p>When the classifier predicts that an image may have a defect, that image is later processed by the segmentation model. Let me rephrase this sentence - when the classifier predicts that an image doesn't have a defect - it is not processed by the segmentation model. We can conclude something from that about the distribution of data used in the segmentation model.</p>
<p>For each class:
- take all images with mask
- add all images with high prediction of classifier
- train segmentation only on this data</p>
<p>I started from 0.5 threshold, final threshold was 0.01.
This is my final trainset for each class:
(targets means ground truth mask, outputs means classifier > threshold, all is union of them)</p>
<p><code>
class 1
all 2405 / 12568
outputs 2377 / 12568
targets 897 / 12568
all-targets 1508 / 12568
class 2
all 949 / 12568
outputs 941 / 12568
targets 247 / 12568
all-targets 702 / 12568
class 3
all 8398 / 12568
outputs 8381 / 12568
targets 5150 / 12568
all-targets 3248 / 12568
class 4
all 1603 / 12568
outputs 1591 / 12568
targets 801 / 12568
all-targets 802 / 12568
</code></p>
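<p>Building each class's training set is then just the union of the two index sets (sketch; <code>image_ids</code>, <code>image_has_mask</code> and <code>clf_prob</code> are placeholders for your own ground truth and classifier outputs):</p>
<pre><code>import numpy as np

def build_trainset(image_ids, image_has_mask, clf_prob, threshold=0.01):
    # image_ids: np.array of image names
    # image_has_mask: boolean array, True if the image has a GT mask for this class
    # clf_prob: classifier probability for this class, one value per image
    targets = set(image_ids[image_has_mask])
    outputs = set(image_ids[clf_prob > threshold])
    return sorted(targets | outputs)    # "all" = GT positives + classifier positives
</code></pre>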
<h2>final training workflow</h2>
<p>In the EfficientNet thread <a href="/hengck23">@hengck23</a> wrote following sentence:</p>
<p>"My efficientnet b3 and b5 with fpn gives good results. Input size is crop of 256x400. You may want to use this as reference"</p>
<p>But for me, crops were always worse than full size. You can read it <a href="https://www.kaggle.com/c/severstal-steel-defect-detection/discussion/111110#648254">here</a>. </p>
<p>And then <a href="/phoenix9032">@phoenix9032</a> wrote following sentence:</p>
<p>"Yes Chris . With qubvel models I can train with 256x256, and then predict with 256x1600. Or even run 256x256 for 30 epochs , load the model and finetune with 256x512 by passing that as input size and then finally finetune stage 3 with 256x1600 as well . "</p>
<p>Theoretically there is no new information here. I was also using crops for training and full size for prediction. But finetuning on different sizes? Why not? So I tested it. And there was no improvement.</p>
<p>The main problem is overfitting. First the model generalizes, but at some point it overfits to the training data. By using augmentation we can push it more, but then it overfits to all possible augmentations and there is no longer improvement on validation.</p>
<p>But why, when I train on 256x416 and then train on 256x800, does it need time to overfit again? If the model is already dumb it should stay dumb with 256x800 too. Instead it looks at the larger input data and learns new stuff. The validation metric was not increasing, but...</p>
<p>What if I use different crop sizes not to finetune but as an augmentation?</p>
<p>The problem was that the augmentation was in a DataGenerator which produces a single image, and for training I needed a whole batch of the same resolution. I could have looked into the PyTorch docs, because it's probably very easy to implement a custom DataLoader, but it was October 17th and I really had no time to fix potential bugs in my code.</p>
<p>I was already using two DataGenerators: one for the training and one for validation. Why not create 4 DataGenerators and 4 DataLoaders? (PyTorch FTW!)</p>
<p>My final training workflow is:
- train on 256x256 crops (all images from train dataset) batch_size = 12
- train on 416x256 crops (all images from train dataset) batch_size = 12
- train on 608x256 crops (all images from train dataset) batch_size = 8
- validate on full size (all images from valid dataset) batch_size = 4
- call it an epoch</p>
<p>I verified that it works for each class - the result is always better than full size training.</p>
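<p>The multi-crop "epoch" then simply iterates over several DataLoaders in sequence (sketch):</p>
<pre><code>def train_one_epoch(model, loaders, criterion, optimizer, device='cuda'):
    # loaders: e.g. [loader_256x256, loader_416x256, loader_608x256]
    model.train()
    for loader in loaders:
        for images, masks in loader:
            images, masks = images.to(device), masks.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), masks)
            loss.backward()
            optimizer.step()
    # ...followed by a full-size validation pass on a fourth loader
</code></pre>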
<p>It's important to decrease lr:
<code>
scheduler = ReduceLROnPlateau(optimizer, mode="min", patience=3, factor=0.5, eps=1e-09, verbose=True)
</code></p>
<p>For BCE stage:
<code>
optimizer = optim.Adam(model.parameters(), lr = 1e-3)
</code></p>
<p>And for lovasz stage:
<code>
optimizer = optim.Adam(model.parameters(), lr = 1e-4)
</code></p>
<h2>final days</h2>
<p>After October 18th I used 24h per day only to train my final models.
The last one was trained a few hours before the competition deadline.
Without it I would probably be in the silver zone.</p>
<p>My top two submissions on the public LB are also my top two submissions on the private LB. I believe it is because my solution is stable.</p>
<p>and now some details from my winning kernel:</p>
<h3>classification - 4 folds resnet34 (BCE loss in name)</h3>
<p><code>
20191003-classifier-class-2/model_class_a_56_0.072748.pth
20191003-classifier-class-2/model_class_a_46_0.074594.pth
20191003-classifier-class-2/model_class_a_51_0.065235.pth
20191003-classifier-class-2/model_class_a_37_0.063228.pth
</code></p>
<h3>segmentation - 5 folds FPN efficientnet-b2 (my metrics in name)</h3>
<p>```
20191019-class-3/model_lovasz_class_3_fold_0_c_25_0.787906.pth
20191020-class-3-fold-1/model_lovasz_class_3_fold_1_c_38_0.800861.pth
20191021-class-3-fold-2/model_lovasz_class_3_fold_2_c_30_0.805968.pth
20191022-class-3/model_lovasz_class_3_fold_3_c_28_0.809485.pth
20191024-class-3-fold-4/model_lovasz_class_3_fold_4_c_28_0.817969.pth</p>
<p>20191024-class-1/model_lovasz_class_1_fold_0_d_1_0.829611.pth
20191024-class-1/model_lovasz_class_1_fold_1_d_1_0.831025.pth
20191024-class-1/model_lovasz_class_1_fold_2_e_1_0.836173.pth
20191024-class-1/model_lovasz_class_1_fold_3_c_30_0.867282.pth
20191024-class-1/model_lovasz_class_1_fold_4_c_20_0.833884.pth</p>
<p>20191020-class-2/model_lovasz_class_2_fold_0_c_7_0.877366.pth</p>
<p>20191023-class-4/model_lovasz_class_4_fold_1_d_21_0.869015.pth
20191023-class-4/model_lovasz_class_4_fold_0_e_3_0.879779.pth
20191023-class-4/model_lovasz_class_4_fold_3_d_1_0.884751.pth
20191023-class-4/model_lovasz_class_4_fold_4_d_2_0.901342.pth
20191023-class-4/model_lovasz_class_4_fold_2_e_9_0.849709.pth
```</p>
<h3>classification thresholds</h3>
<p><code>
class 1: t = 0.3
class 2: t = 0.3
class 3: t = 0.25
class 4: t = 0.1
</code></p>
<h3>classification results</h3>
<p><code>
class 1
idx_outputs_positive 191
class 2
idx_outputs_positive 71
class 3
idx_outputs_positive 781
class 4
idx_outputs_positive 176
</code></p>
<h3>segmentation thresholds</h3>
<p>for each class: min mask size = 1000
pixel thresholds:</p>
<p><code>
class 1: t = 0.55
class 2: t = 0.7
class 3: t = 0.53
class 4: t = 0.5
</code></p>
<h3>segmentation results</h3>
<p><code>
class 1: found masks: 92
class 2: found masks: 9
class 3: found masks: 586
class 4: found masks: 113
</code></p>
<h2>summary</h2>
<ul>
<li>Kaggle is still the best place on the planet</li>
<li>to me PyTorch is better than Tensorflow 1.x / Keras</li>
<li>using segmentation models was very good idea</li>
<li>reading everything on forum is extremely important</li>
<li>I had 3 months for this competition and I trained everything in the last week</li>
</ul>
<p>Thanks Severstal for an awesome competition.
Thanks everyone for great fun!
See you in another competition, hopefully soon, not in another 2 years :)</p>
<p><a href="/phoenix9032">@phoenix9032</a> thanks for inspiration, without your post probably there will be no gold for me :)</p>
<p>lovasz_hinge was better than lovasz_softmax in my tests</p>
<p>image on input, 4 binary outputs</p>
<p>Yes, I took 897 images which had mask.
My classifier said 2377 images should have mask (because I used threshold 0.01).
If you combine 2377 with 897 the sum is 2405, because some of them overlaps.
Then you can subtract 897 from 2405 and you see false positives.
So my training set was 1508 no-defect-1 and 897 defect-1.</p>
<p>BCE is bad loss, so there is no point to combine it with lovasz, it is just used for warm-up ;)</p>
<p>1) first we have weights from model pretrained on ImageNet
2) then with BCE we train it to have so-so quality
3) then with lovasz we train it to have state-of-art quality</p>
<p>For class 3 step 2) and 3) was executed whole night.</p>
<p>Please look at the section "classifier and class imbalance", where I put a list of things for each class:
- targets - ground truth masks
- outputs - classifier predicted masks
- all - union of targets and outputs, i.e. both ground truth and false positives from classifier</p>
<p>The only place where I was doing something on all 4 defects was the classifier. Later everything was separated, so as you can see, for each defect type I have a different training set.</p>
<p>lovasz is used only for segmentation</p>
<p>When you are training your segmentation model with BCE you reach some validation score, like 0.800, and then you can't go any higher. You can use this 0.800 model, or you can execute exactly the same code (in my case with a lower lr) with lovasz loss instead of BCE - and then your validation score goes up to 0.820 or 0.830. And it always works.</p>
<p>I was using that in Keras in TGS Salt; for some reason the TensorFlow implementation of lovasz loss was slow. The PyTorch implementation of lovasz loss has the same speed as BCE.</p>
<p>BTW <a href="/jihangz">@jihangz</a> where is your solution, you are gold, so.... you need to write one :)</p>
<p>this is part of magic, please look at my training workflow</p>
<p>Very similar, I iterate over dataloaders in train step :)</p>
<p>The metric is always more important, because the metric tells you how good the model is.</p>
Shopee - Price Match Guarantee | 114th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Shopee - Price Match Guarantee
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thank you for the hosts and all participants joined this competition, especially my teammate <a href="https://www.kaggle.com/sumansahoo16" target="_blank">@sumansahoo16</a>, <a href="https://www.kaggle.com/apretrue" target="_blank">@apretrue</a> and great notebook shared <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>, <a href="https://www.kaggle.com/tanulsingh077" target="_blank">@tanulsingh077</a>, <a href="https://www.kaggle.com/parthdhameliya77" target="_blank">@parthdhameliya77</a>.</p>
<h1>Summary</h1>
<p><img src="https://user-images.githubusercontent.com/66665933/117750943-3e298100-b24f-11eb-990e-07c7e2c57026.png" alt="image"></p>
<h1>Models</h1>
<table>
<thead>
<tr>
<th>model</th>
<th>LB</th>
<th>Private LB</th>
<th>notes</th>
</tr>
</thead>
<tbody>
<tr>
<td>nfnet_l0+tfidf</td>
<td>0.732</td>
<td>0.721</td>
<td>ArcFace</td>
</tr>
<tr>
<td>concat([nfnet_l0, sbert])+tfidf</td>
<td>0.736</td>
<td>0.726</td>
<td>ArcFace</td>
</tr>
<tr>
<td>concat([nfnet_l0, sbert])+tfidf</td>
<td><strong>0.739</strong></td>
<td><strong>0.729</strong></td>
<td>CurricularFace, not chosen</td>
</tr>
<tr>
<td>concat([nfnet_l0, nfnet_l1, sbert, distilbert,w2v, phash])+tfidf</td>
<td><strong>0.739</strong></td>
<td>0.728</td>
<td>selected</td>
</tr>
</tbody>
</table>
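<p>The "concat" rows in the table above refer to L2-normalising each embedding and concatenating them before the neighbour search, roughly like this (sketch; <code>image_emb</code>/<code>text_emb</code> are the precomputed embeddings and the 0.3 distance threshold is only illustrative):</p>
<pre><code>import numpy as np
from sklearn.neighbors import NearestNeighbors

def l2norm(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# normalising each part first gives the two modalities equal weight in the cosine distance
combined = np.concatenate([l2norm(image_emb), l2norm(text_emb)], axis=1)

nn_model = NearestNeighbors(n_neighbors=50, metric='cosine').fit(combined)
dists, idx = nn_model.kneighbors(combined)
matches = [idx[i][dists[i] < 0.3] for i in range(len(combined))]
</code></pre>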
<h1>Worked</h1>
<ul>
<li>CurricularFace</li>
<li>SAM optimizer</li>
<li>concatenating image_embeddings and text_embeddings</li>
<li>tfidf</li>
<li>TTA(Resize+CenterCrop)</li>
</ul>
<h1>Didn’t work</h1>
<ul>
<li>swin, vit, roberta</li>
<li>thresholds for each mode</li>
<li>CV score </li>
<li>learning bert embeddings for nfnet </li>
</ul>
<h1>inference code</h1>
<p><a href="https://www.kaggle.com/tmhrkt/shopee-inference-notebook?scriptVersionId=62444538" target="_blank">https://www.kaggle.com/tmhrkt/shopee-inference-notebook?scriptVersionId=62444538</a> </p>
<p>Thank you!</p>
<p>Thank you! </p> |
Shopee - Price Match Guarantee | 11th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Shopee - Price Match Guarantee
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://www.kaggle.com/shigemitsutomizawa/shopee-training-bert-11th-place-simple-solution" target="_blank">This is my BERT training code.</a></p>
<p>This code is simplified version, so I'm not sure if you can reproduce the exact same results, but I think you will get similar results if you set 'compute_cv = False' and train.</p>
<p>Sorry, I rechecked the version and found that the epochs and lr_min mentioned above were slightly different.</p>
<ul>
<li>epochs : 7 for xlm-r, 8 for indonesian</li>
<li>lr_min : 2.74e-5 for xlm-r, 1.5e-5 for indonesian</li>
</ul>
<p>Thank you very much. I learned a lot from you.<br>
The only post-processing I used is the one shown in the picture.</p>
<p>To avoid overfitting, I adjusted the number of epochs and other hyperparameters by looking at the local GroupKFold CV.<br>
Then, I used the full dataset to make the final BERT models.<br>
A single BERT model was able to get at least 0.654 on the public LB. I did not have enough submissions left to tune the single-model threshold for the LB, which was different from the one for the local CV.</p>
<p>Two days before the end of the competition, I had made a prediction using the KNN cosine distance for each of the image embeddings and BERT embeddings and taken the union of the results, and the public LB was 0.766, a little short of the gold medal. I was at a crossroads of whether to continue tuning that method with the remaining submissions or to come up with a new method.<br>
I thought the only way to go for the gold medal was to come up with another method. My first thought was to make the boundary circular, and I implemented it with a slightly generalized ellipse, but the result was more false positives and the LB dropped to 0.742. The next step was to increase the power of the elliptic equation to reduce the false positives. With a power of 4, the LB became 0.768, and with a power of 6, the LB improved to 0.769. At this point, there were only five submissions left, so I adjusted the threshold and used up the remaining submissions when the LB was 0.771.</p>
<p>Selecting more values will increase the number of false positives, so I selected 42 based on the validation results using the training data. The results do not change much with other values.</p>
<p>I just tried a few parameters based on my experience. There was only a slight tuning effect, so I'm not sure if my parameters were good or not.</p>
<p>I tried EfficientNets, NFNets, etc. but Swin Transformer worked best. I tried an ensemble of image models, but could not surpass TTA of the best single model.<br>
As my best image model took more than 30 min to run inference on the submission data, 3x TTA was the limit.<br>
The BERT models were fast, but I could not find good models to ensemble.<br>
I also tried AdaCos and Focal Loss, but they didn't work well.<br>
I split the train data using GroupKFold(4) to compute CV and tuned hyperparameters. Then I trained the models using all the train data, and tested by submission.<br>
Since this contest was difficult to test locally and the number of submissions was limited, I was not able to do sufficient testing and tuning.</p>
<p>First of all, I would like to thank all the people who shared their knowledge and suggestions in this competition. I am happy to have won my first gold medal.</p>
<p>The details of my solution are as follows.</p>
<h1>Training</h1>
<p>I finetuned one image model and two BERT models.</p>
<h3>1. Image model</h3>
<ul>
<li>backbone_model : swin_base_patch4_window12_384 (from timm)</li>
<li>input_size : 384</li>
<li>loss : ArcFace (scale=34, margin=0.5)</li>
<li>fc layer (embedding) dimensions : 768</li>
<li>batch_size = 12 x 2 accumulation (I trained on Google Colab with AMP)</li>
<li>train epochs : 11</li>
<li>optimizer : Ranger</li>
<li>lr scheduler : warmup 2 epochs from 7.5e-6 to 1e-4, cosine decay to 1.5e-5 for backbone, x 2 for fc layer</li>
<li>augmentations : RandAugment (see below)</li>
</ul>
<pre><code>from timm.data import create_transform
def get_train_transforms(input_size=384):
return create_transform(
input_size=input_size,
scale=(0.6, 1.0), # Default: (0.08, 1.0)
ratio=(1.0, 1.0), # Default: (3. / 4., 4. / 3.)
hflip=0.5,
vflip=0.5,
is_training=True,
color_jitter=0.1,
auto_augment='rand-m3-n1-mstd0.5-inc1',
re_prob=0.1, # RandomErasing probability
re_mode='pixel', # ['const', 'rand', 'pixel']
re_count=1, # number of erasing blocks per image
)
</code></pre>
<h3>2. BERT models</h3>
<ul>
<li>backbone_model1 : sentence-transformers/paraphrase-xlm-r-multilingual-v1 (from Huggingface)</li>
<li>backbone_model2 : cahya/distilbert-base-indonesian (from Huggingface)</li>
<li>loss : ArcFace (scale=30, margin=0.5)</li>
<li>fc layer (embedding) dimensions : 768</li>
<li>batch_size = 16</li>
<li>train epochs : 7 for xlm-r, 8 for indonesian</li>
<li>optimizer : SAM with AdamW</li>
<li>lr scheduler : (same as image model, except lr_min)</li>
</ul>
<p><a href="https://www.kaggle.com/shigemitsutomizawa/shopee-training-bert-11th-place-simple-solution" target="_blank">This is my BERT training code.</a></p>
<h1>Inference</h1>
<p><img src="https://user-images.githubusercontent.com/75870915/153094971-1b865614-8767-4a2d-8dfb-06a7f9c9d480.jpg"></p>
<h3>Prediction Code</h3>
<pre><code>def get_predictions(df, img_embeddings, bert_embeddings, img_threshold=0.84, bert_threshold=0.84, chunk=32, nearest_one=True, max_preds=42):
CTS = len(df) // chunk
if (len(df) % chunk) != 0:
CTS += 1
preds = []
for j in tqdm(range(CTS)):
a = j * chunk
b = min((j+1) * chunk, len(df))
img_cts = torch.matmul(img_embeddings, img_embeddings[a:b].T).T
bert_cts = torch.matmul(bert_embeddings, bert_embeddings[a:b].T).T
for k in range(b-a):
similarity = (img_cts[k,] / img_threshold) ** 6 + (bert_cts[k,] / bert_threshold) ** 6
sim_desc = torch.sort(similarity, descending=True)
IDX = sim_desc[1][sim_desc[0] > 1][:max_preds].cpu().detach().numpy()
o = df.iloc[IDX].posting_id.values
if (len(IDX) == 1) and nearest_one:
IDX = sim_desc[1][:2].cpu().detach().numpy()
o = df.iloc[IDX].posting_id.values
preds.append(o)
return preds
</code></pre>
<p><a href="https://www.kaggle.com/shigemitsutomizawa/shopee-inference-11th-place-simple-solution" target="_blank">This is my simple inference code.</a></p>
<p><img src="https://user-images.githubusercontent.com/75870915/118309627-7b7c6000-b528-11eb-9eb5-9cec2d97bf2c.jpg" alt="boundary"></p> |
Shopee - Price Match Guarantee | 135th place - Adding item amount check and Prefix Tree+Binning based text processing | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Shopee - Price Match Guarantee
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Following is the code:<br>
<a href="https://www.kaggle.com/imroze/tfidf-arcface-advanced-text-processing-135th" target="_blank">https://www.kaggle.com/imroze/tfidf-arcface-advanced-text-processing-135th</a></p>
<p>This approach uses the following:</p>
<ul>
<li>RAPIDS KNN search using TF-IDF for text and an EfficientNet+NFNet ensemble with ArcFace for images (copied from a public kernel).</li>
<li>Product quantity detection and filtering, in order to differentiate between the same product name with a different size/quantity (see the sketch below).</li>
<li>Initial cleaning of the text by replacing strings.</li>
<li>Using a prefix-tree data structure to find and join words which are sometimes broken in two parts and sometimes not.</li>
<li>Binning similar misspelled long words together when they share the same first and last letter and differ by at most one letter.</li>
<li>Adding all posts with the same title or phash.</li>
<li>Filtering text matches using the Jaccard index.</li>
</ul>
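<p>As a rough illustration of the quantity detection and filtering step, a minimal sketch could look like the following. The unit list, regex and helper names are my own assumptions for illustration, not the author's actual code.</p>
<pre><code>import re

# Hypothetical unit list; the real list is likely longer.
UNITS = ['kg', 'gram', 'gr', 'g', 'ml', 'l', 'pcs', 'meter', 'm', 'yard']
QTY_RE = re.compile(r'(\d+(?:[.,]\d+)?)\s*(' + '|'.join(UNITS) + r')\b')

def extract_quantities(title):
    """Return a set of (value, unit) pairs found in a product title."""
    return {(float(v.replace(',', '.')), u) for v, u in QTY_RE.findall(title.lower())}

def quantities_compatible(title_a, title_b):
    """Reject a candidate match if both titles state quantities that do not overlap."""
    qa, qb = extract_quantities(title_a), extract_quantities(title_b)
    return not (qa and qb and not (qa & qb))

print(quantities_compatible('sabun cair 250 ml', 'sabun cair 500ml'))         # False
print(quantities_compatible('sabun cair 250 ml', 'sabun cair 250ml refill'))  # True
</code></pre>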
Shopee - Price Match Guarantee | 14th Place Gold - Image Text Decision Boundary | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Shopee - Price Match Guarantee
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks RezAm</p>
<p>I did not train on product labels. I trained on optimal threshold for each product. This is what i did. My local validation was two folds. (i.e. I randomly split the <code>label_groups</code> in half and put half in fold 0 and half in fold 1).</p>
<p>I trained my model on fold 0 and predicted fold 1. For each product (i.e. row in fold 1), i predicted using <code>USE_ADJUST = [-0.25, -0.2, -0.15, -0.1, -0.05, 0, 0.05, 0.1, 0.15, 0.2, 0.25]</code> , i then calculated F1 scores for all 11 match predictions and determined what the best <code>USE_ADJUST</code> threshold is for this product (i.e. row in fold 1). Similarly, i trained on fold 1 and predicted fold 0.</p>
<p>Afterward i had a complete train dataframe of 34250 rows and what their optimal <code>USE_ADJUST</code> is. I then trained my stage2 on these optimal <code>USE_ADJUST</code> values.</p>
<pre><code>model = build_stage2()
model.fit(X = [image_embedding, text_embedding],
y = USE_ADJUST*2.0 + 1.0, epochs=5)
model.save_weights('stage2_v1_fold0.h5')
</code></pre>
<p>Product type here means what category an item belongs to. For example, the product may be <code>clothing</code> or it may be <code>furniture</code>, etc. For clothing it is best to use a smaller threshold and for furniture a larger threshold. This is because there are more false positives for clothing and fewer false positives for furniture.</p>
<p>So my decision boundary becomes with the addition of <code>ADJUST</code> where <code>j</code> is df row number:</p>
<pre><code> idx1 = np.where(img[j,] <0.3 + ADJUST[j] )[0]
idx2 = np.where(txt[j,]<0.17 + ADJUST[j] )[0]
idx3 = np.where( 0.62*img[j,] +txt[j,] <0.73 + 1.62*ADJUST[j])[0]
</code></pre>
<p>And the <code>ADJUST</code> is predicted from stage 2 model (that is trained locally on OOF predictions):</p>
<pre><code>def build_stage2():
inp1 = tf.keras.layers.Input(shape=(768,)) #XLM-RoBERTa embeddings
inp2 = tf.keras.layers.Input(shape=(1536,)) #EfficientNetB3 embeddings
x = tf.keras.layers.Concatenate(axis=-1)([inp1,inp2])
for k in range(6):
x = tf.keras.layers.Dense(48,activation='tanh')(x)
x = tf.keras.layers.Dense(1,activation='sigmoid')(x)
model = tf.keras.Model(inputs=[inp1,inp2],outputs=x)
opt = tf.keras.optimizers.Adam(lr=1e-5)
loss = tf.keras.losses.BinaryCrossentropy()
model.compile(optimizer=opt,loss=loss)
return model
</code></pre>
<p>And then</p>
<pre><code>model = build_stage2()
model.load_weights('../input/shopee-stage2/stage2_v1_fold0.h5')
ADJUST = model.predict([text_embeddings,image_embeddings])
ADJUST = (ADJUST - 0.5) / 2.0
</code></pre>
<p>The NN architecture has 6 hidden layers of </p>
<pre><code>x = tf.keras.layers.Dense(48,activation='tanh')(x)
</code></pre>
<p>I could have written this lines 6 times in a row. Instead I used a for loop. Making the NN deep with 6 hidden layers, increased validation accuracy. It allowed the NN to mix the image and text embeddings (and find more non-linear interactions between image and text features) which enabled a more accurate prediction.</p>
<p>Hi. There are 3 plots above. The first is done in PowerPoint. The second is done with <code>import matplotlib.pyplot</code> and uses the <code>plt.matshow()</code> command, and then uses the <code>plt.text()</code> to display the value of each matrix element in the plot. The last uses the <code>plt.imshow()</code> command.</p>
<p>The dot product for NumPy <code>image_embeddings.dot(image_embeddings[a:b,].T).T</code> is just matrix multiplication. Each row of the <code>image_embeddings</code> are one item in the test dataset. There are approximately 70,000 products (i.e. rows). The image embeddings have 1536 features (columns). Each row has been normalized to have length 1. To compute cosine similarity we take the dot product of one row with another row. This is accomplished via matrix multiplication. Afterward we take the transpose so that we have a matrix with <code>BATCH</code> rows and <code>70,000</code> columns. Therefore row <code>i</code> column <code>j</code> is the cosine similarity between test item <code>i</code> and test item <code>j</code>. Lastly to compute cosine distance we take 1 minus cosine similarity</p>
<p><img src="https://www.ccom.ucsd.edu/~cdeotte/Kaggle/img4_5_11_21.png" alt="image"></p>
<p><a href="https://www.kaggle.com/jacob34" target="_blank">@jacob34</a> Correct, we don't actually know why the stage 2 model does what it does. We just know that it maximizes F1 score by including more potential matches or including less. (I just made up the example of clothing and furniture to give the story a possible context).</p>
<p>My second Graph Analysis Post Process is when <code>product A</code> has matches that are part of a <strong>cluster</strong> but <code>product A</code> is not part of the <strong>cluster</strong> itself. For example, imagine there is a <strong>cluster</strong> of size 5 and only 3 are labeled as matches with <code>product A</code> and 2 are not labeled. There are 3 rules: if 70% or more of the cluster is labeled, then move the entire cluster into match with <code>product A</code>. If less than 40% of the cluster is labeled, then remove the entire cluster from match with <code>product A</code>. If between 40% and 70% is matched, then leave it as is.</p>
<p>yes <a href="https://www.kaggle.com/allohvk" target="_blank">@allohvk</a> , i remove false positives in my first Graph Analysis Post Process. The logic is as follows. Define a <strong>cluster</strong> as a group of products that have nearly identical images or nearly identical titles. </p>
<p>Imagine that we are finding matches for <code>product A</code> and <code>product A</code> is part of a cluster of size 2, 3, 4, 5, or 6 (but not 7 or larger). Say that <code>product A</code> original matches are <code>B, C, D, E, F and G</code>. Say that <code>product A</code> is part of a cluster of size 3 with <code>B and C</code>. We look at <code>B</code> and <code>C</code>'s matches to determine if we should remove any of <code>D, E, F, G</code> from <code>A</code> matches. If either <code>B</code> or <code>C</code> does not include any of <code>D, E, F, G</code> as matches then remove them from <code>product A</code> matches.</p>
<p>Clusters of size 2, 3, 4, 5, 6 mean that <code>product A</code> has 1, 2, 3, 4, 5 members in the cluster besides itself. We require that 1, 2, 2, 3, 4 of these members contain the original matches of <code>A</code>; otherwise we remove them from being matches with <code>product A</code>.</p>
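<p>A sketch of how that rule could be coded, purely as an illustration of the description above; <code>matches</code> and <code>cluster_of</code> are assumed to be precomputed dicts, and the vote requirements just mirror the 1, 2, 2, 3, 4 numbers mentioned, not the actual implementation.</p>
<pre><code># required agreeing cluster members (besides product A), keyed by cluster size 2..6
REQUIRED_VOTES = {2: 1, 3: 2, 4: 2, 5: 3, 6: 4}

def refine_matches(a, matches, cluster_of):
    """matches: dict product -> set of matched products.
    cluster_of: dict product -> set of products with near-identical image/title (incl. itself)."""
    cluster = cluster_of.get(a, {a})
    if len(cluster) not in REQUIRED_VOTES:
        return matches[a]                      # rule only applied to clusters of size 2..6
    peers = cluster - {a}
    need = REQUIRED_VOTES[len(cluster)]
    kept = set()
    for cand in matches[a]:
        if cand in cluster:
            kept.add(cand)                     # cluster members themselves stay matched
            continue
        votes = sum(cand in matches[p] for p in peers)
        if votes >= need:
            kept.add(cand)                     # enough cluster members agree -> keep
    return kept
</code></pre>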
<p><a href="https://www.kaggle.com/soumya5891" target="_blank">@soumya5891</a> haha, so true. I could probably have trained an XGB model to determine the decision boundary, but it was fun looking at plots of probabilities and computing the equation for the decision boundary myself. In that way my model and i worked together 😃</p>
<p>Thanks. The best trick in any competition is to analyze your OOF from your CV. After doing lots of OOF EDA, i discovered that my simple boundary wasn't getting as many matches as it could.</p>
<p>Ask yourself questions like "what probability does a product need to be in order to include and increase F1 metric?", next ask yourself, "can i compute probabilities of matches from my OOF?". Afterward you have a matrix of probabilities and it is clear that your decision boundary can be improved.</p>
<p>Thanks <a href="https://www.kaggle.com/allvor" target="_blank">@allvor</a> Congrats on your great 5th place Gold finish!</p>
<p>Perhaps if i have time, i will make a post with pictures. In the meanwhile, i summarize it below in one of my responses to another comment if you scroll down. Both of my Graph Analysis post process take advantage of "clusters" which i define as groups of products that have basically the exact same image or exact same title.</p>
<p>So if you want to find the matches for <code>product A</code> and </p>
<ul>
<li>if <code>product A</code> is part of a "cluster", then you can use information from the other products in A's cluster to help refine A's matches. (Note if A and B have identical images, they may still have different titles. If A and B have identical titles they may have different images. So using cluster members helps).</li>
<li>if <code>product A</code> is not a part of a "cluster" but matches with products that form a "cluster", you can move the entire cluster into or out of match with <code>product A</code>.</li>
</ul>
<p>The most important thing is having an effective local validation that correlates with LB to experiment ideas. For local validation i made two folds. I split the <code>label_group</code> in half and put half in fold 0 and half in fold 1. The reason that 2 folds is better than 5 folds is that we need the validation subset to be large otherwise it is too easy to find groups since the embeddings will not overlap much with false positives.</p>
<p>Next i use my intuition to try what works. I was positive that using "clusters" can improve CV LB because they provide more information about <code>product A</code> and information helps. My first attempts didn't work but i kept refining it. For example, regarding "if product A is part of a cluster", that only worked when cluster size was 2, 3, 4, 5, 6. (and then i require that 1, 2, 2, 3, 4 cluster members have the same matches as product A) So the original code which ran on all cluster sizes and all members didn't work well.</p>
<p>So if your post process doesn't work and if you believe it in, don't give up. Refine it and change when it is applied. The post process of "if product A is part of a cluster then blah blah blah" was huge. If i remember correctly it may have boosted CV 0.010+ and LB 0.005+ after i refined it to execute on clusters 2,3,4,5,6 with requirement 1,2,2,3,4 members.</p>
<p>Congrats! well done</p>
<p>Thanks <a href="https://www.kaggle.com/zlanan" target="_blank">@zlanan</a> </p>
<p>Congratulations Moein, great job!</p>
<p>Yes. To that notebook, i added XLM-RoBERTa text and EfficientNetB0 384x384 image models. Then I add the decision boundary I describe above to achieve the LB 0.750. Then i add 5 post process to achieve LB 0.770.</p>
<p>Thank you Kaggle and Shopee for hosting a wonderful competition. This was one of my favorite competitions. This competition challenged us to build image models, text models, and combine everything in ways never done before.</p>
<h1>RAPIDS TfidfVectorizer</h1>
<p>My solution began with RAPIDS TfidfVectorizer and EfficientNetB0 384x384 images as presented in my public notebook <a href="https://www.kaggle.com/cdeotte/part-2-rapids-tfidfvectorizer-cv-0-700" target="_blank">here</a>. Next I ensembled with XLM-RoBERTa which has multilingual pretraining and I ensembled with EfficientNetB3 512x512 image.</p>
<h1>Best Public Notebook LB 0.730</h1>
<p>My <a href="https://www.kaggle.com/cdeotte/part-2-rapids-tfidfvectorizer-cv-0-700" target="_blank">public notebook</a> scored LB 0.700. Next <a href="https://www.kaggle.com/ragnar123" target="_blank">@ragnar123</a> increased it to LB 0.730 adding ArcFace and cosine distance thresholds <a href="https://www.kaggle.com/ragnar123/shopee-inference-efficientnetb1-tfidfvectorizer" target="_blank">here</a>. Public notebooks make decisions as follows. If two products have image cosine distance less than 0.3 or text cosine distance less than 0.17, they are a match:</p>
<p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Mar-2021/simple2.png" alt="image"></p>
<h1>Better Decision Boundary LB 0.750</h1>
<p>Note how the plot above shows a simple decision boundary (of constant functions). By using our CV, we can calculate the probability that two products are a match based on their image cosine distance and text cosine distance and make a decision boundary using piecewise linear functions:</p>
<p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Mar-2021/match.png" alt="image"></p>
<p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Mar-2021/nonlinear2.png" alt="image"></p>
<h2>Add Matches With Middle Image and Text Similarity</h2>
<p>From the plot above, we see that we should add more matches than the public notebooks. For example if two products have <code>cosine image distance = 0.4</code> and <code>cosine text distance = 0.4</code> then we should consider them a match. These two products were not considered a match with the simple rule "image < 0.3 or text < 0.17".</p>
<h2>Sample Code</h2>
<p>In order to find these matches, we must compute image and text cosine distances simultaneously.</p>
<pre><code># NORMALIZE IMAGE EMBEDDINGS
s = np.sqrt(np.sum(np.multiply(image_embeddings,image_embeddings),axis=1))
image_embeddings = image_embeddings / np.expand_dims(s,axis=-1)
# NORMALIZE TEXT EMBEDDINGS
s = np.sqrt(np.sum(np.multiply(text_embeddings,text_embeddings),axis=1))
text_embeddings = text_embeddings / np.expand_dims(s,axis=-1)
pids = []
all_id = df.posting_id.values
# COMPUTE DISTANCES IN BATCHES
CT = int(np.ceil(len(df)/BATCH))
for k in range(CT):
a = k*BATCH
b = (k+1)*BATCH
b = min(len(df),b)
    # COMPUTE CHUNK OF DISTANCES (once per chunk)
    img = 1-image_embeddings.dot(image_embeddings[a:b,].T).T
    txt = 1-text_embeddings.dot(text_embeddings[a:b,].T).T
    for j in range(b-a):
# DECISION BOUNDARY
idx1 = np.where(img[j,] <0.3)[0]
idx2 = np.where(txt[j,]<0.17)[0]
idx3 = np.where( 0.62*img[j,] +txt[j,] <0.73)[0]
idx = np.concatenate([idx1,idx2,idx3])
idx = np.unique(idx)
pids.append( all_id[idx] )
# PREDICTIONS
df['matches'] = pids
</code></pre>
<h1>Remove False Negatives and False Positives LB 0.770</h1>
<p>The above decision boundary increases our CV LB to 0.750. To increase our LB to 0.770, we must do five additional techniques to remove false negatives and false positives (which increase metric F1 score). I will explain these further in part 2 of my solution.</p>
<ul>
<li>If a product has 0 matches, slowly increase boundary to find 1 match. (Remove false negative).</li>
<li>If a product has matches with a product of same image and/or same text, analyze all of those products' matches and then remove matches from current product if they don't appear often in the neighbors' neighbors. (Remove false positive via Graph analysis).</li>
<li>If a product has a match and that match is part of a group with same image and/or same text, then decide to include all of that cluster or remove all of that cluster. (Remove false positive and false negative via Graph analysis).</li>
<li>Build a stage 2 model that determines if we should increase threshold or decrease threshold of current boundary based on what product type we are looking at. (Stacking).</li>
<li>If a match has a different unit (50 ml) than the current product (100 ml), remove that match. (Remove false positive).</li>
</ul>
<p>Yes, <a href="https://www.kaggle.com/allohvk" target="_blank">@allohvk</a> also talked about it <a href="https://www.kaggle.com/c/shopee-product-matching/discussion/229495" target="_blank">here</a>. If you display the false positives from your CV OOF and inspect them by eye, this is the first pattern that you see.</p>
<p>Thanks. This boosted CV LB a lot. Maybe 0.005 i forget exactly.</p>
<p>That's the equation of the sloped line (boundary between blue and yellow) from this plot (each block is 0.1 by 0.1). Note that this is the 50% probability plot calculated from local validation. For CV, the line from the 30% plot worked best. I tried different percentage plots on public LB until i found the best.<br>
<img src="https://www.ccom.ucsd.edu/~cdeotte/Kaggle/nonlinear2.png" alt="image"></p> |
Shopee - Price Match Guarantee | 15th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Shopee - Price Match Guarantee
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you. I got around +0.019 with post-processing compared to naive-concat.<br>
mean : +0.005<br>
mean+dba : +0.015<br>
mean+dba+graph : +0.019</p>
<blockquote>
<p>weighted(logspace)</p>
</blockquote>
<p>This refers to NumPy's <code>logspace</code> function:</p>
<blockquote>
<p>np.logspace(0,-1.5,3)<br>
array([1. , 0.17782794, 0.03162278])</p>
</blockquote>
<p>new_X = X*1 + Y*0.177 + Z*0.031</p>
<h2>Summary</h2>
<ul>
<li>Image : CNN+GeM+Arcface</li>
<li>Title : TF-IDF and finetuned LaBSE</li>
<li>Post processing: Cross-mean and DBA/QE</li>
<li>Merge Image/Title embeddings as graph</li>
</ul>
<p><img src="https://i.imgur.com/aWV0Jn5.png" alt=""></p>
<h2>Image</h2>
<ul>
<li>resnest101</li>
<li>resnest200</li>
<li>eca_nfnet_l1</li>
</ul>
<p>Trained with Arcface and GeM. Input size is 512x512. <br>
Usual augmentations (LR flip, cutout, brightness, shift-scale-rotate, etc.) by albumentations. <br>
512 dimensions for each model, concatenated to produce a 1536-dimensional vector. </p>
<h2>Title</h2>
<ul>
<li><p>Preprocessing </p>
<ul>
<li>unicode handling</li>
<li>tolower</li>
<li>Unit cleaning <br>
250 gram, 250gr, 250gram and 250g mean almost the same thing, <br>
so I change them all to "250gram" as preprocessing. <br>
I did the same for dozens of other units (meter, yard, ml, kg, pcs, lembar, …) </li></ul></li>
<li><p>TF-IDF <br>
Fit and transform on the test dataset. </p></li>
<li><p>LaBSE <br>
Finetuned with Arcface and GeM. <br>
Augmentation by EDA(Easy Data Augmentation)-like method. </p></li>
</ul>
<p>Finally concat TF-IDF and BERT vector for title embedding. </p>
<h2>Post Processing</h2>
<ul>
<li><p>Cross-mean embedding <br>
(I don't know if there is a proper term for this.) <br>
If there are products (e.g. A, B and C) which have identical images, they are the same product (precision > 99.9%). <br>
I replace each *title* embedding with the mean of their title embeddings: (A+B+C)/3. <br>
Same procedure for image embeddings, averaged over items with the same title (see the sketch after this list).</p></li>
<li><p>DBA/QE<br>
For image X, I get its 3 nearest neighbors (e.g. X, Y, Z) by image embedding, then replace X's image embedding with a weighted (logspace) sum of (X, Y, Z). <br>
Same for title.</p></li>
</ul>
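<p>A minimal NumPy sketch of the cross-mean step, assuming items with identical images have already been grouped (e.g. by phash or exact image match); variable names are mine and this is not the author's code.</p>
<pre><code>import numpy as np
from sklearn.preprocessing import normalize

def cross_mean(title_emb, image_group_id):
    """Replace each title embedding by the mean title embedding of all items that
    share an identical image (same group id). The symmetric step averages image
    embeddings over items with the same title."""
    title_emb = np.asarray(title_emb, dtype=np.float32)
    out = title_emb.copy()
    groups = {}
    for i, g in enumerate(image_group_id):
        groups.setdefault(g, []).append(i)
    for idx in groups.values():
        if len(idx) > 1:
            out[idx] = title_emb[idx].mean(axis=0)   # broadcast group mean to members
    return normalize(out, axis=1)
</code></pre>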
<h2>Graph merge</h2>
<p>Now we have two embeddings: image and title. <br>
I merge them as a graph. </p>
<p>For each image or title embedding, I get the 50 nearest neighbors and their euclidean distances. <br>
One node represents one item; the edge weight is their euclidean distance. <br>
Then we can merge the image and title graphs. If an edge is duplicated, pick the smaller (nearer) one. </p>
<p>I employed some kind of <a href="https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm" target="_blank">Dijkstra's algorithm</a> to form clusters. <br>
I chose the final threshold from the LB score. </p>
<p><img src="https://i.imgur.com/4OLmNIc.png" alt=""><br>
<img src="https://i.imgur.com/Fp2gsrD.png" alt=""></p>
<h2>Things that didn't work</h2>
<ul>
<li>Pseudo label: works pretty well in CV, but I can't do this within 2 hours. </li>
<li>uSIF, fasttext, glove</li>
<li>PorterStemmer, LancasterStemmer</li>
<li>Spell correction</li>
<li>CosFace, AdaCos</li>
<li>OCR: the quality of the produced titles was not good</li>
</ul>
<p>Thank you !<br>
GeM stands for "generalized mean pooling"</p> |
Shopee - Price Match Guarantee | 19 place. Voting and similarity chain | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Shopee - Price Match Guarantee
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I hope for gold ))</p>
<h1>Models</h1>
<p>img: eca_nfnet_l0 + eff_b5<br>
txt: tfidf + bert (paraphrase multilingual)</p>
<h1>Tricks and ideas:</h1>
<ol>
<li><p>Voting -- Image_score and text_score averaging <strong>+0.011</strong> (.739->.750)<br>
For each product we create a null array (dictionary) and add to its cells the results of the NN (nearest neighbor) search as 1-cosine_nn_dist (for tfidf, the cosine distance). Normalize by the number of models for text and for images (see the sketch after this list).</p></li>
<li><p>Always search for a pair <strong>+0.005</strong> (.734->.739)</p></li>
<li><p>Our own models for images <strong>+0.02</strong>. A different loss (focal loss). More augmentations.</p></li>
<li><p>Text cleaning <strong>+0.01</strong></p></li>
<li><p>Subtracting 1 in voting if postfixes (gr, ml, pa) do not match. <strong>+0.02</strong> <br>
5.1. spf-type prefixes<br>
5.2. Synonyms of postfixes (picture 1). And creating glued pairs of number-postfixes.<br>
5.3. Checking if postfixes have splitting power<br>
The postfixes and prefixes were collected from the training sample: all text sequences written together with numbers, e.g. 30meter, spf50</p></li>
<li><p>Similarity chain (picture 2) <strong>+0.02</strong><br>
If a product is matched with high confidence, then we also look for products similar to both the query product and the newly matched product </p></li>
</ol>
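<p>A small sketch of how the score voting could be implemented; the neighbour-list format, the dense score matrix and the normalisation are my assumptions, kept simple for illustration.</p>
<pre><code>import numpy as np

def vote_scores(n_items, model_neighbors, n_img_models, n_txt_models):
    """model_neighbors: list of (kind, {query_idx: [(neighbor_idx, score), ...]}),
    where kind is 'img' or 'txt' and score is e.g. 1 - cosine_nn_dist.
    A dense matrix is used only for clarity; a sparse dict scales better."""
    votes = np.zeros((n_items, n_items), dtype=np.float32)
    for kind, neigh in model_neighbors:
        norm = n_img_models if kind == 'img' else n_txt_models
        for q, pairs in neigh.items():
            for j, score in pairs:
                votes[q, j] += score / norm
    return votes   # threshold this matrix, always keeping the best pair per row
</code></pre>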
<h1>Not worked</h1>
<ol>
<li>Learning TFIDF coefficients with cosine loss<br>
I wanted to train this parametrization on the test sample, but didn't have time to</li>
<li>Segmentation/detection. Segmented the image with saliency detection / background removal / matting.<br>
2.1 I multiplied the obtained mask by the last feature map.<br>
Found a rectangular area, applied global pooling to it and got a lot of scene fragments, which I merged with the tagged image into the NearestNeighbor search</li>
<li>Extra Attention after the last layer. ECA, self-supervised</li>
</ol>
<h2>Visualisation</h2>
<p>first -- query, green -- matched, red -- mismatched, blue -- not found/matched<br>
<img src="https://i.imgur.com/lYGYO9u.jpg" alt=""><br>
picture 1. Checking postfixes like gr, ml, pa</p>
<p><img src="https://i.imgur.com/tg5qNm6.jpg" alt=""><br>
picture 2. Similarity chain. The last two images were not found, but they are similar to a matched product</p>
Shopee - Price Match Guarantee | 1st Place Solution - From Embeddings to Matches | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Shopee - Price Match Guarantee
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Because of batchnorm inside the model, some reported it is preferred to have batch size > 16 ( <a href="https://arxiv.org/pdf/1803.08494.pdf" target="_blank">https://arxiv.org/pdf/1803.08494.pdf</a> ). We just set batch size to maximum number such that it doesn't exceed GPU memory.</p>
<p>Yes <a href="https://www.kaggle.com/ilu000" target="_blank">@ilu000</a> is almost right. But we set 'min2 threshold', usually a large number(~0.8). When we process second most similar object (besides self match), there are 2 cases.</p>
<ul>
<li>If the distance is less than 'min2 threshold', we accept it. (<strong>common case</strong>)</li>
<li>Otherwise, we reject it. (<strong>uncommon case</strong>)</li>
</ul>
<blockquote>
<p>for the INB pipeline, the embeddings that you are using is the concatenation between image and text, right?</p>
</blockquote>
<p>Yes you're right! I'll add it to the post. Thanks.</p>
<blockquote>
<p>for (matches, similarity) pair is the union of combined, images, text using the similarities in this priority?</p>
</blockquote>
<p>Yes</p>
<p>Thanks!</p>
<p>we set 'min2 threshold', usually a large number(~0.8). When we process second most similar object (besides self match), there are 2 cases.</p>
<ul>
<li>If the distance is less than 'min2 threshold', we accept it. (common case)</li>
<li>Otherwise, we reject it. (uncommon case)</li>
</ul>
<p>We use fold0 of Group5Fold by label_group</p>
<ol>
<li>I trained one eca_nfnet_l1 with my training code, and Limerobot trained one with his training code. Limerobot used an increasing margin, while I used a fixed margin. Also, specific configurations like the optimizer and scheduler are different.</li>
<li>"concat unnormalized embeddings->normalize" VS "normalize each embedding->concat". The latter is more natural, since the text model and image model embeddings are normalized individually in the training pipeline, and it indeed worked better.</li>
<li>We found the optimal settings with the validation set, then just replaced the trainset with trainset+validset and trained.</li>
<li>Yes</li>
</ol>
<p>eca_nfnet_l1: 112, xlm-roberta-large: 152, when resolution=640, max_seq_len=118</p>
<p>Congratulations to all the winners, and thanks to the hosts for arranging this interesting dataset and Kaggle for arranging this competition! Also, great thanks to my teammate <a href="https://www.kaggle.com/limerobot" target="_blank">@limerobot</a> who always comes up with brilliant ideas and never gives up. I learn a ton from him.</p>
<p>The title is 'From Embeddings to Matches', since we experienced major CV/LB improvements from <em>properly utilizing image/text embeddings to search for matches</em>. On the other hand, individual model improvements helped less; that is, 'from input to embedding' mattered less after we had decent models. Even when the CV for each image/text model increased quite a lot, the CV for the ensemble and the LB didn't increase much.</p>
<p>I'll first share our history of public leaderboard scores. I'll only write things that improved our score significantly. Note that model/training optimization improved the score here and there, too.</p>
<ul>
<li>baseline: <strong>0.7</strong>(image only), <strong>0.64</strong>(text only)</li>
<li>concat img_emb & txt_emb -> normalize: <strong>0.724</strong></li>
<li><em>min2</em>: <strong>0.743</strong></li>
<li>normalize -> concat img_emb & txt_emb: <strong>0.753</strong></li>
<li>full data training: <strong>0.757</strong></li>
<li><em>union comb, img, txt matches</em> & tune threshold: <strong>0.776</strong></li>
<li><em>INB</em> & add diverse txt models: <strong>0.784</strong></li>
<li><em>use img, txt, comb emb at INB stage 1</em> & tune threshold jointly: <strong>0.793</strong></li>
</ul>
<p>Now I'll go into details of our solution.</p>
<h2>1. Model</h2>
<p>We used two <code>eca_nfnet_l1</code>s from <code>timm</code> for image encoders, and <code>xlm-roberta-large</code>,<code>xlm-roberta-base</code>, <code>cahya/bert-base-indonesian-1.5G</code>, <code>indobenchmark/indobert-large-p1</code>, <code>bert-base-multilingual-uncased</code> from <code>huggingface</code> for text encoders.</p>
<p>We used <code>ArcFace</code> to train the model. After pooling from image/text encoder, we applied batchnormalization and feature-wise normalization to output embedding. Following the <code>ArcFace</code> paper, the normalized embedding is multiplied with normalized weight matrix to produce cosine, then we apply arc margin and softmax.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1796795%2F3284376a5d6d7959aabcc1fd5a64a2fe%2F1.JPG?generation=1657687883332172&alt=media" alt="model"></p>
<h2>2. Tuning ArcFace</h2>
<p>It was the first time for both of us to use ArcFace, so at first we had a hard time tuning it. A sufficiently large margin was important for the quality of the embedding, but when we increased it, the model suffered from convergence issues. We found several ways to overcome this issue, and finally applied some of the options below. </p>
<ul>
<li>increase margin gradually while training</li>
<li>use large warmup steps</li>
<li>use larger learning rate for cosinehead</li>
<li>use gradient clipping</li>
</ul>
<p>Margin of 0.8~1.0 was optimal for image model, and 0.6~0.8 for text model. When using the increase-margin approach, we start from a margin of 0.2 and increase it to 1.0 for image model and 0.8 for text model.</p>
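<p>The gradual margin increase can be as simple as a linear ramp over epochs; the sketch below assumes the head exposes a mutable <code>margin</code> attribute like the one above, and the schedule shape is my guess rather than the team's exact one.</p>
<pre><code>def margin_at_epoch(epoch, n_epochs, start=0.2, end=1.0):
    """Linearly ramp the ArcFace margin from `start` to `end` over training."""
    t = min(epoch / max(n_epochs - 1, 1), 1.0)
    return start + t * (end - start)

# inside the training loop (illustrative):
# head.margin = margin_at_epoch(epoch, n_epochs, start=0.2, end=1.0)  # end=0.8 for text models
</code></pre>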
<p>Also, we tried class-size-adaptive margin, which was introduced in the google landmark recognition competition solution (<a href="https://arxiv.org/pdf/2010.05350.pdf)" target="_blank">https://arxiv.org/pdf/2010.05350.pdf)</a>. It is best when margin is set to <code>class_size^-0.1</code> for image model and <code>class_size^-0.2</code> for text model. However, the improvement was subtle since the class was less imbalanced than in google landmark recognition competition.</p>
<p>Using larger learning rate for cosinehead and using gradient clipping not only made the model converge, but also improved CV score a bit.</p>
<p>Adding extra fc layers after global average pooling hurts the model performance, while adding batchnorm before feature-wise normalization improved the score.</p>
<h2>3. Combining Image & Text Matches- concat & union</h2>
<p>We have image and text input in this competition, and it is crucial to utilize and combine these modalities well. These are the approaches that we tried.</p>
<ul>
<li>Produce text matches from text embedding, image matches from image embedding, then union text matches & image matches</li>
<li>Concatenate text embedding & image embedding -> produce comb matches</li>
<li><strong>Union comb matches & text matches & image matches</strong></li>
</ul>
<p>Second method was much better than the first method and the last method was much better than the second method.</p>
<p>We normalize the image embedding and text embedding, then concat them to calculate comb-similarities. Actually, this is the same as calculating a similarity from the image embedding and another similarity from the text embedding, then averaging them. Thus, we can <em>interpret</em> the last method as the image below.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1796795%2F039165ffc0cfbe4a5422482cdee59e41%2F2.JPG?generation=1657687989584979&alt=media" alt="combining image & text"></p>
<p>So we are accepting items that the image embedding strongly suggests, items that the text embedding strongly suggests, and items that the image & text embeddings both moderately suggest.</p>
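<p>A tiny numerical check of the equivalence mentioned above, i.e. that the cosine similarity of the re-normalized concatenation equals the average of the two individual cosine similarities (random toy embeddings, not competition data):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(size=(4, 8)); txt = rng.normal(size=(4, 6))
img /= np.linalg.norm(img, axis=1, keepdims=True)
txt /= np.linalg.norm(txt, axis=1, keepdims=True)

comb = np.concatenate([img, txt], axis=1) / np.sqrt(2.0)   # re-normalized concat
sim_comb = comb @ comb.T
sim_avg = (img @ img.T + txt @ txt.T) / 2.0
print(np.allclose(sim_comb, sim_avg))                      # True
</code></pre>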
<p>We also tried to jointly train image encoder and text encoder. However, multimodal model was always inferior to training separately, even when we initialized it with trained encoders.</p>
<h2>4. Iterative Neighborhood Blending (INB)</h2>
<p>Apart from combining image and text matches, it was also crucial to properly utilize embeddings to produce matches. We made a nontrivial pipeline for searching matches from embeddings.</p>
<p>Based on QE(Query Expansion) and DBA(DataBase-side feature Augmentation), we created a pipeline called <strong>INB(Iterative Neighborhood Blending)</strong>. Most of the ideas are shared with QE and DBA, but some details are different. INB pipeline consists of these components.</p>
<p>K Nearest Neighbor Search</p>
<p>We use faiss(<a href="https://github.com/facebookresearch/faiss" target="_blank">https://github.com/facebookresearch/faiss</a>) for knn search, and set k=51 (maximum 50 non-self matches + 1 self). We used inner product as similarity metric (the embedding is normalized so it is equivalent to cosine similarity)</p>
<p>Thresholding</p>
<p>We converted cosine similarity to cosine distance (<code>= 1-cosine similarity</code>) for some convenience in implementation, and obtained <code>(matches, distances)</code> pair that satisfies <code>distance < threshold</code>. For each item x, we call this <code>(matches, distances)</code> pair as "neighborhood of x".</p>
<p><strong>Min2</strong></p>
<p>When doing thresholding, we can ensure that there are at least 2 matches per query, since it is guaranteed by the competition description. We reject the second closest match <em>only if</em> the distance is over the min2-threshold, which we set high.</p>
<p><strong>Neighborhood Blending</strong></p>
<p>The intuition for neighborhood blending is straightforward. After knn and thresholding with min2, we obtain the <code>(matches, similarities)</code> pair for each item, so we have a graph where each node is an item and the edge weight is the similarity between two nodes. Only the neighborhoods are connected. That is, nodes that didn't pass the threshold condition and the min2 condition from the query node are disconnected.</p>
<p>We want to use the neighborhood items' information to refine the query item's embedding and make the cluster clearer. In order to do that, we simply weighted-sum the neighborhood embeddings with similarity as weights and add it to the query embedding. So we <strong>blend neighborhood</strong> embeddings. We call it <strong>NB(Neighborhood Blending)</strong>.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1796795%2F6dbfb83750ce5ddafc1323717cad7bb9%2F3.JPG?generation=1657688033992733&alt=media" alt="NB"></p>
<p>Above image illustrates how one step of neighborhood blending is performed on a toy example. Let's look at node A. Its embedding is <code>[-0.588, 0.784, 0.196]</code> and its similarity to node B, C, D is 0.94, 0.93, 0.52 respectively. Red line means two nodes are neighbors, so they are connected. Dashed line means two nodes didn't pass the threshold, so are disconnected. We apply neighborhood blending, then: <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1796795%2F318acb9f2888b661deff2b9b618c1450%2F.PNG?generation=1657688162993501&alt=media" alt="nb equation"><br>
after applying this, hopefully like in the right graph in the image, we get more clustered, refined embeddings.</p>
<p>We can apply NB iteratively. After blending neighborhood for stage1 embeddings, we do knn search & 'thresholding with min2' to get stage2 <code>(matches, similarities)</code>. We apply NB again, to further refine the embeddings. We can iterate until the evaluation metric stops improving. This is where <strong>Iterative</strong> comes from.</p>
<p>Code is quite simple. (you need to have your <code>neighborhood_search</code> function implemented)</p>
<pre><code>import numpy as np
from sklearn.preprocessing import normalize  # assumed: sklearn's row-wise l2 normalize

def blend_neighborhood(emb, match_index_lst, similarities_lst):
new_emb = emb.copy()
for i in range(emb.shape[0]):
cur_emb = emb[match_index_lst[i]]
weights = np.expand_dims(similarities_lst[i], 1)
new_emb[i] = (cur_emb * weights).sum(axis=0)
new_emb = normalize(new_emb, axis=1)
return new_emb
def iterative_neighborhood_blending(emb, threshes):
for thresh in threshes:
match_index_lst, similarities_lst = neighborhood_search(emb, thresh)
emb = blend_neighborhood(emb, match_index_lst, similarities_lst)
return match_index_lst
</code></pre>
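<p>The snippet above assumes a <code>neighborhood_search</code> helper. A possible faiss-based sketch, combining the kNN search, thresholding and min2 rule described earlier, could look like this (k, the min2 threshold value and other details are my assumptions, not the team's exact code):</p>
<pre><code>import faiss
import numpy as np

def neighborhood_search(emb, thresh, k=51, min2_thresh=0.8):
    """emb must be l2-normalized float32; inner product == cosine similarity.
    Keep neighbors with cosine distance < thresh, plus the 2nd closest item
    (min2 rule) unless it is farther than min2_thresh."""
    emb = np.ascontiguousarray(emb, dtype=np.float32)
    index = faiss.IndexFlatIP(emb.shape[1])
    index.add(emb)
    sims, idxs = index.search(emb, k)            # includes the self match at position 0
    match_index_lst, similarities_lst = [], []
    for i in range(emb.shape[0]):
        dist = 1.0 - sims[i]
        keep = dist < thresh
        keep[0] = True                           # always keep the self match
        if keep.sum() < 2 and dist[1] < min2_thresh:
            keep[1] = True                       # min2: accept the 2nd closest if not too far
        match_index_lst.append(idxs[i][keep])
        similarities_lst.append(sims[i][keep])
    return match_index_lst, similarities_lst
</code></pre>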
<hr>
<p>Full INB pipeline is summarized in the below image.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1796795%2Fd3f476fd80843946977558a6923ffca9%2F4.JPG?generation=1657688302455960&alt=media" alt="INB"></p>
<p>We apply 3stage INB.<br>
To get to the details, we first find <code>(matches, similarities)</code> pair from image embedding, text embedding and comb embedding, respectively. Then we union the matches. (for similarities, we use comb similarities, if not exists, image similarities, if not exists, text similarities)<br>
Using joined <code>(matches, similarities)</code> and comb embedding, we do neighborhood blending to obtain stage2 embedding. Then we find <code>(matches, similarities)</code> pair and apply NB to obtain stage3 embedding. We get stage3 matches from this embedding.<br>
We union it with matches found solely by image embedding, and matches found solely by text embedding to get final matches, which we submit.</p>
<p><strong>About Threshold Tuning</strong><br>
Note that we have 10 different thresholds to tune: 3 for stage1 txt, img, comb thresholding, 1 for stage2 thresholding, 1 for stage3 thresholding, 2 for the stage1 img, txt thresholding that directly connects to the final union, and 3 for stage1~3 min2 thresholding. But after we had tuned the other thresholds to a reasonable range, the major score boost came from the <strong>stage2 and stage3 thresholds</strong>. (We got good results when fixing the stage1 comb threshold to 0 - just use min2 from comb.) So <strong>in the end we only had to tune those 2 numbers</strong> against the LB.</p>
<h4>Visualizations of Embeddings before/after INB</h4>
<p><em>2021/05/13 edited</em><br>
We made a notebook that visualizes the effect of INB on embeddings.<br>
<a href="https://www.kaggle.com/harangdev/shopee-embedding-visualizations-before-after-inb" target="_blank">https://www.kaggle.com/harangdev/shopee-embedding-visualizations-before-after-inb</a><br>
Here are some examples<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1796795%2F1d608204843780d923938f1a8a47c207%2Fex2.png?generation=1657688400733163&alt=media" alt=""><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1796795%2Fe70765f46f8466253b8fde64a24eae21%2Fex1.png?generation=1657688461363175&alt=media" alt=""></p>
<h2>5. ETC</h2>
<ul>
<li>for image model, cutmix with probability 0.1 helped</li>
<li>for image augmentation, using only horizontal flip was better</li>
<li>madgrad optimizer (<a href="https://github.com/facebookresearch/madgrad" target="_blank">https://github.com/facebookresearch/madgrad</a>) performed better or similar compared to Adam and SGD</li>
<li>Using full data for training improved performance</li>
<li>Limerobot dug into the triplet loss but he never beat the arcface loss 😿</li>
</ul>
<p>We tried efficientnets, swins, and other resnet variants, but eca_nfnet_l1 performed way better in CV.</p>
<p>1. I actually came up with INB after encountering alphaQE, so between blend_neighborhood and alphaQE the idea is similar, but some details are different.</p>
<ul>
<li>it is applied simultaneously on database and query (query is included in database in this competition, so maybe DBA is closer)</li>
<li>alphaQE is applied with fixed neighborhood size, where we use threshold & min2 threshold for neighborhood generation</li>
</ul>
<p>Additionally, I tried to change alpha value but it turned out that alpha=1 is best in our INB pipeline. Also, iterating improved score.</p>
<p>2. No, after each iteration the distance between close items becomes smaller, and we need to consider that when deciding the threshold. We used <code>[[0(comb), 0.15(img), 0.15(txt)], 0.16, 0.12]</code> for thresholds. 0 for comb means we only used min2 for comb matches in stage1.</p>
<p>Thanks! It was a pleasure to compete with your team :) Yeah, I think there was lot of potential gain over just joining text-image matches in this competition, which can be achieved by DBA, second stage model, etc.</p>
<p>Thanks! Yes, CV increased 0.906->0.925 with INB pipeline. And actually, for toy example, I just made it to help understand the calculation (removing connections like A-C might be bad like you said)</p>
<p>Thanks a lot and congrats for your strong finish! Actually, QE & DBA, which is the base of INB pipeline, already can be interpreted with graph version, and there are several graph based understanding of reranking like in <a href="https://arxiv.org/abs/2012.07620" target="_blank">https://arxiv.org/abs/2012.07620</a> . So credit goes to those authors ;) </p>
<p>Well, I can tell that our CV and LB correlated quite well. CV score of joint multimodal model was worse than 'training image, text separately -> concatenate embeddings' for our case.</p>
<p>10 epochs, and it took about 1h30m</p>
<p>Nice to hear it improved your score. We've experienced CV boost 0.9225->0.9256 by iterating. (LB boost too, of +~0.002) Maybe you should tune the thresholds. First threshold and second threshold are set differently.</p>
<p>The other parts are pretrained, but the cosinehead W is initialized randomly. So it is kind of intuitive to set a larger lr for the cosinehead. Especially for text models, where the learning rate needs to be smaller and the model suffered convergence issues.</p>
<p>Output shape of global average pooling is <code>[batch_size, 1792]</code>.</p> |
Shopee - Price Match Guarantee | 26th Place Solution : Effective Cluster Separation and Neighbour Search | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Shopee - Price Match Guarantee
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> it feels great to be appreciated by you. We tested on both english and indonesian and concluded that there were more number of indonesian examples than english, hence the reverse</p>
<p>Hi all, <br>
Congratulations to all the winners. I have read some top solutions and my mind is blown; however, I feel our solution is as unique as any other. Below are the details of our solution, described in a stepwise manner.</p>
<h2>Model Training</h2>
<p>We trained our models with the same strategy as in all my shared kernels: we took the image and text separately, passed each through its respective architecture, then ArcFace, then softmax to compare with the given labels.</p>
<h2>CV</h2>
<p>We made a single split stratified on labels; one part was used for training, the other for validation. For finding the best threshold and the final CV we used the complete training set, as suggested by Chris in his kernels.</p>
<h2>Inference Strategy</h2>
<p>The main and unique part of our strategy is the inference and how we prepared our submissions.<br>
For some reason I am not able to upload the picture here; please find a simple diagram of our solution <a href="https://drive.google.com/file/d/1EKqkUtvd9tBEPZVeheY6PfN3xnj6Lctw/view?usp=sharing" target="_blank">here</a></p>
<h4>Step 1 : Getting Image Embeddings</h4>
<p>We used two models for getting Image Embeddings :-</p>
<ul>
<li>NFnet L0</li>
<li>Efficient Net B0</li>
</ul>
<p>LB : 0.731 together</p>
<p>Both were trained on 512 IMG SIZE, but the trick used here was that we ran inference on<b> 544 IMG SIZE and got a good boost</b>. This idea of running on a slightly increased image size came from <a href="https://www.kaggle.com/philippsinger" target="_blank">@philippsinger</a>'s Google Landmark winning solution.</p>
<h4>Step 2 : Getting Text Embeddings</h4>
<p>We used two models for text as well :-</p>
<ul>
<li>Indonesian DistillBERT</li>
<li>Paraphrase Mutlingual SBERT Model</li>
</ul>
<p>LB : 0.644 together</p>
<p>Both these models were trained on translated Indonesian data; the trick used during inference was that <b>we converted the main English words in test to Indonesian </b> via the dictionary shared by <a href="https://www.kaggle.com/rohit" target="_blank">@rohit</a> singh</p>
<h4>Step 3: The Preprocessing Layers : Effective Cluster Separation</h4>
<p>LB from 0.736 --- 0.752</p>
<p>When we were about to quit this competition, my teammate <a href="https://www.kaggle.com/nischaydnk" target="_blank">@nischaydnk</a> realized that we were not doing well on the LB because simple concatenation of text and image predictions increases the number of predicted values and hence decreases the precision. Thus <a href="https://www.kaggle.com/nischaydnk" target="_blank">@nischaydnk</a> came up with this idea to balance precision and recall.</p>
<p><b>We take the embeddings from each image model and pass them through the pre-processing layer. In the pre-processing layer we take the top 3 neighbours for each row using kNN and multiply them by decreasing weights on a log space, so as to shift these top-3 embeddings into another cluster in the embedding space, while the other embeddings remain the same</b> (we tried many numbers instead of 3, but taking the 3 best neighbours seemed to work best).</p>
<p>We do the same with text embeddings and at the end we have 4 modified embeddings from four models</p>
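<p>A sketch of that pre-processing layer as I read it: replace each embedding by a weighted sum of its top-3 neighbours (itself included) with weights decreasing on a log scale. The use of sklearn's NearestNeighbors and the exact logspace parameters are my assumptions for illustration.</p>
<pre><code>import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import normalize

def logspace_blend(emb, n_neighbors=3):
    """Blend each row with its top-3 cosine neighbours using log-spaced weights."""
    emb = normalize(np.asarray(emb, dtype=np.float32), axis=1)
    nn = NearestNeighbors(n_neighbors=n_neighbors, metric='cosine').fit(emb)
    _, idx = nn.kneighbors(emb)                   # idx[:, 0] is the item itself
    weights = np.logspace(0, -1.5, n_neighbors)   # e.g. 1.0, 0.178, 0.032 (illustrative)
    blended = np.einsum('k,nkd->nd', weights, emb[idx])
    return normalize(blended, axis=1)
</code></pre>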
<h4>Step 4: Post Processing Layer : Neighbourhood Search</h4>
<p>LB : 0.752 -- 0.759</p>
<p>We now have five different models and five different embeddings. We tried a lot of different ensemble strategies that didn't help; then <a href="https://www.kaggle.com/drthrevan" target="_blank">@drthrevan</a> came up with this idea:</p>
<p>We take each model and find 100 neighbours for each posting id, and make a separate dataframe for each model. For example, we take the nfnet model, get 100 neighbours for each posting_id and form a dataframe, so now our dataframe has posting_id, indexes of neighbours and their distances.</p>
<p>Now we take all 5 dataframes. They are not the same, as different models have different neighbours for different posting_ids, so we do an outer join; instead of concatenating the embeddings we concatenate the distances. Now we have our final dataframe, and in it we search for the final predictions using some threshold.</p>
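<p>A toy pandas version of that merge: one dataframe of (posting_id, neighbour, distance) per model, outer-joined on the pair keys so that every candidate proposed by any model survives, with NaN where a model did not propose it. Column names are mine.</p>
<pre><code>import pandas as pd
from functools import reduce

def merge_model_neighbors(model_dfs):
    """model_dfs: list of dataframes with columns ['posting_id', 'neighbor', 'dist'],
    one per model (e.g. its 100 nearest neighbours per posting_id)."""
    renamed = [df.rename(columns={'dist': f'dist_{i}'}) for i, df in enumerate(model_dfs)]
    merged = reduce(lambda a, b: a.merge(b, on=['posting_id', 'neighbor'], how='outer'),
                    renamed)
    return merged   # final predictions come from thresholding the dist_* columns
</code></pre>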
<h4>Step 5 : Final Threshold Trick</h4>
<p>LB : 0.759 -- 0.764 </p>
<p>We divide the final predictions into popular and unpopular posting ids: popular posting ids are the ones having more than 100 matches as predictions, and unpopular posting ids are the ones having fewer than 2 matches.</p>
<p><b>We give lower thresholds to unpopular pairs to increase precision and higher thresholds to popular pairs to increase recall</b></p>
<p>Voila !! that makes our final submission</p>
<p>Thanks for reading I hope you all enjoyed the competition and our solution</p> |
Shopee - Price Match Guarantee | 2nd place solution (matching prediction by GAT & LGB) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Shopee - Price Match Guarantee
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><ol>
<li>No. We make pairs only from the union of the 50 nearest neighbors using image similarity and text similarity.</li>
<li>Our final submission takes around 1h40m.</li>
</ol>
<p>Congrats to you guys too, we were thrilled to the fierce race with you!</p>
<p>We have published our submission code!<br>
You can check from here: <a href="https://www.kaggle.com/lyakaap/2nd-place-solution" target="_blank">https://www.kaggle.com/lyakaap/2nd-place-solution</a></p>
<p>Congrats to all winners and thank the host for such a nice competition!<br>
And I thank my teammate <a href="https://www.kaggle.com/tkm2261" target="_blank">@tkm2261</a> for his great contribution!</p>
<p>Our submission code is available at <a href="https://www.kaggle.com/lyakaap/2nd-place-solution" target="_blank">https://www.kaggle.com/lyakaap/2nd-place-solution</a></p>
<p><img src="https://user-images.githubusercontent.com/27487010/118588589-a3461f00-b7d9-11eb-8373-edd0134441a5.png" alt="pipeline"></p>
<h1>Summary</h1>
<p>This is an interesting competition with multi-modal data.<br>
An overview of our pipeline is: </p>
<ul>
<li>1st stage: Train metric learning models to obtain cosine similarities of image, text, and image+text data</li>
<li>2nd stage: Train “meta” models to classify whether a pair of items belong to the same label group or not.<ul>
<li>Used LightGBM and GAT (Graph Attention Networks)</li></ul></li>
</ul>
<p>We found that ensembling lyakaap's and tkm's models brings a huge improvement. When we team-merged, our scores were 0.767 and 0.765 respectively. Surprisingly, the score after merging was 0.777. Actually, we are not sure why the score improved so much.</p>
<p>We will write our solution details below. Please let us know if you find some possibilities :-).</p>
<h1>Features</h1>
<h3>Image similarity</h3>
<ul>
<li>Cosine similarities of NFNet-F0, ViT embeddings</li>
<li>Loss: CurricularFace (better than ArcFace and others)</li>
<li>Optimizer: SAM (better than Adam, SGD, and others)</li>
<li>Concatenate the similarities like <code>F.normalize(torch.cat([F.normalize(emb1), F.normalize(emb2)], axis=1))</code></li>
</ul>
<h3>Text similarity</h3>
<ul>
<li>Cosine similarities of Indonesian-BERT, Multilingual-BERT, and Paraphrase-XLM embeddings</li>
<li>TF-IDF as in many public kernels</li>
</ul>
<h3>Multimodal (image + text) similarity</h3>
<ul>
<li>Trained model with NFNet-F0 and Indonesian BERT (concatenated at final feature layers)</li>
</ul>
<h3>Graph features</h3>
<ul>
<li>Avg and std of top-K cosine similarities of each item<ul>
<li>K=5, 10, 15, 30, etc</li></ul></li>
<li>Normalized avg to be mean=0 and std=1<ul>
<li>To handle the distribution difference between train and test</li></ul></li>
<li>Pagerank</li>
</ul>
<h3>Misc.</h3>
<ul>
<li>Text length</li>
<li># of word</li>
<li>Levenshtein distance</li>
<li>Image file size</li>
<li>Width and height of image</li>
<li>Query Expansion<ul>
<li>Obtain an augmented embedding which weighted average neighbors.</li>
<li>Concatenate the original and augmented embeddings like <code>F.normalize(torch.cat([F.normalize(orig_emb), F.normalize(qe_emb)], axis=1))</code></li>
<li>It pushed our score by 0.001</li></ul></li>
</ul>
<h1>Graph Attention Networks</h1>
<ul>
<li>Construct a model that based on Graph Attention Networks (GAT)<ul>
<li>Why we choose GAT is its simplicity and customizability</li></ul></li>
<li>Use only 4 features: image/bert/multi-modal/tf-idf similarities</li>
<li>Apply graph attention to "other edges connected to the node connected to the target edge" as a neighborhood</li>
</ul>
<p><img src="https://user-images.githubusercontent.com/27487010/123567967-47ba7880-d7fe-11eb-9624-19fd8fd5d037.png" alt="GAT"></p>
<h1>Graph-Based Post-processing</h1>
<ul>
<li>Recursively remove the edge (item match) with the highest betweenness centrality (see the sketch after this list)</li>
<li>The intuition behind it is<ul>
<li>Label groups should form a clique</li>
<li>If there is a redundant edge, it should bridge two cliques</li>
<li>Such an edge should have a higher betweenness centrality</li>
<li>Recursively remove such an edge until there is no such edge or the connected components become smaller than some threshold</li></ul></li>
<li>It pushed our score by about 0.001</li>
</ul>
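<p>A networkx sketch of that post-processing; when an edge counts as "bridging" and the component-size stopping criterion (at most 50 matches plus self) are my guesses at the details, not the team's actual code.</p>
<pre><code>import networkx as nx

def split_bridged_cliques(graph, max_component_size=51, max_iters=100):
    """graph: undirected nx.Graph of predicted matches. Repeatedly drop the edge with
    the highest edge-betweenness centrality inside oversized components, hoping to
    separate two cliques glued together by one spurious match."""
    g = graph.copy()
    for _ in range(max_iters):
        big = [c for c in nx.connected_components(g) if len(c) > max_component_size]
        if not big:
            break
        for comp in big:
            sub = g.subgraph(comp)
            (u, v), _ = max(nx.edge_betweenness_centrality(sub).items(),
                            key=lambda kv: kv[1])
            g.remove_edge(u, v)
    return g
</code></pre>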
<h1>Performance and Memory Tunings</h1>
<ul>
<li>Isolated all cells with “%%python” IPython magic.<ul>
<li>Communicate each cell by files</li>
<li>Very helpful not to be irritated with memory matters, e.g., pytorch does not lease memory.</li>
<li>It enabled block-base debugging if we cache the files</li></ul></li>
<li>CuDF, cupy, cugraph, and ForestInference<ul>
<li>CuDF and cupy are very efficient and we can utilize GPU memory, which is larger than main memory!</li>
<li>ForestInference is awesome. 40 min inference on CPU became 2 min!</li></ul></li>
</ul>
<h1>Model Training Tips</h1>
<ul>
<li>Sample weighting based on label group size<ul>
<li>It is important to predict smaller label groups for micro-F1 score.</li>
<li>Weighted samples with 1 / (label group size) ** 0.4 (see the sketch after this list)</li></ul></li>
</ul>
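<p>A minimal sketch of this weighting, assuming a pandas dataframe with a label_group column:</p>
<pre><code># Hedged sketch of the label-group-size sample weighting.
import pandas as pd

train = pd.DataFrame({"label_group": [1, 1, 1, 1, 2, 2, 3, 3, 3]})
group_size = train["label_group"].map(train["label_group"].value_counts())
train["sample_weight"] = 1.0 / group_size ** 0.4   # down-weight items from large label groups
</code></pre>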
<h1>Ensembling Methodology</h1>
<ul>
<li>Calculate the best threshold for each model on local data</li>
<li>Subtract its threshold from each model's prediction</li>
<li>Sum up the subtracted predictions over all models</li>
<li>If the summed-up score is >0, we regard the item pair as belonging to the same group</li>
<li>Filter out pairs which do not have a mutual edge (see the sketch after this list)<ul>
<li>If A-B remains but B-A does not, we drop both</li></ul></li>
</ul>
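<p>A small sketch of this rule, assuming each model produces an (N, N) pairwise score matrix and a locally tuned threshold:</p>
<pre><code># Hedged sketch of the threshold-subtracted sum and the mutual-edge filter; data is random for illustration.
import numpy as np

def ensemble_matches(score_matrices, thresholds):
    # score_matrices: list of (N, N) match-score arrays, one per model; thresholds: their local best thresholds
    summed = sum(s - t for s, t in zip(score_matrices, thresholds))
    match = summed > 0                      # a pair matches if the summed, threshold-subtracted score is positive
    match = match & match.T                 # keep only mutual edges (A-B and B-A must both remain)
    np.fill_diagonal(match, True)           # an item always matches itself
    return match

rng = np.random.default_rng(0)
print(ensemble_matches([rng.random((5, 5)) for _ in range(3)], thresholds=[0.5, 0.6, 0.55]))
</code></pre>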
<h1>Not Worked</h1>
<ul>
<li>Many graph-based features</li>
<li>Eigenvector centrality and Jaccard similarity</li>
<li>End-to-end model (extract embeddings and predict match or non-match at the same time)</li>
<li>Local feature matching</li>
<li><a href="https://www.kaggle.com/c/instacart-market-basket-analysis/discussion/37221" target="_blank">https://www.kaggle.com/c/instacart-market-basket-analysis/discussion/37221</a> </li>
</ul>
<h1>Score Timeline</h1>
<ul>
<li>After team-merge: 0.777<ul>
<li>TF-IDF and image embeddings</li>
<li>Simple LightGBM model on 2nd stage</li></ul></li>
<li>+ Indonesian-BERT similarities: 0.780</li>
<li>+ multi-modal similarities: 0.783</li>
<li>+ multilingual-BERT and Paraphrase-XLM and misc: 0.788</li>
<li>+ pagerank feature: 0.789</li>
<li>+ Query Expansion: 0.790</li>
<li>+ GAT model for ensembling: 0.792</li>
</ul> |
Shopee - Price Match Guarantee | 3rd Place Solution (Triplet loss, Boosting, Clustering) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Shopee - Price Match Guarantee
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yes, I'm going to make a simplified kernel representing my solution, so you will see all the pipelines there. The speedup in kernels was very high, about 5x. Locally it was lower, about 1.5x compared to a 10-thread loader.</p>
<p>I confirm that: exactly 0.764 on the public LB. Private was lower, 0.753. The only post-processing I had here: if an object has no candidate above the threshold, add its single closest candidate point.</p>
<p>Hi everyone, here is the 3rd place solution. First of all, thanks to Kaggle and Shopee for a great competition. There were a lot of ways to improve and I believe no one discovered all of them, so there was no "perfect" solution. My solution consists of 3 main parts.</p>
<h2>General approach and fast baseline</h2>
<p>Let's build different (non-trainable) product representations: EfficientNet embeddings, BERTs, TfIdf, and anything else you think should perform well, and search for nearest neighbours within a fixed radius; the threshold is selected by CV, commonly the optimal F1-score threshold plus some small margin. After that, for each object we take the union of candidate neighbours over all embeddings and build a sample of object pairs with a binary target - whether the pair is a real duplicate or not. In my case the train sample was about 3M point pairs with a 4% duplicate rate. On that sample I build a simple gradient boosting model (catboost in my case) using the following features, calculated separately for each embedding:</p>
<ul>
<li>pairwise distances (cosine, euclidean and etc)</li>
<li>density around both points (frequency of points within different radii)</li>
<li>point ranks</li>
</ul>
<p>Finally I have about 500 features. After the model is trained, I just threshold candidate pairs by the predicted probability of being a duplicate, which is tuned by CV (for test I just tuned it on the public LB). This scheme gives around a 0.76+ LB score and takes about 30 min of inference using these embeddings: Efficient Net B2, Indonesian BERT cahya/bert-base-indonesian-522M, multilingual BERT setu4993/LaBSE, multiple TfIdfs with different tokenizers, and CLIP.</p>
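<p>A heavily simplified sketch of this candidate-generation and pair-feature step (only a handful of the ~500 features, and illustrative parameters):</p>
<pre><code># Hedged, heavily simplified sketch of the candidate + pair-feature pipeline; parameters are illustrative.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def build_pairs(emb, radius=0.3, max_neighbors=50):
    # emb: (N, D) embeddings of one representation (CNN / BERT / TF-IDF / CLIP ...)
    nn = NearestNeighbors(n_neighbors=min(max_neighbors, len(emb)), metric="cosine").fit(emb)
    dist, idx = nn.kneighbors(emb)
    pairs, feats = [], []
    for i in range(len(emb)):
        density = int((dist[i] < radius).sum())          # local density feature around the point
        for rank, (d, j) in enumerate(zip(dist[i], idx[i])):
            if i != j and d < radius:
                pairs.append((i, j))
                feats.append([d, rank, density])         # distance, rank, density
    return np.array(pairs), np.array(feats)

# Such feature blocks are built per embedding type and concatenated column-wise;
# the binary target is 1 if the pair shares a label_group, a GBM (CatBoost in the
# write-up) is trained on it, and candidates are thresholded by predicted probability.
</code></pre>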
<h2>Finetune embeddings using metric learning</h2>
<p>I also added finetuned Efficient Net, ViT (DINO) and 2 BERT (cahya/bert-base-indonesian-522M, setu4993/LaBSE) models using the same approach:</p>
<ul>
<li>triplet loss (margin=0.1)</li>
<li>each epoch I create duplicate pairs from the label_groups; in each batch one pair per label_group is inserted. A batch size of 128 was used, so I get 64 duplicate pairs per batch, and I compare each of them with 5 random points</li>
<li>5-fold CV; when I calculated the validation score, I searched for candidates not only in the validation fold but in the full train sample. This scheme reduces the CV/LB gap and makes them more correlated. My CV (image) models have a CV score around 0.71, the BERTs around 0.67. This approach is also good for constructing out-of-fold predictions, which is important to fit 2nd-level models (the GBM described above in my case). For the test set I ran inference with all 5 models from CV. In hindsight, I think the better way is to fit one model on the full data with no validation for faster inference, but I had no time to do this for all the models (I did it only for the DINO model, which has the longest inference). </li>
</ul>
<p>I also tried the ArcFace approach, but the models scored much lower on CV, which was a surprise for me, because ArcFace was the common way to finetune in this competition. Maybe I did something wrong here .. Adding the finetuned models to the general scheme gives me a 0.781 LB score.</p>
<h2>Postprocessing (clustering)</h2>
<p>I perform clustering, not over embeddings but over pairwise distances. The distance (better to say, similarity) for me was the GBM's estimated probability of being a duplicate. If there was no prediction for a product pair, I assume the probability of being a duplicate is 0. I took the idea of agglomerative clustering and slightly modified it for the current task. We start from each single point as a cluster and merge them until the average cluster size becomes equal to a threshold (for the train sample the average cluster size equals 2.8; for the LB the best size was 2.6). If after clustering a point stays single, merge it into the nearest cluster. This algorithm boosted me to 0.79 after a lot of parameter search.</p>
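<p>A rough sketch of this clustering with a union-find structure; the target average cluster size is the only parameter, and merging leftover singletons into their nearest cluster is omitted for brevity:</p>
<pre><code># Hedged sketch: merge the most probable duplicate pairs until the average cluster size reaches a target.
import numpy as np

def cluster_by_pair_probability(n_items, pairs, probs, target_avg_size=2.6):
    parent = list(range(n_items))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    n_clusters = n_items
    for k in np.argsort(-np.asarray(probs)):             # most likely duplicates first
        if n_items / n_clusters >= target_avg_size:      # stop once the average cluster size is reached
            break
        a, b = find(pairs[k][0]), find(pairs[k][1])
        if a != b:
            parent[a] = b
            n_clusters -= 1
    # (merging leftover single points into their nearest cluster is omitted here)
    return [find(i) for i in range(n_items)]             # cluster id per item

labels = cluster_by_pair_probability(6, [(0, 1), (1, 2), (3, 4), (4, 5)], [0.9, 0.2, 0.8, 0.7])
</code></pre>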
<p>A few words about optimization: all my models were trained and inferred in half precision (torch AMP). For the image models I used NVIDIA DALI for reading and resizing images, and for GBM model inference I used Rapids ForestInferenceLibrary. Without all of this it would have been impossible to run inference with all those models in 2 hours.</p>
<p>I was waiting for a call with the host to ask if they minded me sharing parts of the code, and they said it was OK for some parts (not the full solution). So here it is <a href="https://www.kaggle.com/btbpanda/parts-of-3rd-solution-with-lightautoml" target="_blank">https://www.kaggle.com/btbpanda/parts-of-3rd-solution-with-lightautoml</a></p>
<p>Thanks :) </p>
<p>Thanks for your feedback! Speaking about the density features - yes, I understood the problem that you describe, and I had some ideas about how to fix it, for example dividing the neighbour counts by the total number of observations. But I left these features as is, for two reasons: 1) it still works without any corrections; 2) this fix can break the models in the common case when an object has only 1 or 2 neighbours.</p>
<p>This is the first time I've used any of the Rapids utils, and the idea of FIL is great, but it looks like it's quite a new feature, because I spent a lot of time trying to make it work properly. Actually, the main issue is that it is built on the treelite format, but it fails with the latest versions of treelite installed (1.0 and later).</p>
<p>I mine hard examples in the following way: I sort all candidates for an object by distance and take candidates that 1) are above the median and 2) have nonzero loss. From that list I randomly select 5, or fewer if there were fewer examples satisfying these conditions.</p>
<p>I used a single V100 32GB GPU to train all my models (image size 300, max sequence length 160 for the BERTs, and half precision). 32 GB was enough for that, but if you have 16 GB, I believe it's also possible to reduce the batch size to 64 and get models of similar quality.</p>
Shopee - Price Match Guarantee | 41th Place : Approach | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Shopee - Price Match Guarantee
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks a lot , it seems I have got another medal</p>
<p>A big thanks to the organizers for organizing such a nice competition. I like these types of competitions very much, where the resource requirements are low (as I have only Kaggle Kernels and Colab 😄) and many tricks can be applied.</p>
<p>Also congratulations to all winners, participants, and contributors in this competition. All the solutions I have read are so sophisticated that I was wondering whether I should post anything at all.</p>
<p>This competition is unique of its kind in that both image and text processing can be applied, along with some post-processing tricks, and a good validation scheme has to be established.</p>
<p>Here is my simple one</p>
<h1>Approach</h1>
<h2>Fold Assignment</h2>
<p>GroupKFold with the count of each group stratified, so the label-count distribution is almost the same between train and valid. Five folds were assigned, of which four folds were used for training and one fold for validation (80%-20%).</p>
<h2>Image Processing</h2>
<p>EfficientNet-B1 : 640x640 - only 1 fold<br>
I tried another fold with EfficientNet-B5 : 512x512, but the increase in LB was 0.001 (negligible), and only on private; the public LB stayed the same.<br>
Checkpointing used a custom callback that calculates F1 on the validation set after doing a (cosine) threshold search after each epoch.</p>
<h2>Text processing</h2>
<p>Tfidf Vectorization </p>
<h2>Threshold</h2>
<p>The threshold part was a bit tricky, as it has to be determined on the image-text combination. If I calculate the best threshold individually for image and text and then marry the results, it's not optimal. In my case keeping the threshold at 0.45 (cosine) in both cases was optimal on validation. In the submission it was just 0.2 lower than the validation threshold (0.25). This has to be established after doing a few submissions, and after that it correlates properly with validation.</p>
<h2>Combination</h2>
<p>Concatenate Image prediction and text prediction</p>
<h2>Post Processing -1</h2>
<p>Remove the FPs based on UOM</p>
<h2>Post Processing -2</h2>
<p>If there is only 1 match, concatenate it with the closest product based on the distance (considering both image and text, whichever is smaller).</p>
<p>I also planned to apply a few more ideas like BERT, reducing the threshold and increasing transitive relations, etc., but didn't get any time due to some other priorities.</p>
<h2>Acknowledgement</h2>
<p><a href="https://www.kaggle.com/c/shopee-product-matching/discussion/229495" target="_blank">https://www.kaggle.com/c/shopee-product-matching/discussion/229495</a><br>
<a href="https://www.kaggle.com/ragnar123/shopee-efficientnetb3-arcmarginproduct/notebook" target="_blank">https://www.kaggle.com/ragnar123/shopee-efficientnetb3-arcmarginproduct/notebook</a><br>
<a href="https://www.kaggle.com/c/shopee-product-matching/discussion/224828" target="_blank">https://www.kaggle.com/c/shopee-product-matching/discussion/224828</a><br>
<a href="https://www.kaggle.com/c/shopee-product-matching/discussion/225093" target="_blank">https://www.kaggle.com/c/shopee-product-matching/discussion/225093</a></p>
<p>Also a BIG THANKS (I think it's not enough though) to <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> for his contributions, which have influenced me a ton in every comp I have participated in.</p>
<p>Thanks a lot for your wish, your concept of pp helped me a lot in this comp</p>
<p>Unit Of Measure</p>
<p>If you follow the fold assignment section you can see that five folds have been created; consider them as f0, f1, f2, f3, f4. For a regular 5-fold CV the same model is trained with five different combinations of datasets (from f0-4) and then ensembled, but here the time limit was only 2 hrs, so I trained only one among those five. You can consider that in my training I used f1-4 for training and f0 for validation. So wherever I have mentioned one-fold training, it's basically training on 80% of the data (any combination of four among f0-4) and validating on 20% of the data (any one of f0-4 which is not selected for training).<br>
Regarding your other question, FP is basically False Positive.</p>
<p>I hope I was able to explain it.</p>
Shopee - Price Match Guarantee | 48th Place Silver - Simple Baseline | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Shopee - Price Match Guarantee
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all participants! I am really glad to obtain my first silver medal and excited to become a competition expert :)</p>
<p>There is nothing fancy with my solution. After reading the other shared solutions, I think the only thing worth sharing is my model training method (which is what differentiated me from the public kernels).</p>
<h1>Summary:</h1>
<p>EfficientNet-B6 and bert-base-multilingual, both with an ArcFace layer. I followed the BN-512-BN architecture as described in the ArcFace paper. </p>
<p>I then normalized each embedding, concat them, and applied KNN using cosine metric to all 3 embeddings and took the union as matches.</p>
<p>My 'post-processing trick' was to combine similar matches (measured with Jaccard index) using majority voting. This trick, however, only boosted LB by +0.001.</p>
<table>
<thead>
<tr>
<th>Method</th>
<th>Public</th>
<th>Private</th>
</tr>
</thead>
<tbody>
<tr>
<td>bert base multilingual uncased + TTA</td>
<td>0.636</td>
<td>0.629</td>
</tr>
<tr>
<td>b6 effnet (noisy student)</td>
<td>0.696</td>
<td>0.688</td>
</tr>
<tr>
<td>b6 + bert + concat</td>
<td>0.750</td>
<td>0.739</td>
</tr>
<tr>
<td>with 'post processing'</td>
<td>0.751</td>
<td>0.740</td>
</tr>
</tbody>
</table>
<h3>Validation Scheme</h3>
<p>Group KFold with 2 splits then use KNN to get CV score. I then retrained the models using the full train data and made a submission to get my public LB.</p>
<p>I only accepted changes if there is an improvement to both CV and public LB.</p>
<h3>Model Training</h3>
<p>First, I chose the backbone which gave me the best CV/LB:</p>
<ul>
<li>Bert base multilingual uncased (out of XLM-Roberta and bert base)</li>
<li>B6 Noisy Student Version (out of B0-B6 AA/RA/NS variants)</li>
</ul>
<ol>
<li>"Pretraining" -> I first froze the backbone layer and then trained the embeddings/arc face layer for a few epochs before "finetuning" by unfreezing all layers and then training for a few more epochs.</li>
<li>Freezing all the batch norm layers for b6 backbone</li>
<li>A GeM pooling layer worked for b6, while it did not help bert.</li>
<li>Image augmentation adopted from <a href="https://www.kaggle.com/dimitreoliveira/flower-with-tpus-advanced-augmentations?scriptVersionId=0" target="_blank">here</a></li>
<li>label smoothing helped, but impact was minimal</li>
</ol>
<h3>Inference</h3>
<ul>
<li>Snapshot ensemble (3x for bert and 2x for b6)</li>
<li>TTA for text inputs. I generated embeddings for original titles and a clean variant (unidecode, remove English/Indonesian stopwords, etc.) </li>
</ul>
<p>I ensembled each of them separately by taking the average, before normalizing them.</p>
<p>I simply concatenated the text (512) and image embeddings (512) to get a 'text + image embedding' (1024).</p>
<p>I applied KNN on all 3 embeddings separately and then took the union to get the ensembled matches.</p>
<h3>'Post Processing Trick'</h3>
<p>For each of the predicted matches, I tried to find similar predicted matches and merge them together to get my final matches (a sketch follows below).</p>
<ul>
<li>Using the Jaccard index as a similarity score, I collected all matches with Jaccard index >= a threshold, then kept only the posting_ids which appeared >= 2 times.</li>
<li>I used different thresholds depending on the length of the ensembled matches.</li>
</ul> |
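<p>A small sketch of this trick, assuming <code>matches</code> maps each posting_id to its predicted list (the thresholds are illustrative, and the length-dependent thresholds are omitted):</p>
<pre><code># Hedged sketch of the Jaccard-based merging with majority voting; thresholds are illustrative.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def merge_similar_matches(matches, jac_threshold=0.6, min_votes=2):
    merged = {}
    for pid, pred in matches.items():
        votes = {}
        for other_pred in matches.values():
            if jaccard(pred, other_pred) >= jac_threshold:   # a similar predicted match
                for p in other_pred:
                    votes[p] = votes.get(p, 0) + 1
        merged[pid] = [p for p, v in votes.items() if v >= min_votes] or pred
    return merged

matches = {"a": ["a", "b", "c"], "b": ["a", "b"], "c": ["c", "d"], "d": ["c", "d"]}
print(merge_similar_matches(matches))
</code></pre>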
Shopee - Price Match Guarantee | 4th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Shopee - Price Match Guarantee
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>you can split the matrix, and make multiple SMALL_NUMBER x 70k cossim calculations. </p>
<p>Good point <a href="https://www.kaggle.com/mrinath" target="_blank">@mrinath</a>. Actually, the sentence was missing some information. We didn't square the ensembled cossim matrix, but rather calculated the distances first for each model, yielding 3 cossim matrices, which we then normalized to [0, 1], squared, and finally combined with a weighted average.</p>
<p>simple MinMax norm</p>
<h2>Introduction</h2>
<p>Thanks to Kaggle and hosts for this very interesting competition with a tricky setup and lots of room for experimenting and tuning. As always, this has been a great collaborative effort and please also give your upvotes to <a href="https://www.kaggle.com/christofhenkel" target="_blank">@christofhenkel</a> and <a href="https://www.kaggle.com/philippsinger" target="_blank">@philippsinger</a>. In the following, we want to give a rough overview of our solution.</p>
<h2>Setup and CV</h2>
<p>While initially a split by label_group was doing quite well w.r.t. LB/CV agreement, later on in the competition, with better models and especially with ensembling and post-processing, it was getting harder and harder to find a good CV. <br>
Consequently, we were looking at three different KPIs: </p>
<ol>
<li>F1 score evaluated on oof, </li>
<li>F1 score evaluated on oof but also using infold embeddings as noise</li>
<li>F1 score evaluated on oof but also using infold embeddings as noise and weighting False Positives with a factor of 2</li>
</ol>
<p>2 and 3 are known to be leaky, as we are mixing in infold embeddings, which may be easier to differentiate from oof embeddings. Also, they will have better cross linking which adds additional leakage for specific post-processing methods. 1 is not leaky, but with a 5Fold validation, we are only evaluating on less than 7k rows (around 10% of test). Rational parameter tuning is not possible that way and some post-processing methods excel on small sets but fail on larger sets (think about e.g. always picking rank2 prediction). Hence, we generally looked at all those validation methods simultaneously and tuned threshold parameters mostly on method 3.</p>
<h2>Models</h2>
<p>In our final submissions, we used a BERT-based text encoder and image encoders with different backbones (mainly nfnet, effnet). Like many other teams, we did see some improvement in single models when tuning the training and the backbone, but in the blend this effect was almost vanishingly small and we were kind of reaching a plateau there. After the backbone, GeM and Avg pooling were used. The neck is comprised of a linear layer that is used for dimension tuning, followed by batch normalization and PReLU activation. Our embeddings usually had a length around 1024, with larger embeddings yielding a slightly better score on CV/LB. We also added Tfidf (char and word) to the mix of embeddings, and even though CV didn't change much, it seemed to help a few points on LB. As Tfidf generates very large embeddings and we are constrained by RAM and VRAM in the kernel, we used Random Sparse Projection to reduce the dimensions.</p>
<p><img src="https://i.imgur.com/StMQPku.png" alt="single_models"></p>
<p>We trained our image and text models with an ArcFace loss. The ArcFace margin needed quite some tuning depending on the model, and we also saw that the batch size matters during training. Some models seemed to be very sensitive to the learning rate and, as others reported, gradient clipping may have also helped to stabilize the training.</p>
<h2>Ensemble</h2>
<p>In our ensemble, we concatenated and normalized image embeddings, text embeddings (bert based) and tfidf embeddings. On each of those vectors, we calculated pairwise cosine similarity and received three matrices (cossim image, cossim bert, cossim tfidf). We combined those three matrices by first squaring them and then taking a weighted average (see the sketch below the figure). </p>
<p><img src="https://i.imgur.com/JPxrzfy.png" alt="ensemble"></p>
<h2>Post Processing</h2>
<p>While the averaged cossim matrix already gave us a very competitive score in the gold region when using a proper threshold, we still applied several post processing steps to squeeze out a few more points. The biggest contributors were:</p>
<ul>
<li>Thresholding</li>
<li>Rank2 matching (if A has B on rank2 and B has A on rank2, add them to each other)</li>
<li>Rank2 and rank3 difference is large -> add rank2 id</li>
<li>If there is a group of X members, and we have another row with the X preds on rank 1-X, make one large group.</li>
<li>At least one other match (except when the cosine similarity at rank2 is extremely low)</li>
<li>Query Expansion</li>
<li>Rematching unmatched rows</li>
</ul>
<p>Exactly. It was one of the challenges in this competition to set up a proper validation scheme for tuning.<br>
Leaderboard has limited feedback (5 submissions per day).</p>
<p>Thank you for your questions, I'll try to answer them here:</p>
<blockquote>
<p>1 - How do you choose weights?</p>
</blockquote>
<p>First by our validation methods 1-3 and then we usually got good agreement with LB score. Final weights were [0.3 0.2 0.5] for bert, tfidf, img</p>
<blockquote>
<p>2 - In short, what do you mean by this?</p>
</blockquote>
<p>We implemented a custom search function that looks for neighbors of neighbors and adds them to the original anchor node (thresholded by the edge weights).<br>
Example:<br>
A has B and C as neighbors<br>
B has A, C and D<br>
-> Add D to A if the connection from A -> B and B -> D is strong enough.<br>
We also experimented with deeper searches and iterative methods, but they scored worse on CV.</p>
<blockquote>
<p>3 - How many models did you use in your final blending for images and texts?</p>
</blockquote>
<p>6 Bert<br>
2 Tfidf (char and word)<br>
4 Image<br>
We would have used more, but were limited by kernel runtime</p>
<blockquote>
<p>4 - What do you mean by rank2?</p>
</blockquote>
<p>Rank2 refers to the highest ranked prediction after the selfmatch (note: that rank2 prediction may be below the threshold)</p>
<p>We used a bit of spelling correction and html tag removal for bert and tfidf. Additionally, for tfidf, we merged the quantity together with its unit. This improved CV a bit (really just one or two points); for LB we didn't always check the influence, but it wasn't more than 0.001 last time I checked.</p>
Shopee - Price Match Guarantee | 58th Solution - Dynamic Thresholding | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Shopee - Price Match Guarantee
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you Hosts, Kaggle and all participants for such a joyful competition.</p>
<p>As I haven't seen the thresholding idea I used shared elsewhere in this comp, I decided to put my solution into words as well, even though it is 58th place.</p>
<p>From the beginning, the idea of somehow constructing thresholds dynamically was very appealing to me.</p>
<p><strong>Models</strong><br>
I should note that I used 4 image models, one BERT, and a tfidf model for the final sub. I concatenated the embeddings of the image models and of the text models before matching. I didn't use stacking ensembles, so there is only one level of models.</p>
<p><strong>Dynamic threshold selection procedure:</strong><br>
From the beginning I noticed it was a real improvement, even compared with a more "strict" static threshold adapted to the LB.</p>
<ol>
<li>Predict embeddings and compute cosine similarities</li>
<li>Compute a statistic (the mean) of the 6 best similarities in each row - 6 was chosen because the majority of the training label groups have no more than 6 members</li>
<li>Compute the [0.3, 0.6, 0.9] quantiles of these best-6 averages and split the rows into groups according to which quantile bin they fall in</li>
<li>For those groups set the threshold equal to the quantile they belong to - this helps not only to find matches which are really close (similarity around the 0.9 quantile) but also to find matches for groups whose members are only loosely related.</li>
<li>In the end I noticed that the smaller the similarities, the less probable they are to be a match, so I added a "coefficient of restriction". For the group belonging to the 0.9 quantile, th = q09 * 0.9; for the group in the 0.6 quantile, th = q06 * 0.95; and for the group from the 0.3 quantile, th = q03 * 0.99. So the smaller the similarities, the more restrictive the threshold (see the sketch after this list). </li>
</ol>
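<p>A compact sketch of the procedure above (binning is simplified to three bins; the coefficients follow step 5):</p>
<pre><code># Hedged sketch of the dynamic threshold selection; binning is simplified to three bins.
import numpy as np

def dynamic_thresholds(sim, k=6):
    # sim: (N, N) cosine similarity matrix
    best_k = np.sort(sim, axis=1)[:, -k:]              # 6 highest similarities per row
    row_score = best_k.mean(axis=1)                    # mean(best_6)
    q03, q06, q09 = np.quantile(row_score, [0.3, 0.6, 0.9])
    return np.where(
        row_score >= q09, q09 * 0.90,                  # confident rows
        np.where(row_score >= q06, q06 * 0.95,         # medium rows
                 q03 * 0.99),                          # low-similarity rows: most restrictive
    )

# row i matches all j with sim[i, j] >= dynamic_thresholds(sim)[i]
</code></pre>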
<p>Hope, my wordy explanation is clear enough :)<br>
I am very happy about my potential first silver medal just in the next competition after I get expert in competitions badge :)</p>
<p>It was great to learn with you!<br>
Cheers</p>
<p>Thank you.<br>
No, they are not thresholds. As I described in the short write-up, I did as follows:</p>
<ol>
<li>I have some embeddings for the data points from my model</li>
<li>compute cosine similarity matrix </li>
<li>for every row find 6 most similar (highest) values - best_6 (e.g. [0.99, 0.95, …, 0.81])</li>
<li>for all rows compute mean of those - mean(best_6)</li>
<li>finally, the values [0.3, 0.6, 0.9] are quantile levels - I compute those quantiles of my mean(best_6) values</li>
<li>so, in other words I split the data into 3 bins according to computed quantiles and all rows in a bin will have the same threshold</li>
<li>threshold assignment - for example, for the most confident bin the threshold is quantile_09 * 0.9, and analogously for the other bins with their multipliers (as in step 5 of the write-up above)</li>
<li>Those multipliers are coefficients of how restrictive the threshold should be, since we want to include more predictions for rows with confident similarities (like those in the most confident bin)</li>
</ol>
<p>I hope this is more clear explanation :)</p>
<p>Cheers.</p> |
Shopee - Price Match Guarantee | 5th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Shopee - Price Match Guarantee
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks <a href="https://www.kaggle.com/fomdata" target="_blank">@fomdata</a> and congrats on your solo performance! Btw let me know if you want to join the Kaggle Turkey Slack.</p>
<p>Thanks. We used label_group as target for ArcFace models.</p>
<p>Congrats to all medal winners and thanks to my teammates <a href="https://www.kaggle.com/tereka" target="_blank">@tereka</a> <a href="https://www.kaggle.com/allvor" target="_blank">@allvor</a> for this nice teaming experience. I will share our team's solution. We basically had 2 kernels, which are very similar but not the same. They also scored almost the same but due to time limitations, we couldn't combine them. I will first explain the kernel that I mostly worked on ( <a href="https://www.kaggle.com/aerdem4/shopee-v03" target="_blank">https://www.kaggle.com/aerdem4/shopee-v03</a> ), then I will state the difference with <a href="https://www.kaggle.com/tereka" target="_blank">@tereka</a>'s kernel.</p>
<p><strong>CV Scheme</strong><br>
We have split the data into 2 folds. First fold is used for training representation learning models (arcface), second fold is used for evaluating them. We have also trained 2nd stage models on the second fold with GroupKFold.</p>
<p><strong>Representation Learning</strong><br>
From the beginning, I tried to train a model that can generate vector representations for images and texts in the same space. I thought this approach could have a good regularization effect. I managed to do it, but there was no significant improvement in our validation score. Therefore we dropped this idea and focused on having separate models for images and titles.</p>
<p>English Distilbert and Indonesian Distilbert models are trained for obtaining the title vectors. For the image vectors, ViT and Swin Transformers are trained on size 384 with relatively heavy augmentations and also EffNet B4 on size 512. We ensemble these models with vector concatenation.</p>
<p><strong>Weighted Database Augmentation</strong><br>
We matched the vectors and got the closest match for each posting_id. We then updated each vector representation as a weighted mean of itself and its closest match. The weights are based on the cosine distance of the match: if the match is far from a vector, that vector stays essentially unchanged. I believe this approach works really well for regularizing the vectors, utilizing the information that each posting has at least one match.</p>
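<p>A minimal sketch of this augmentation, assuming L2-normalized embeddings and using the cosine similarity of the closest match as its weight (the exact weighting function is an assumption):</p>
<pre><code># Hedged sketch of the weighted database augmentation; the similarity-based weighting is an assumption.
import torch
import torch.nn.functional as F

def weighted_dba(emb):
    # emb: (N, D) L2-normalized embeddings
    sim = emb @ emb.T
    sim.fill_diagonal_(-1.0)                  # exclude self when searching for the closest match
    w, idx = sim.max(dim=1)                   # closest match and its cosine similarity
    w = w.clamp(min=0).unsqueeze(1)           # far matches get weight ~0, so the vector stays unchanged
    return F.normalize(emb + w * emb[idx])    # weighted mean of the vector and its closest match

emb_aug = weighted_dba(F.normalize(torch.randn(6, 8)))
</code></pre>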
<p><strong>Matching</strong><br>
We used cuML's NearestNeighbors for matching the vectors and used a threshold for filtering.</p>
<p><strong>2nd Stage Model</strong><br>
We extracted some features for the matched unique pairs and fed them to an XGB model which runs on GPU. The features are "img_dist", "text_dist", "dist", "dist_rank", "cos_sim", "cos_sim2": basically, vector distances, their ranking within each posting_id, and 2 different tfidf cosine similarities with different parameters.</p>
<p>Since the test set size was larger than our one-fold size, some features were going to have a different distribution on the test set due to the higher possibility of false positive matches. Therefore another XGB model was trained on percentage-rank features and ensembled with the previous one.</p>
<p><strong>FP Features</strong><br>
Since the test set and the train set have no overlap, one can use the training set to determine how easy it is for a posting to have FPs. We used the closest-match distances from the training set as features. This method improved our CV score significantly but improved LB relatively less for me, and gave no improvement for <a href="https://www.kaggle.com/tereka" target="_blank">@tereka</a>'s kernel. I believe this is again due to the size difference between train and test.</p>
<p><strong>Agglomerative Clustering</strong><br>
Once we have match probability predictions from the XGB models, we can use a threshold and match the postings. But we did something a bit more complex. We sorted all pairs by their match probabilities and started matching from the most likely pair. Each match above 0.8 probability merges the two clusters. A posting with no cluster can be attached to a cluster if the probability is above 0.7. Two postings with no cluster can be matched with each other if the probability is above 0.3. This method allows us to have confident clusters and many not-so-confident pairs.</p>
<p>Besides the above, <a href="https://www.kaggle.com/tereka" target="_blank">@tereka</a> had some different image models and an MLP model for title char 2-grams. There is a video explaining our solution in Japanese: <a href="https://youtu.be/vtI8P-ttPrk?t=1915" target="_blank">https://youtu.be/vtI8P-ttPrk?t=1915</a></p>
Shopee - Price Match Guarantee | 62th Place Solution (Stacking Logistic Regression) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Shopee - Price Match Guarantee
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I didn't try lots of algorithms because of the inference-time limit, and there were only 5 submissions remaining when I decided to use stacking.</p>
<p>In my case, I got a ~0.005 improvement on the public LB (0.742 to 0.747) compared to setting 2 separate thresholds on image and text, or to logistic regression without the polynomial transform.</p>
<p>Thank you!</p>
<p>Thanks for replying.<br>
I can share the course I took, but it is in Mandarin 😊<br>
I think just searching with keywords “ensemble stacking …” is enough.</p>
<p>Congrats to all winners and thanks to Kaggle and Shopee for this interesting competition!<br>
And, most importantly, I need to thank all contributors for those "WONDERFUL" public notebooks! I really learned a lot from them. Truly appreciate it!!!</p>
<p><strong>Overview</strong><br>
<img src="https://i.postimg.cc/xdQfs3P6/SHOPEE.png" alt="outline"></p>
<h2>Image models</h2>
<pre><code> eca-nfnet-l0 (arcface)
eca-nfnet-l0 (curricularface)
eca-nfnet-l1 (arcface)
</code></pre>
<h2>Text models</h2>
<pre><code> sbert (arcface)
TF-IDF
</code></pre>
<h2>Ensemble</h2>
<p>Concat all image models' outputs to get final image embedding and calculate cosine similarity (X) of all pairs (total N x N pairs). And do the same thing to get text cosine similarity (Y).</p>
<pre><code> X: image cosine similarity
Y: text cosine similarity
</code></pre>
<p><strong>Downsampling</strong><br>
For each product, I used all products with the same label_group and sampled an equal amount of negative data from the top products sorted by X+Y.</p>
<p><strong>Polynomial transformation (degree=2)</strong><br>
use polynomial transformation (1,X,X^2,Y,Y^2,XY)</p>
<p><strong>Logistic regression</strong> (half of the data to train the image & text models, half to train the stacking regression)<br>
I use logistic regression to get the final decision boundary (see the sketch after the figure).<br>
<img src="https://i.postimg.cc/25Zv8Qyf/fig-boundary.png" alt="boundary"></p>
<p>Finally, I trained the models on the whole data and used the same stacking regression model. </p>
Shopee - Price Match Guarantee | 6th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Shopee - Price Match Guarantee
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi S. Tomizawa! congrats on the gold!<br>
Big improvement! This trick was very nice for us! :D</p>
<p>Thanks for questions!</p>
<p>1 -> I answered in another comment.</p>
<p>2 -> below the example code.<br>
We got the best LB using this voting approach.<br>
At first, we voted with 4 models only.<br>
We gained +0.01 LB by making 3 embeddings (text, img, concat) and using 2 distances to increase the number of votes. We think more votes make a better score!</p>
<pre><code>import numpy as np

def voting(x, vote):
    # x: list of prediction arrays (one per model / embedding / distance)
    # vote: minimum number of lists a candidate must appear in to be kept
    x = np.concatenate(x)
    x, counts = np.unique(x, return_counts=True)
    ret_idx = counts >= vote
    return x[ret_idx]

pred_X = ["a", "b"]
pred_Y = ["a", "b", "c"]
pred_Z = ["a", "b", "c", "d"]
voting(x=[pred_X, pred_Y, pred_Z], vote=1)  # ["a", "b", "c", "d"]
voting(x=[pred_X, pred_Y, pred_Z], vote=2)  # ["a", "b", "c"]
voting(x=[pred_X, pred_Y, pred_Z], vote=3)  # ["a", "b"]
</code></pre>
<p>3 -> I spent about 1 month and ran roughly 300-400 experiments. My teammate <a href="https://www.kaggle.com/kiccho11" target="_blank">@kiccho11</a>'s experiment id reached 490. (Does that answer your question?)</p>
<p>We trained the CNN and BERT simultaneously, with different learning rates.</p>
<p>Sorry for the confusing explanation.<br>
If a posting_id's prediction length is 1, we add a second one: the posting_id most similar to it.</p>
<p>In this competition, the label_group size is 2 or more, and<br>
if this post-processing succeeds with a probability of 33% or more, the net effect on the F1 score is positive.</p>
<p>This approach improves LB by about +0.01.</p>
<p>the code is below:</p>
<pre><code># Assumed context (not shown in the original snippet): `embeddings` is an (N, D) numpy array,
# `posting_id` an array of ids, `df` the dataframe, `preds` a list collecting per-row matches,
# CHUNK the chunk size, CTS the number of chunks, `threshold` the cosine cut-off and min_n = 2.
import gc

import numpy as np
import torch
import tqdm
from sklearn.preprocessing import normalize  # assumed source of `normalize`

embeddings = normalize(embeddings)
print("norm avg:", np.linalg.norm(embeddings, axis=1).mean())
embeddings = torch.tensor(embeddings).cuda()
for j in tqdm.tqdm(range(CTS)):
    a = j * CHUNK
    b = (j + 1) * CHUNK
    b = min(b, len(df))
    # cosine similarities of this chunk against all rows (embeddings are L2-normalized)
    cts = torch.matmul(embeddings, embeddings[a:b].T).T
    for k in range(b - a):
        IDX = torch.where(cts[k, ] > threshold)[0]
        ## [start] If there is one prediction, we made it two.
        if len(IDX) < min_n:
            IDX = torch.argsort(cts[k, ])[-min_n:]
        ## [end] If there is one prediction, we made it two.
        IDX = IDX.detach().cpu().numpy()
        preds.append(posting_id[IDX])
    del cts
    gc.collect()
    torch.cuda.empty_cache()
</code></pre>
<p>Thank you! I will share my training code later!</p>
<p>Our concatenation idea appears in several papers, for example:<br>
<a href="https://arxiv.org/pdf/1809.01610.pdf" target="_blank">https://arxiv.org/pdf/1809.01610.pdf</a><br>
etc..</p>
<p>One of the key points to making the bimodal approach succeed is using different learning rates for the text, image, and fully-connected layers.<br>
If the learning rate is the same for all of them, the model does not learn well.</p>
<p>We also surveyed and experimented with the transformer approach of concatenating text and image feature maps, but it did not work. <br>
<a href="https://arxiv.org/pdf/2102.10772v1.pdf" target="_blank">https://arxiv.org/pdf/2102.10772v1.pdf</a><br>
<a href="https://arxiv.org/pdf/2104.12763.pdf" target="_blank">https://arxiv.org/pdf/2104.12763.pdf</a></p>
<p>I'm sorry, I didn't measure the single-model LB score for all of the final models.<br>
In experiments from a week earlier, the LB score for a single model was about 0.75x.</p>
<p>First of all, we'd like to appreciate the hosts for giving us such a great opportunity to learn. <br>
We'd also like to appreciate great participants who shared their thoughts and knowledge, <br>
especially <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> and <a href="https://www.kaggle.com/ragnar123" target="_blank">@ragnar123</a> made a great contribution to the competition. Thank you!</p>
<h1>Summary</h1>
<p><img src="https://github.com/kurupical/shopee/blob/master/model_strucure.JPG?raw=true" alt="model_strucure"><br>
<img src="https://github.com/kurupical/shopee/blob/master/ensemble.JPG?raw=true" alt="ensemble"><br>
<br></p>
<h1>Models</h1>
<table>
<thead>
<tr>
<th></th>
<th>#1</th>
<th>#2</th>
<th>#3</th>
<th>#4</th>
</tr>
</thead>
<tbody>
<tr>
<td>cv</td>
<td>0.877</td>
<td>0.875</td>
<td>0.877</td>
<td>0.873</td>
</tr>
<tr>
<td>img_size</td>
<td>224</td>
<td>224</td>
<td>560</td>
<td>560</td>
</tr>
<tr>
<td>img_backbone</td>
<td>swin_large_patch4_window7_224</td>
<td>swin_large_patch4_window7_224</td>
<td>efficientnet_b3</td>
<td>efficientnet_b3</td>
</tr>
<tr>
<td>text_backbone</td>
<td>xlm-roberta-base</td>
<td>bert-base-multilingual-uncased</td>
<td>bert-base-multilingual-uncased</td>
<td>bert-base-uncased</td>
</tr>
<tr>
<td>cnn_lr</td>
<td>4e-5</td>
<td>4e-5</td>
<td>2e-4</td>
<td>2e-4</td>
</tr>
<tr>
<td>bert_lr</td>
<td>1e-5</td>
<td>1e-5</td>
<td>5e-5</td>
<td>5e-5</td>
</tr>
<tr>
<td>fc_lr</td>
<td>5e-4</td>
<td>5e-4</td>
<td>2e-4</td>
<td>2e-4</td>
</tr>
<tr>
<td>batch_size</td>
<td>16</td>
<td>16</td>
<td>16</td>
<td>16</td>
</tr>
<tr>
<td>scheduler</td>
<td>linear_schedule_with_warmup</td>
<td>ReduceLROnPlateau</td>
<td>CosineAnnealingWarmRestarts</td>
<td>CosineAnnealingWarmRestarts</td>
</tr>
<tr>
<td>loss</td>
<td>ArcFace(m=0.5, s=32)</td>
<td>ArcFace(m=0.5, s=32)</td>
<td>ArcFace(m=0.5, s=50)</td>
<td>ArcFace(m=0.5, s=50)</td>
</tr>
<tr>
<td>optimizer</td>
<td>AdamW(weight_decay=0.1)</td>
<td>Adam</td>
<td>Adam</td>
<td>Adam</td>
</tr>
<tr>
<td><br></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
<h1>Similarity</h1>
<p>Use euclidean distance and cosine similarity to get nearest neighbors.<br>
We use thresholds for the submission, adjusted as described in the next section.<br>
<br></p>
<h1>Threshold adjust</h1>
<p>We use the CV-best threshold + 0.2 for the cosine similarity and the CV-best threshold x 0.7 for the euclidean distance.<br>
<br></p>
<h1>Ensemble Method</h1>
<p>4 models * 3 outputs (concat, text, img) * 2 distances = 24 votes<br>
<br></p>
<h1>PostProcess</h1>
<h2>Force prediction one to two</h2>
<p>If there is only one prediction, we make it two by adding the nearest neighbor.<br>
By doing this, we increased the CV/LB score by +0.01.<br>
<br></p>
<h2>Remove unmatch units</h2>
<ul>
<li>thanks to discussion -> <a href="https://www.kaggle.com/c/shopee-product-matching/discussion/229495" target="_blank">https://www.kaggle.com/c/shopee-product-matching/discussion/229495</a></li>
<li>We removed the data that have unmatched units. For Example:</li>
</ul>
<pre><code>prediction data title: "Anmum Lacta 400gr"
-----------------------
[no_postprocess]:
Anmum Lacta 400gr
Anmum Lacta Menyusui 400gr
Anmum Materna 200gr Cokelat
Anmum Emesa 200Gr
Anmum Emesa Chocolate 200 gr
Anmum Lacta 200gr Cokelat
-----------------------
[postprocess]:
Anmum Lacta 400gr
Anmum Lacta Menyusui 400gr
-----------------------
[target]:
Anmum Lacta 400gr
Anmum Lacta Menyusui 400gr
-----------------------
<< f1 >>: 0.5 -> 1.0
</code></pre>
<p><br></p>
<h1>validation</h1>
<ul>
<li>groupKfold(5), group by label_group<br>
<br></li>
</ul>
<h1>worked for us</h1>
<ul>
<li>torch.cat(img_embeddings, text_embeddings) -> nn.Linear -> arcface (see the sketch after this list)</li>
<li>output 3 embeddings (concat, image, text) [LB + 0.01]</li>
<li>voting ensemble</li>
<li>different learning rate for cnn/bert/fc<br>
<br></li>
</ul>
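<p>A rough sketch of the concat -> nn.Linear -> ArcFace head listed above; the ArcFace layer follows the common public-kernel formulation, and all dimensions are illustrative:</p>
<pre><code># Hedged sketch of torch.cat(img_emb, text_emb) -> nn.Linear -> ArcFace; dimensions are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcMarginProduct(nn.Module):
    # standard ArcFace margin head, as used in many public kernels
    def __init__(self, in_features, n_classes, s=32.0, m=0.5):
        super().__init__()
        self.s, self.m = s, m
        self.weight = nn.Parameter(torch.empty(n_classes, in_features))
        nn.init.xavier_uniform_(self.weight)

    def forward(self, features, labels):
        cosine = F.linear(F.normalize(features), F.normalize(self.weight))
        theta = torch.acos(cosine.clamp(-1 + 1e-7, 1 - 1e-7))
        target = F.one_hot(labels, cosine.size(1)).bool()
        logits = torch.where(target, torch.cos(theta + self.m), cosine)
        return logits * self.s

class ConcatHead(nn.Module):
    def __init__(self, img_dim=1536, text_dim=768, emb_dim=512, n_classes=11014):
        super().__init__()
        self.fc = nn.Linear(img_dim + text_dim, emb_dim)
        self.arc = ArcMarginProduct(emb_dim, n_classes)

    def forward(self, img_emb, text_emb, labels=None):
        emb = self.fc(torch.cat([img_emb, text_emb], dim=1))   # the "concat" embedding
        if labels is None:
            return emb                                         # inference: use the embedding for kNN
        return self.arc(emb, labels)                           # training: ArcFace logits for CE loss
</code></pre>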
<h1>not worked for us</h1>
<ul>
<li>multimodal approach(image + text)<ul>
<li>transformer</li>
<li>convolution</li></ul></li>
<li>ensemble<ul>
<li>averaging cosine similarity</li>
<li>concatenate embedding -> calculate cosine similarity</li></ul></li>
<li>pretrain cnn and bert respectively</li>
<li>bert<ul>
<li>CLS layer</li></ul></li>
<li>cnn<ul>
<li>efficientnet_b4, b5, b6, or larger models</li>
<li>gem(pooling)</li></ul></li>
<li>loss<ul>
<li>AdaCos</li>
<li>SphereFace</li>
<li>TripletLoss</li>
<li>AdaptiveFace</li></ul></li>
<li>activation layer(Swish, GeLU, …)</li>
<li>large batch size(>=32)</li>
<li>progressive learning</li>
<li>tf-idf for ensemble</li>
</ul> |
Shopee - Price Match Guarantee | 70th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Shopee - Price Match Guarantee
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to the top solutions and the winners! 😊</p>
<p>And thanks to all who contributed during this competition: a global team effort to solve a problem.<br>
What a fun journey, and now my 6th silver medal; as in every competition, the knowledge one gets is priceless.</p>
<p>The solution was pretty simple: it only used image and TfidfVectorizer predictions, no BERT etc. Here is a short version of it:</p>
<p>Training:</p>
<p>Trained 6 eca_nfnet_l0 (timm) models on different folds + ArcFace + GeM for ~8 epochs.<br>
Used AdamP, and for augmentation only random crop and cutout; no flip, rotation etc. No TTA at inference, only center crop.</p>
<p>As train and validation sets I used a random shuffled split on the group labels, and saved the best model based on the competition metric score and a threshold calculation after every epoch.</p>
<p>I also implemented my own SWA script that tested an ensemble of the ~3 previously saved epoch weights and ran a new metric validation; in case of a better score it used the ensemble model in the next epoch etc., else it used the previously best known single model.</p>
<p>Inference:</p>
<p>Used float16 in every part/calculation of the code, for memory management.</p>
<p>I concatenate the models' predictions as follows:</p>
<pre><code> for ii in range(6):
feat = vars()['model'+str(ii)](img,label)
vars()['image_embeddings'+str(ii)] = feat.detach().cpu().numpy().astype(np.float16)
vars()['embeds'+str(ii)].append(vars()['image_embeddings'+str(ii)])
for ii in range(6):
del vars()['model'+str(ii)]
vars()['image_embeddings'+str(ii)] = np.concatenate(vars()['embeds'+str(ii)])
print('Our image embeddings shape is', print('image_embeddings'+str(ii)), vars()['image_embeddings'+str(ii)].shape)
del vars()['embeds'+str(ii)]
return np.concatenate((vars()['image_embeddings'+str(0)], vars()['image_embeddings'+str(1)],
vars()['image_embeddings'+str(2)], vars()['image_embeddings'+str(3)],
vars()['image_embeddings'+str(4)], vars()['image_embeddings'+str(5)]), axis=1)
</code></pre>
<p>Before using tfidf I cleaned the titles of stop words and words of length < 2, but kept numbers as they can help filter some products by ml, kg, mg etc. I also translated Indonesian to English with a 2161-entry dictionary, updated from different public sources/topics (thanks for that).</p>
<p>Cosine similarity distance was used on both image and text separately. I also used my own medicine and the post-processing technique posted in these public topics:</p>
<p><a href="https://www.kaggle.com/c/shopee-product-matching/discussion/234927" target="_blank">https://www.kaggle.com/c/shopee-product-matching/discussion/234927</a><br>
<a href="https://www.kaggle.com/c/shopee-product-matching/discussion/233626" target="_blank">https://www.kaggle.com/c/shopee-product-matching/discussion/233626</a></p>
<p>I also started creating a deeper version that calculates thresholds for different numbers of predicted labels, with the idea that the more labels are predicted, the more restrictive the threshold should be. The first version worked well on validation and with some models, but it was too untested to use in a final submission, and I didn't have time for more testing.</p>
<pre><code> th100 = dataset_th(0.36, 34250, 70000)
for k in tqdm(range(embeddings.shape[0]),disable=True):
ids = np.array([])
for iiii in np.arange(th100,th100+0.26, 0.02):
if ids.shape[0] <= 1:
idx = np.where(distances[k,] < iiii)[0]
ids = indices[k,idx]
reduc = th100*threshold
for iiii in np.arange(reduc,reduc+0.01, 0.01):
if ids.shape[0] >= 5 and ids.shape[0] < 10:
idx = np.where(distances[k,] < iiii)[0]
ids = indices[k,idx]
reduc = reduc*threshold
for iiii in np.arange(reduc,reduc+0.01, 0.01):
if ids.shape[0] >= 10 and ids.shape[0] < 15:
idx = np.where(distances[k,] < iiii)[0]
ids = indices[k,idx]
reduc = reduc*threshold
for iiii in np.arange(reduc,reduc+0.01, 0.01):
if ids.shape[0] >= 15 and ids.shape[0] < 20:
idx = np.where(distances[k,] < iiii)[0]
ids = indices[k,idx]
reduc = reduc*threshold
for iiii in np.arange(reduc,reduc+0.01, 0.01):
if ids.shape[0] >= 20 and ids.shape[0] < 25:
idx = np.where(distances[k,] < iiii)[0]
ids = indices[k,idx]
reduc = reduc*threshold
etc ..........................
</code></pre>
<p>Finally, I joined the unique labels from text and image predictions.</p>
<p>Here one could see that rows with the same labels weren't identical 😉 so a filter sorting out false positives/negatives had to be used. <br>
The time for creating one was too tight for me this time, so I used a quicker, already implemented solution: I applied tfidf with smooth_idf=False to the whole set of final predicted labels, not to the embeddings as before but to the labels as words, to find the most similar rows to use for them all. <br>
Not a bulletproof filter, but it worked and was easy to implement/reuse.</p>
<p>Other ideas that were planned or didn't work:</p>
<p>For a while I started creating a solution based on the landmark competition, using global and local scores, keypoints etc., and also a DELG model. I did manage to get my own model/code from the landmark comp. to work here, but didn't have time to complete it all.</p>
<p>I also tried to train a solution based entirely on images, with the selected titles printed at the bottom of the images in a filled rectangle.</p>
<hr>
<p>That's all, thanks for reading and Happy Kaggling!</p>
<p>Thanks! :)</p> |
Shopee - Price Match Guarantee | 7th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Shopee - Price Match Guarantee
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Alpha query expansion is an improved version of average query expansion.<br>
Please refer to the 2 papers below (a short sketch follows the list).</p>
<ul>
<li>Average QE (see 4.1. Methods): <a href="https://www.robots.ox.ac.uk/~vgg/publications/papers/chum07b.pdf" target="_blank">Total Recall: Automatic Query Expansion with a Generative Feature Model for Object Retrieval</a></li>
<li>Alpha QE (see 3.5 Image representation and search): <a href="https://arxiv.org/abs/1711.02512" target="_blank">Fine-tuning CNN Image Retrieval
with No Human Annotation</a></li>
</ul>
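<p>A minimal sketch of alpha-QE on L2-normalized embeddings (k and alpha are illustrative):</p>
<pre><code># Hedged sketch of alpha query expansion (alpha-QE); k and alpha are illustrative.
import numpy as np

def alpha_query_expansion(emb, k=3, alpha=3.0):
    # emb: (N, D) L2-normalized embeddings
    sim = emb @ emb.T
    idx = np.argsort(-sim, axis=1)[:, :k]                 # top-k neighbors (the query itself included)
    weights = np.take_along_axis(sim, idx, axis=1)
    weights = np.clip(weights, 0, None) ** alpha          # weight each neighbor by similarity^alpha
    new_emb = (weights[..., None] * emb[idx]).sum(axis=1)
    return new_emb / np.linalg.norm(new_emb, axis=1, keepdims=True)
</code></pre>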
<p>Please see <a href="https://www.kaggle.com/a2015003713/shopee-nfnetf0-dsitilbert-soft-voting?scriptVersionId=62280624" target="_blank">submission notebook</a>. Sorry for my messy code and this is slightly different from my final submitted one.</p>
<p>I tried NFNet (F0,F1,F2,L1), Effnets, Swin-Transformer. Swin's CV was best. I tried ensembling F0, L1, Swin, but it did not improve LB.</p>
<p>Thank you ! I used Google Colab Pro. </p>
<p>First of all, I thank organizers for hosting this competition. Congrats for all medal winners. Let me share my solution.</p>
<h3>Overview</h3>
<p><img src="https://i.imgur.com/M1YSUFi.png" alt=""><br>
<img src="https://i.imgur.com/Pki4Dvo.png" alt=""></p>
<h3>Models</h3>
<h4>Train/validation split</h4>
<p>GroupKfold(5 folds) with stratification by distribution of number of samples in label_group</p>
<h4>Image model</h4>
<p>Backbone : NFNet-F0 with GeM pooling (p=3(fixed) for training, p=4 for inference)<br>
Input size : 420x420<br>
Epochs : 30<br>
Optimizer : madgrad(momentum=0.9, weight_decay=1e-5) [1]<br>
Learning rate scheduler : cosine annealing (1e-5 --> 1e-8)<br>
Distance : cosine similarity<br>
Loss : multi-similarity loss (alpha=2, beta=50, base=0.5) with XBM(memory_size=1024) [2]<br>
Batch size : 32, mini-batch is generated by randomly sampling pairs of images in same label_group<br>
Data augmentation : Rotate, ColorJitter, RandomBrightnessContrast, RandomGamma, HorizontalFlip, CoarseDropout</p>
<h4>Text model</h4>
<p>Backbone : DistilBERT (<a href="https://huggingface.co/cahya/distilbert-base-indonesian" target="_blank">distilbert-base-indonesian</a>)<br>
Epochs : 30<br>
Optimizer : madgrad(momentum=0.9, weight_decay=1e-5)<br>
Learning rate scheduler : cosine annealing (1e-4 --> 1e-8)<br>
Distance : cosine similarity<br>
Loss : multi-similarity loss (alpha=2, beta=50, base=0.5)<br>
Miner : multi-similarity miner (epsilon=0.1)<br>
Batch size : 256, mini-batch is generated by randomly sampling pairs of titles in same label_group<br>
Data augmentation : OneOf([random_delete, random_swap, random_swap+random_delete, random_swap*2]) at p=0.1</p>
<ul>
<li>random_delete : randomly delete a word if len(title) > 2</li>
<li>random_swap : randomly swap two words if len(title) > 2</li>
</ul>
<h4>CV scores</h4>
<p>5 fold averaged F1s are as follows.<br>
Parameters of alpha-QE were separately optimized in each fold.</p>
<table>
<thead>
<tr>
<th>alpha-QE</th>
<th>image</th>
<th>text</th>
<th>concat image/text</th>
</tr>
</thead>
<tbody>
<tr>
<td>No</td>
<td>0.83031</td>
<td>0.80803</td>
<td>0.88777</td>
</tr>
<tr>
<td>Yes</td>
<td>-</td>
<td>-</td>
<td>0.91179</td>
</tr>
</tbody>
</table>
<h3>Threshold for private LB</h3>
<p>I plotted the number-of-images vs. best-threshold curve and observed an exponential decay of the best threshold as the number of images gets bigger, so I decreased the threshold for the final submission. But this worsened the private LB score (0.762, 0.757). If I hadn't changed the threshold tuned on the public LB, my model could have scored 0.765.</p>
<h3>What worked</h3>
<h4>Common in image/text model</h4>
<ul>
<li>Multi-similarity loss (better than CircleLoss(2nd best), TripletLoss, ArcFace, CosFace, SphereFace, ProxyAnchor and SupConLoss)</li>
<li>Smart mini-batch generation as explained above</li>
<li>madgrad optimizer (better than SGD-momentum, Adam and RAdam)</li>
<li>alpha query expansion<br>
The distributions of true/false positives before/after alpha-QE on validation set are as below.<br>
<img src="https://i.imgur.com/hi1Pbpn.png" alt=""></li>
<li>relax threshold if number of matches < 2</li>
<li>soft-voting (concat all embeddings over 5 folds, better than hard-voting) <br>
If each embedding is L2-normalized, the cosine similarity of the concatenated embeddings is equal to the simple mean of the cosine similarities of each embedding (a numerical check follows this list). <br>
<img src="https://i.imgur.com/ssIuq0g.png" alt=""></li>
</ul>
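<p>The equivalence noted in the soft-voting bullet can be checked numerically, for example:</p>
<pre><code># Numerical check: cosine similarity of concatenated L2-normalized embeddings
# equals the simple mean of the per-fold cosine similarities.
import numpy as np

def l2norm(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

folds_a = [l2norm(np.random.randn(4, 16)) for _ in range(5)]   # embeddings of items A from 5 folds
folds_b = [l2norm(np.random.randn(4, 16)) for _ in range(5)]   # embeddings of items B from 5 folds

concat_a = l2norm(np.concatenate(folds_a, axis=1))
concat_b = l2norm(np.concatenate(folds_b, axis=1))

lhs = np.sum(concat_a * concat_b, axis=1)                                        # cosine of concatenated embeddings
rhs = np.mean([np.sum(a * b, axis=1) for a, b in zip(folds_a, folds_b)], axis=0) # mean of per-fold cosines
print(np.allclose(lhs, rhs))   # True
</code></pre>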
<h4>Image model</h4>
<ul>
<li>NFNet-F0 (better than EfficientNets and larger NFNets(F1, F2))</li>
<li>GeM pooling (p=3 for training, p=4 for inference; see the sketch after this list)</li>
<li>Cross-Batch memory (XBM)<br>
This made convergence up to 5 times faster and pushed CV from 0.82x to 0.83x.</li>
</ul>
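<p>A small sketch of GeM pooling with a fixed, switchable p (p=3 for training, p=4 for inference as above):</p>
<pre><code># Hedged sketch of GeM pooling with a fixed, switchable p.
import torch
import torch.nn.functional as F

def gem(x, p=3.0, eps=1e-6):
    # x: (B, C, H, W) feature map from the CNN backbone
    return F.avg_pool2d(x.clamp(min=eps).pow(p), kernel_size=x.shape[-2:]).pow(1.0 / p)

feat = torch.randn(2, 1280, 12, 12).abs()
train_emb = gem(feat, p=3.0).flatten(1)   # p=3 during training
test_emb  = gem(feat, p=4.0).flatten(1)   # p=4 at inference
</code></pre>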
<h4>Text model</h4>
<ul>
<li>DistilBERT (better than BERT)</li>
<li>Text data augmentation</li>
</ul>
<h3>What didn't work</h3>
<h4>Common in image/text model</h4>
<ul>
<li>Stochastic weight averaging (improved only CV)</li>
<li>Tuning weight_decay in madgrad optimizer (improved only CV)</li>
<li>Cosine annealing with warm-up</li>
<li>Applying ROBUST scaler to embeddings as in [3]</li>
<li><a href="https://github.com/zhunzhong07/person-re-ranking" target="_blank">k reciprocal re-ranking</a> / <a href="https://github.com/Xuanmeng-Zhang/gnn-re-ranking" target="_blank">GNN re-ranking</a></li>
<li>Other cosine similarity-like metrics (correlation (de-meaned cosine similarity), adjusted cosine similarity)</li>
<li>Merging label_groups of samples with same phash but different label_group</li>
<li>Jointed embedding (concat image/text embedding --> FC layer)</li>
<li>Adaptive thresholding that optimzes expected row-wise F1 (improved only CV)</li>
</ul>
<h4>Image model</h4>
<ul>
<li>RandAugment</li>
<li>Combined global descriptor (improved only CV) [4]<br>
combination of GeM and GMP was best.</li>
<li>Applying Mixup/CutMix/ResizeMix to pair of images in same label_group to augment data.</li>
<li>Swin transformer(base), ECA-NFNet-L1 (improved only CV)</li>
<li>External data (Stanford online products dataset)</li>
</ul>
<h4>Text model</h4>
<ul>
<li>Title length-dependent augmentation probability [5]</li>
<li>XBM (I guess BS=256 is large enough and effect of XBM is limited)</li>
<li>Concatenating last layer's CLS and 2nd-to-last layer's CLS (improved only CV)<br>
I tried concat, average, average with trainable blending weight, concat was best.</li>
<li>Differential entropy loss (improved only CV) [6]</li>
</ul>
<h3>Reference</h3>
<p>[1] <a href="https://github.com/facebookresearch/madgrad" target="_blank">madgrad optimizer</a><br>
[2] <a href="https://arxiv.org/abs/1912.06798" target="_blank">Cross-Batch Memory for Embedding Learning</a><br>
[3] <a href="https://arxiv.org/abs/2101.03771" target="_blank">Investigating the Vision Transformer Model for Image Retrieval Tasks</a><br>
[4] <a href="https://arxiv.org/abs/1903.10663" target="_blank">Combination of Multiple Global Descriptors for Image Retrieval</a><br>
[5] <a href="https://www.aclweb.org/anthology/D19-1670/" target="_blank">EDA Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks</a><br>
[6] <a href="https://arxiv.org/abs/1806.03198" target="_blank">Spreading vectors for similarity search</a></p> |
Shopee - Price Match Guarantee | 8th Place Solution Overview | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Shopee - Price Match Guarantee
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks!</p>
<p>I would recommend the following papers.</p>
<p><a href="https://arxiv.org/abs/1610.07940" target="_blank">End-to-end Learning of Deep Visual Representations for Image Retrieval</a><br>
<a href="https://arxiv.org/abs/1711.02512" target="_blank">Fine-tuning CNN Image Retrieval with No Human Annotation</a></p>
<p>The difference between QE and QE+DBA in our solution is whether the index is created from the original embeddings or from the query-expanded embeddings.</p>
<p>In the case of QE+DBA (our solution), the query-expanded embeddings are used as the query, and the index is also created from the query-expanded embeddings.</p>
<p>In the case of QE only, the query-expanded embeddings are used as the query, and the index is created from the original embeddings.</p>
<p>(Maybe it is different from the original definition of the QE and DBA.)</p>
<p>Thanks!</p>
<p>We thank all organizers for this very exciting competition.<br>
Congratulations to all who finished the competition and to the winners.</p>
<h2>Summary</h2>
<p><img src="https://user-images.githubusercontent.com/6754944/117793616-59b07e00-b287-11eb-8f2f-b9994952a494.png" alt="shopee_overview"></p>
<ul>
<li>Validation<ul>
<li>GroupKFold(k=5), group by label_group</li></ul></li>
<li>Embeddings<ul>
<li>Concatenated Image, TF-IDF and Title embeddings (21208dim)</li>
<li>Image: CNN(ResNet101/152) + GeM + CosFace (3328dim)</li>
<li>TF-IDF: TfidfVectorizer (10200dim)</li>
<li>Title: BERT(distilbert-base-indonesian) + ArcFace (7680dim)</li></ul></li>
<li>Brute-force kNN by Faiss on embeddings converted to fp16<ul>
<li>Use αQE + DBA [1][2] (a minimal αQE sketch follows this list)</li>
<li>0.59 Threshold for cosine similarity (local CV best threshold +0.15)</li></ul></li>
<li>Post processing<ul>
<li>Only if no pair is found for a product, the threshold is ignored and the product is paired with its nearest neighbor.</li>
</ul>
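<p>For readers unfamiliar with αQE, a minimal sketch in numpy (the number of neighbours and α below are illustrative, not necessarily the values used in this solution). As explained in the replies above, "DBA" in our usage simply means the index itself is also built from these expanded embeddings.</p>
<pre><code>import numpy as np

def alpha_query_expansion(emb, n_neighbors=3, alpha=3.0):
    """emb: (N, D) row-wise L2-normalized embeddings."""
    sims = emb @ emb.T
    topk = np.argsort(-sims, axis=1)[:, :n_neighbors]       # top-k neighbours (includes the item itself)
    new_emb = np.zeros_like(emb)
    for i in range(emb.shape[0]):
        w = np.clip(sims[i, topk[i]], 0.0, None) ** alpha   # weight = similarity^alpha
        new_emb[i] = (w[:, None] * emb[topk[i]]).sum(axis=0)
    return new_emb / np.linalg.norm(new_emb, axis=1, keepdims=True)
</code></pre>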
<h2>Model details</h2>
<p>(Add more details later)</p>
<ul>
<li><p>Image Embedding</p>
<ul>
<li>ResNet152x2, ResNet101x3 with GeM Pooling (5fold)</li>
<li>Loss: CosFace</li>
<li>Optimizer SGD lr=1e-3 WarmupCosineAnnealing LR Scheduling</li>
<li>Input size 512x512</li>
<li>Embedding dimension 512(ResNet152), 768(ResNet101)</li></ul></li>
<li><p>TF-IDF</p>
<ul>
<li>Use scikit-learn's TfidfVectorizer</li>
<li>Preprocessing<ul>
<li>Unicode handling</li>
<li>NFKC normalize</li></ul></li>
<li>Embedding dimension 10200</li></ul></li>
<li><p>Text Embedding</p>
<ul>
<li>distilbert_base_indonesian (5fold)</li>
<li>Concatenate mean of 4,5,6 Layer, CLS and mean of token embeddings (total 3840dim)</li>
<li>Add FC and Tanh activation to reduce embedding dimension from 3840 to 1536</li>
<li>Loss: ArcFace</li>
<li>Optimizer: AdamW lr=1e-4 WarmupLinear LR Scheduling</li>
<li>Embedding dimension 1536</li></ul></li>
</ul>
<h2>Change in Private/Public LB</h2>
<table>
<thead>
<tr>
<th>Embeddings</th>
<th>αQE+DBA</th>
<th>Private LB</th>
<th>Public LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>Image</td>
<td></td>
<td>0.706</td>
<td>0.714</td>
</tr>
<tr>
<td>Image+TF-IDF</td>
<td></td>
<td>0.738</td>
<td>0.749</td>
</tr>
<tr>
<td>Image+TF-IDF+Title</td>
<td></td>
<td>0.748</td>
<td>0.759</td>
</tr>
<tr>
<td>Image</td>
<td>✔</td>
<td>0.717</td>
<td>0.725</td>
</tr>
<tr>
<td>Image+TF-IDF</td>
<td>✔</td>
<td>0.751</td>
<td>0.766</td>
</tr>
<tr>
<td>Image+TF-IDF+Title</td>
<td>✔</td>
<td>0.761</td>
<td>0.775</td>
</tr>
</tbody>
</table>
<h2>Reference</h2>
<p>[1]: <a href="https://arxiv.org/abs/1610.07940" target="_blank">End-to-end Learning of Deep Visual Representations for Image Retrieval</a><br>
[2]: <a href="https://arxiv.org/abs/1711.02512" target="_blank">Fine-tuning CNN Image Retrieval with No Human Annotation</a></p> |
Shopee - Price Match Guarantee | 99th solution (threshold calculation) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Shopee - Price Match Guarantee
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>Image embeddings:</strong></p>
<ul>
<li>Arcface model architecture </li>
<li>Ensemble learning: efn b1 and b5 are used and outputs are concatenated to feed into the softmax layer. <a href="https://www.kaggle.com/lhagiimn/shopee-pretrained-models-v1/output?select=EfficientNetB1_EfficientNetB5_512_0.h5" target="_blank">training notebook</a></li>
<li>Single fold efn b6: <a href="https://www.kaggle.com/lhagiimn/shopee-pretrained-models" target="_blank">training notebook</a></li>
</ul>
<p>Thanks <a href="https://www.kaggle.com/ragnar123" target="_blank">@ragnar123</a> and his very useful notebooks. </p>
<p><strong>Text embedding</strong></p>
<ul>
<li>TfidfVectorizer</li>
</ul>
<p>Thanks <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> and his contribution in this and other competitions. I have been learning a lot from you. </p>
<p><strong>Threshold calculation</strong></p>
<ul>
<li>KNN model with cosine distance used to measure similarities. </li>
<li>I tried to calculate an adaptive threshold for each image. My threshold calculation (with threshold = 0.38):</li>
</ul>
<pre><code>tr = np.min(distances[k,]) + threshold * (np.max(distances[k,]) - np.min(distances[k,]))
idx = np.where(distances[k,] < tr)[0]
if len(idx) <= 1:
    tr = np.min(distances[k,]) + 1.55 * threshold * (np.max(distances[k,]) - np.min(distances[k,]))
    idx = np.where(distances[k,] < tr)[0]
</code></pre>
<p>Inference notebook is here: <a href="https://www.kaggle.com/lhagiimn/shoppee-inference-final-concat-models?scriptVersionId=62111720" target="_blank">https://www.kaggle.com/lhagiimn/shoppee-inference-final-concat-models?scriptVersionId=62111720</a></p>
<p><a href="https://www.kaggle.com/usharengaraju" target="_blank">@usharengaraju</a> thank you :)</p> |
Shopee - Price Match Guarantee | One silver solution - XBM + Query Expansion + k-reciprocal | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Shopee - Price Match Guarantee
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hello all the competitors!<br>
I'm congratulating all who reached the top of the private leaderboard!</p>
<p>Although my solution scored as the 73rd place of private LB, the ideas might be interesting for those who will work with similar tasks in future.</p>
<p>TLDR of the final submission:<br>
<strong>73.2% LB</strong> - EfficientNet-B4 (512x512) with CGD head trained with MultiSimilarity Loss + XBM<br>
<strong>73.7% LB</strong> - alpha Query Expansion (image model embeddings weighted by TF-IDF scores)<br>
<strong>73.8% LB</strong> - filtering the farthest items by image model scores<br>
<strong>74.4% LB</strong> - k-reciprocal sets of clear predictions added to original predictions</p>
<p>Continue reading to get more details.</p>
<h2>Cross-validation</h2>
<p>For CV I tried two approaches:</p>
<ol>
<li>Make 5-fold split, use OOF sets only for validation (query and contraposed items are both in validation sets only)</li>
<li>Make 5-fold split, use query items from OOF sets while contraposed items are from the whole dataset</li>
</ol>
<p>While the first approach is more fair than the second one, it gives too optimistic scores that are hard to compare with the LB (e.g. 83.9% CV gives 72.7% LB only). Thus, I decided to use the second approach that closes the gap between CV and LB better (e.g. 76.4% CV gives 72.7% LB). Starting from here every submit was preceded by 5-fold validation (yes, I trained different image/text models for each fold), that's why my private LB jumped up by 8 positions.</p>
<h2>Images</h2>
<p>Since I'm more experienced in Computer Vision than in NLP I focused more on getting maximum from items' photos.<br>
The main approach which others didn't mention (UPD: it was in <a href="https://www.kaggle.com/c/shopee-product-matching/discussion/238295" target="_blank">7th place solution</a>) consists of training pairwise losses with some hard negative mining approach. I believed in this more than training ArcFace-like losses because for ArcFace there are too many classes (11014 label groups) while the dataset is quite small (34250 very diverse images). It's also known that the most popular cases are 2-3 items per label group. So, I trained MultiSimilarity with <a href="https://arxiv.org/abs/1912.06798" target="_blank">Cross-Batch Memory</a> as a hard negative miner.</p>
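<p>A minimal sketch of this setup, assuming the pytorch-metric-learning library (the exact implementation and hyper-parameters are not shown in this write-up, so the values below are illustrative):</p>
<pre><code>import torch
from pytorch_metric_learning import losses, miners

embedding_size = 512
miner = miners.MultiSimilarityMiner(epsilon=0.1)               # mines hard positive/negative pairs
ms_loss = losses.MultiSimilarityLoss(alpha=2, beta=50, base=0.5)
loss_fn = losses.CrossBatchMemory(ms_loss, embedding_size,
                                  memory_size=4096, miner=miner)  # XBM: memory of past embeddings

# inside the training loop:
embeddings = torch.randn(32, embedding_size, requires_grad=True)  # stand-in for model(images)
labels = torch.randint(0, 11014, (32,))                           # label_group ids
loss = loss_fn(embeddings, labels)
loss.backward()
</code></pre>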
<h3>Backbones</h3>
<p>As for backbones I trained the following ones present in the order of decreasing CV score:</p>
<ul>
<li>EfficientNet-B4</li>
<li>ResNet-50</li>
<li>NFNet-L0</li>
<li>HRNet-W32</li>
</ul>
<p>All the models were trained on 224x224 and then fine-tuned on 512x512 images.</p>
<h3>Heads</h3>
<p>Since this is mostly an information retrieval task, I used heads designed specifically for image retrieval. One of such heads is the <a href="https://arxiv.org/abs/1903.10663" target="_blank">Combination of Global Descriptors (CGD)</a>. It combines GAP, <a href="https://arxiv.org/abs/1711.02512" target="_blank">GeM</a>, <a href="https://arxiv.org/abs/1510.07493" target="_blank">SPoC</a> and <a href="https://www.researchgate.net/publication/313465134_MS-RMAC_Multiscale_Regional_Maximum_Activation_of_Convolutions_for_Image_Retrieval" target="_blank">RMAC</a>, which were previously known to be suitable for IR problems. I tried the combinations (GAP, GeM, SPoC) and (GAP, GeM, RMAC) - they performed similarly well on CV, while the option with RMAC was better on my internal mAP@5 metric.</p>
<p>Since there was too little data to train a good image model, I also added the pretrained dino_deits8 model recently proposed as <a href="https://github.com/facebookresearch/dino/" target="_blank">DINO</a>.</p>
<h2>Text</h2>
<p>The only thing that actually worked for me with text was TF-IDF <a href="https://www.kaggle.com/cdeotte/part-2-rapids-tfidfvectorizer-cv-0-700" target="_blank">baseline</a> proposed by Chris Deotte. Thanks him for sharing this nice notebook with the use of cuML.</p>
<h2>Re-ranking</h2>
<p>The single model of ResNet-50 with CGD head and naive thresholding gave 73.2% LB.</p>
<p>To improve this I applied <a href="https://arxiv.org/abs/2007.08019" target="_blank">alpha Query Expansion</a> where image embeddings were weighted by TF-IDF scores. This led to 73.7% LB.</p>
<p>Next, for each query I filtered out predictions which were too far by one of the models. In my case the best filtering was achieved by image model which scored as 73.8% LB.</p>
<p>Further, I reengineered the <a href="https://arxiv.org/abs/1701.08398" target="_blank">k-reciprocal</a> approach. The whole re-ranking didn't work, so I took only the k-reciprocal sets of queries - the item from top-k is treated to be k-reciprocal to the query if the query itself lies in top-k to this item. Based on the fact that we mostly have 2-3 items per label group I used 2/3-reciprocal sets and added them to the predictions as is. This reached 74.4% LB.</p>
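<p>A small sketch of the k-reciprocal-set idea described above (illustrative, not the submission code): item j is added for query i only if j is in i's top-k and i is in j's top-k.</p>
<pre><code>import numpy as np

def k_reciprocal_sets(sims, k=3):
    """sims: (N, N) similarity matrix; the top-k includes the item itself."""
    topk = np.argsort(-sims, axis=1)[:, :k]
    in_topk = np.zeros(sims.shape, dtype=bool)
    np.put_along_axis(in_topk, topk, True, axis=1)
    reciprocal = in_topk & in_topk.T                 # mutual top-k membership
    return [np.flatnonzero(row) for row in reciprocal]
</code></pre>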
<h2>What didn't work</h2>
<ul>
<li>Second-level classification of k-nn retrieved items. This is what the guys from the <a href="https://www.kaggle.com/c/shopee-product-matching/discussion/238022" target="_blank">2nd place</a> mentioned - they trained a LightGBM model for second-stage filtering and didn't share the LB scores without it. My approach was to train TabularNLPAutoML from <a href="https://github.com/sberbank-ai-lab/LightAutoML" target="_blank">LAMA</a> on hard positives and hard negatives obtained from the k-nn predictions. It didn't actually improve the result, so I decided not to spend more time on this. Probably the reason is that I chose only basic features, omitting image embeddings. Also, maybe my first-stage embeddings already gave the best results I could get from them, so no classifier could improve on this.</li>
<li>External data: I also trained image models on <a href="https://cvgl.stanford.edu/projects/lifted_struct/" target="_blank">SOP</a> data but it performed a bit worse than models trained on Shopee data</li>
<li>PCA whitening: common approach in image retrieval that didn't work for me</li>
<li>SimCLR: I was training the 1st stage of <a href="https://github.com/google-research/simclr" target="_blank">SimCLR</a> on Shopee data. It performed slightly worse than supervised approach by my internal mAP@5 measurements, but still it's quite good for the unsupervised setting</li>
<li>Joined model: I combined ResNet50 and CLIP text embeddings extracted from the titles with the use of MLP. It also scored worse on my CV</li>
<li>paraphrase-xlm-r-multilingual-v1, IndoBERT: I don't know how people applied those embeddings but for me they didn't improve the score</li>
<li>translate English to Indonesian: it took time to find a good english-indonesian translation model which finally led me to using the offline dictionary provided by <a href="https://www.kaggle.com/nickthenick23/shopee-fast-indonesian-english-translator-offline" target="_blank">this</a> notebook. Since there are only 2000 words while on the test set there might be additional thousands of words it was risky to apply the translation, but I tried and it didn't improve even the CV score</li>
<li>filtering bundle/not-bundle: thinking that some products are in bundle with other products we can try to keep only those predictions which have / don't have in common words like "pcs", "set", "pack", "perangkat", "pasangan", etc. No improvement from this, probably it produced more FN than removed FP</li>
<li>intersection of k-reciprocal sets produced by different models: no improvement while the idea of k-reciprocal set worked good before</li>
<li>voting: weighted averaging of scores from different models (effnet-resnet-dino-tfidf) didn't improve the score, probably I'm not good at ensembling</li>
</ul>
<h2>Some image modeling tricks which gave minor improvements in mAP@5 but I didn't put them into submissions</h2>
<ul>
<li><a href="https://arxiv.org/abs/1909.13719" target="_blank">RandAugment</a></li>
<li>MixUp</li>
<li><a href="https://arxiv.org/abs/1708.04896" target="_blank">Random Erasing</a></li>
<li><a href="https://arxiv.org/abs/1603.09382" target="_blank">Stochastic Depth</a></li>
</ul> |
Shopee - Price Match Guarantee | Part of 85th place solution: Add 2 classification | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Shopee - Price Match Guarantee
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>notebook:<a href="https://www.kaggle.com/h053473666/2class-infer-batch-20?scriptVersionId=62305047" target="_blank">https://www.kaggle.com/h053473666/2class-infer-batch-20?scriptVersionId=62305047</a></p>
<p>My method is a bit strange: I manually divided the data of this competition into two categories. One is clothes, the other is everything else.</p>
<p>Clothes: clothes, pants, skirts, underwear, bags, shoes, masks, sheets, pillows, towels, dolls, hats, some accessories(3282)<br>
Other: rest(11014 - 3282)</p>
<p>I looked at the pictures from the machine's point of view: I think the clothes images are similar to each other and can be classified better when grouped together. <br>
When people sell clothes, they often include pictures of people wearing them, so there are many people in my clothes class.</p>
<p>I trained a two-class model efnb0. Its val_accuracy 5 fold average is 0.95. Then I trained two nfnet-l0 models (clothes, other). Finally, my ensemble method is to use this notebook.<a href="https://www.kaggle.com/ankursingh12/shopee-ensemble-b3-nfnet-l0-inference" target="_blank">https://www.kaggle.com/ankursingh12/shopee-ensemble-b3-nfnet-l0-inference</a><br>
b3(all) / 3 + nfnetl0(all) / 3 + nfnetl0(other) / 3<br>
b3(all) / 3 + nfnetl0(all) / 3 + nfnetl0(clothes) / 3</p>
<p>Finally, I would like to thank my teammates, the host of the competition and the Kaggler who provided excellent notebooks.</p> |
Shopee - Price Match Guarantee | Public 13th / Private 16th solution 💪 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Shopee - Price Match Guarantee
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you !</p>
<p>Shopee is my first kaggle competition from start to finish, and also the competition I have worked hardest on. In this challenge, images, text and post-processing are all important factors for achieving a good result, so it is difficult to optimise all of them at once. Fortunately, my teammates <a href="https://www.kaggle.com/meliao" target="_blank">@meliao</a> <a href="https://www.kaggle.com/ywenlu" target="_blank">@ywenlu</a> <a href="https://www.kaggle.com/dandingclam" target="_blank">@dandingclam</a> <a href="https://www.kaggle.com/mathurinache" target="_blank">@mathurinache</a> provided great help.</p>
<p>Also, congratulations to <a href="https://www.kaggle.com/ywenlu" target="_blank">@ywenlu</a> for becoming a competition master !</p>
<h2>1. Summary</h2>
<p>Our main solution is based on chris' famous baseline and ragnar's arcface training notebook. <br>
We spent more than half of the time focusing on optimizing a single image model and TF-IDF, but got stuck at around 0.740.</p>
<p>Only two weeks before the end of the competition did we finally discover the importance of post-processing: by inspecting our local matches, we found that nearly 4000 rows did not match any other product, a case that does not exist in the target.</p>
<p>With the help of post-processing, we achieved 0.763 using only a B3 and TF-IDF. Then we ensembled other models to achieve 0.771 on public.</p>
<p><img src="https://i.postimg.cc/fL7ywrRK/shopee-architect.png" alt=""></p>
<h2>2. Image</h2>
<p>Based on ragnar's stratified dataset, we used 1/3 of the data to train and checked the val loss on the other 2/3. This helped us quickly identify useful augmentations and parameters. I published my results here: <a href="https://www.kaggle.com/vicioussong/shopee-image-benchmark" target="_blank">Shopee Image Benchmark</a></p>
<p>Below the models we used at final stage :</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>Image size</th>
<th>Augmentations</th>
<th>Val loss</th>
</tr>
</thead>
<tbody>
<tr>
<td>efficientnet b3</td>
<td>512</td>
<td>central_crop<br>random_crop<br>random_flip_left_right</td>
<td>14.2</td>
</tr>
<tr>
<td>eca_nfnet_l1</td>
<td>512</td>
<td>RandomResizedCrop<br>CenterCrop<br>RandomBrightnessContrast<br>OneOfBlur</td>
<td>14.31</td>
</tr>
<tr>
<td>efficientnet b4 cosine LR</td>
<td>384</td>
<td>central_crop<br>random_crop<br>random_flip_left_right<br>random_hue<br>random_saturation<br>random_contrast<br>random_brightness</td>
<td>14.34</td>
</tr>
<tr>
<td>efficientnet b4</td>
<td>456</td>
<td>central_crop<br>random_crop<br>random_flip_left_right<br>random_flip_up_down<br>random_hue<br>random_saturation<br>random_contrast<br>random_brightness<br>mosaic_augmentation</td>
<td>14.3</td>
</tr>
</tbody>
</table>
<h2>3. Text</h2>
<h3>3.1 TF-IDF</h3>
<p><strong>Pre-processing</strong></p>
<ul>
<li>remove emojis, punctuations, e-commerce stop-words (like 'ready', 'stock', 'free', 'gift', 'jaring', 'sabun', 'siap', 'kirim', 'diskon', I spent 3 hrs to identify them manually)</li>
<li>lemmatization using English & Indonesian lemmatizers (the Indonesian lemmatizer can be found here <a href="https://www.kaggle.com/vicioussong/shopee-simplemma/" target="_blank">shopee_simplemma</a>, source: <a href="https://github.com/adbar/simplemma/" target="_blank">https://github.com/adbar/simplemma/</a>)</li>
<li>split number and unit : 150ml => 150 ml</li>
<li>split attached n-grams : fresh care => freshcare</li>
</ul>
<p><strong>Feature</strong><br>
I am not sure this is a good choice but I did use all features including single caracters : </p>
<pre><code>TfidfVectorizer(token_pattern=r"(?u)\b\w+\b", binary=True)
</code></pre>
<h3>3.2 Sbert</h3>
<p>Used public kernel <a href="https://www.kaggle.com/tanulsingh077/reaching-0-612-with-text-only-shopee" target="_blank">Reaching 0.612 with Text Only : Shopee</a></p>
<h2>4. Post-processing</h2>
<h3>4.1 Strategies</h3>
<ol>
<li>Recall 50 neighbours using KNN 51 and filter using the rules below :<ul>
<li>if img_score > th_top</li>
<li>if txt_score > th_top</li>
<li>if img_score > th_img and txt_score > th_min</li>
<li>if txt_score > th_txt and img_score> th_min</li>
<li>if avg(img_score, txt_score ) > th_mean</li></ul></li>
<li>Recall the closest neighbour using KNN 2 based on img_score + txt_score. If a row predicts only itself, we replace its result with this nearest neighbour (see the sketch after this list).</li>
<li>Closest neighbour propagation<ul>
<li>Considering the great improvement of step 2, we propagate the new matched items</li></ul></li>
<li>Label propagation with a strict threshold KNN 51 on rows where nb_match=2<ul>
<li>if A-B and B-C then A-B-C</li></ul></li>
</ol>
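<p>A minimal sketch of step 2 above (hypothetical helper and variable names): if a row matched only itself after thresholding, its single nearest neighbour by the combined image + text score is forced in.</p>
<pre><code>import numpy as np

def fill_singletons(preds, combined_sims, posting_ids):
    """preds: list of sets of posting_ids; combined_sims: (N, N) img_score + txt_score."""
    for i, p in enumerate(preds):
        if len(p) <= 1:                                        # the row only matched itself
            order = np.argsort(-combined_sims[i])
            nearest = order[1] if order[0] == i else order[0]  # skip the row itself
            p.add(posting_ids[nearest])
    return preds
</code></pre>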
<h3>4.2 OOM problem solving</h3>
<p>Instead of concatenating image embeddings with text embeddings using numpy, we converted them to torch tensors and used the following function to calculate the cosine similarity, avoiding the OOM problem.</p>
<pre><code>def text_knn_2(df, tensor1, tensor2, K, th, chunk=128):
    out_preds = []
    for i in tqdm(list(range(0, df.shape[0], chunk)) + [df.shape[0] - chunk]):
        arr = tensor1[i : i + chunk] @ tensor1.T + (tensor2[i : i + chunk] @ tensor2.T)
        if len(df) > 3:
            indices = torch.nonzero((arr > th) & (arr >= arr.sort(descending=True).values[:, K - 1].reshape(arr.shape[0], -1)))
        else:
            indices = torch.nonzero(arr > th)
        preds = dict()
        for k in range(arr.shape[0]):
            preds[k] = []
        for ind in range(indices.size(0)):
            preds[indices[ind, 0].item()].append(indices[ind, 1].item())
        out_preds.extend([(df.iloc[k].posting_id, df.iloc[v].posting_id.tolist()) for k, v in preds.items()])
    return out_preds[:df.shape[0]]
</code></pre>
<p>Thanks for all Kagglers, I have learnt a lot in this competition.</p>
<p>Hope to see you again in the next journey 😏</p>
<p>Thanks ! Let's keep going.</p>
<p>Thanks. Yes, I concat them to a longer vector.</p> |
Shopee - Price Match Guarantee | Public 16th / Private 10th Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Shopee - Price Match Guarantee
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I created the dataset by outer join of each results of KNN (k=50). </p>
<p>Thanks to all the participants for their hard work. I am new to this field and only scored about 0.7 for the first 50 subs or so. However, by studying similar competitions and papers from the past, I was able to win a gold medal! This competition was also a good opportunity for me to learn about engineering because of the tight limits on inference time and memory.</p>
<h1>Overview</h1>
<p><img src="https://user-images.githubusercontent.com/24289602/117750721-d7a46300-b24e-11eb-8056-a97f2fbd6cd5.png" alt="shopee"></p>
<h1>Details</h1>
<h2>image models</h2>
<p>First, I cleaned up the labels, since images that look almost identical can become noisy when used to train an image model. In other words, I re-labeled items whose similarity was higher than a certain level as the same label. <br>
Then, I trained each model with ArcFace and extracted the embeddings.</p>
<h2>title models</h2>
<p>After processing the escape string, I trained <code>indobenchmark/indobert-base-p2</code>, <code>indobenchmark/indobert-large-p2</code> and TFIDF. <code>indobenchmark/indobert-base-p2</code> was better than <code>bert-base-uncased</code>.</p>
<h2>Query Expantion</h2>
<p>This competition guaranteed that every product would have at least one other item with the same label.<br>
Therefore, αQE with n=2 and normalized similarity was very effective. </p>
<h2>MetaModel</h2>
<p>After training the above models and applying QE, I trained a LightGBM model on similarity features of the pairwise dataset.<br>
However, running LightGBM prediction normally in the kernel is very time consuming, so I used cuML's ForestInference for prediction. That was insanely fast.<br>
Finally, I reapplied αQE using the predictions of LightGBM (so-called Discriminative QE). <br>
As it turns out, this DQE wasn't very useful.</p>
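<p>A hedged sketch of the ForestInference step (the file name and feature matrix below are illustrative; check the cuML docs for the exact signature of your version):</p>
<pre><code>import numpy as np
from cuml import ForestInference

# load a LightGBM model previously saved with booster.save_model("lgbm_pairs.txt")
fil = ForestInference.load("lgbm_pairs.txt", output_class=True, model_type="lightgbm")

pair_features = np.random.rand(1000, 8).astype(np.float32)  # stand-in for the similarity features
proba = fil.predict_proba(pair_features)                    # runs on GPU, very fast
keep = proba[:, 1] > 0.5                                     # keep pairs predicted as matches
</code></pre>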
<h2>Threshold searching</h2>
<p>At first, I used only the similarity of the abcde' embedding, and the score was relatively low (even though the abcde' embedding had a high CV).<br>
So I explored the cause. In the image below, you can see that some of the predictions of abcde' (cnn_bert_distance) are not as confident as when using abc' (cnn_all_distance). <br>
I combined multiple predictions by carefully observing the scatter plot in this way.<br>
<img src="https://user-images.githubusercontent.com/24289602/117752623-0ff97080-b252-11eb-9930-729d78499edc.png" alt="image"></p>
<p>Also, by experimenting locally, I confirmed that tuning the threshold on the public LB does not hurt the results on private, so I adjusted the threshold on public.</p>
<h2>Didn't work</h2>
<ul>
<li>Some diffusion technique.</li>
<li><a href="https://filipradenovic.github.io/publications/grb20.pdf" target="_blank">Attention-based query expantion</a>.</li>
<li>MLP trained by embedding based pairwise dataset.</li>
<li>Unsupervised model (like CLIP)</li>
<li>Arcface with triplet loss</li>
</ul>
<hr>
<h2>Late Submission</h2>
<p>I didn't try this during the competition because I only came up with the idea of using LightGBM a day or two before the deadline, but I got the following scores with a LightGBM-only prediction (lgbm > 0.8). So, all I really needed was the meta-model.<br>
<img src="https://user-images.githubusercontent.com/24289602/117801623-8072b280-b28f-11eb-90de-9c27e432f6ee.png" alt="late sub"></p>
<p>Thank you Biel! Your solution is unique and great.<br>
If we had put a team together, we could have finished higher.</p>
<p>I think chapter 2 of this paper will be helpful.<br>
<a href="https://arxiv.org/pdf/2007.08019.pdf" target="_blank">https://arxiv.org/pdf/2007.08019.pdf</a></p> |
Shopee - Price Match Guarantee | Public 39th / Private 37th Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Shopee - Price Match Guarantee
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First Thanks to Kaggle and hosts for holding this exciting competition! I've learned many things in this competition. </p>
<p>my solution</p>
<p><strong>overview</strong><br>
<img src="https://i.imgur.com/PvFbU5D.png" alt=""><br>
<img src="https://i.imgur.com/WfGXffi.png" alt=""></p>
<p>I found that the best threshold differs depending on the number of matched items in the prediction.</p>
<p>CV best threshold<br>
1-10 threshold 0.36<br>
11-20 threshold 0.55<br>
21-30 threshold 0.65<br>
31-40 threshold 0.70<br>
41-50 threshold 0.70</p>
<p>When predicting with only one model,<br>
about 40% of rows are predicted to have only 1 match.</p>
<p>I worked on reducing the number of rows with only a single prediction as much as possible,<br>
using the method from this <a href="https://www.kaggle.com/takusid/how-to-make-prediction" target="_blank">notebook</a>.</p>
<p>All flows have published this <a href="https://www.kaggle.com/takusid/37th-place-solution-version-76?scriptVersionId=62411802" target="_blank">notebook</a>.<br>
<a href="https://www.kaggle.com/takusid/37th-place-solution-version-76?scriptVersionId=62411802" target="_blank">https://www.kaggle.com/takusid/37th-place-solution-version-76?scriptVersionId=62411802</a></p> |
Shopee - Price Match Guarantee | Public 56th / Private 57th Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Shopee - Price Match Guarantee
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for your comment!<br>
We separately got the distances of each model by using nearest neighbors algorithm, instead of concatenating or averaging the feature maps before getting the distances.<br>
In fact, my teammate yyama(<a href="https://www.kaggle.com/yosukeyama" target="_blank">@yosukeyama</a>), not I, implemented this ensemble algorithm, and he can explain more clearly, so he will reply to this thread later!! </p>
<p>Thank you for your feedback!<br>
The reason we used cosine similarity for TF-IDF is simply that the public notebook (made by Chris) used cosine similarity. I think cosine similarity and cosine distance have no critical difference in theory: if the value is close to 1, the former means the two products are very similar, while the latter means the opposite. We could change it, but then we would have to search for an ideal threshold on CV and LB again, so we decided to leave it as it was.</p>
<h2>Acknowledgements</h2>
<p>Thanks to Kaggle and hosts for holding this exciting competition! I've learned many things in this competition. Also thanks to all participants especially my teammate.</p>
<h2>Overview</h2>
<p><img alt="スクリーンショット 2021-05-11 15 44 51" src="https://user-images.githubusercontent.com/70050531/117770433-de8f9d80-b26f-11eb-9cd7-35c8dc92f5f7.png"></p>
<h2>Models</h2>
<p><strong>coming soon…</strong></p>
<h2>Threshold</h2>
<p>In this competition, every item has at least one pair, so our model should choose at least one other product as a pair instead of choosing none. Thus we relaxed (lowered) the threshold when a model predicted no other item as a pair, like this (this is the TF-IDF case):</p>
<pre><code>for k in range(b - a):
    IDX = cupy.where(cts[k,] > thresh)[0]
    if len(IDX) == 1:
        IDX = cupy.where(cts[k,] > (thresh - 0.04))[0]
        if len(IDX) == 1:
            IDX = cupy.where(cts[k,] > (thresh - 0.08))[0]
    o = df.iloc[cupy.asnumpy(IDX)].posting_id.values
    preds.append(o)
</code></pre>
<h2>Ensemble</h2>
<p>We ensembled the predictions of each model by averaging the distances instead of concatenating or averaging the embeddings.<br>
The reason is that each model places its embeddings in a high-dimensional space based on its own decision criteria, so concatenating or averaging the raw embeddings would disrupt that arrangement. Distance, in contrast, is an absolute measure (coordinates are relative), semantically closest to a probability. </p>
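<p>A minimal sketch of this distance-averaging ensemble (illustrative; it assumes a full pairwise matrix fits in memory):</p>
<pre><code>import numpy as np

def cosine_dist(emb):
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    return 1.0 - emb @ emb.T

def ensemble_distance(embedding_list):
    """Average the per-model cosine-distance matrices instead of merging the embeddings."""
    return np.mean([cosine_dist(e) for e in embedding_list], axis=0)

# matches of row i: np.where(ensemble_distance([img_emb, txt_emb])[i] < threshold)[0]
</code></pre>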
<h2>Postprocess</h2>
<p>As noted above, every item has at least one pair, so our model should choose at least one other product as a pair instead of choosing none.<br>
We applied two post-processing steps for items that had no pair after the first stage. First, we predicted extra pairs for unmatched items using TF-IDF after the first-stage prediction (in fact, we would have liked to use the image models and BERT as well, but we faced a submission error that we could not solve in time). Second, we interpolated pairs between items: for example, suppose items A and B are a pair, and our model chooses B as the pair of A but nothing as the pair of B. In this case, A must be added as a pair of B. We applied this interpolation to all of the data.</p>
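<p>A small sketch of the interpolation step (hypothetical data structure: a dict from posting_id to its set of predicted posting_ids). It simply makes the match relation symmetric:</p>
<pre><code>def symmetrize(preds):
    """If A predicts B, make sure B also predicts A."""
    for a, matches in list(preds.items()):
        for b in matches:
            preds.setdefault(b, set()).add(a)
    return preds
</code></pre>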
<h3>Worked</h3>
<ul>
<li>SAM optimizer</li>
<li>fine-tuning without augmentation (paper: <a href="https://arxiv.org/pdf/1909.09148.pdf" target="_blank">https://arxiv.org/pdf/1909.09148.pdf</a>)</li>
<li>distance averaging</li>
<li>sbert</li>
<li>indonesian stopwords for Tfidf</li>
</ul>
<h3>Not worked</h3>
<ul>
<li>Dual attention head (paper: <a href="https://arxiv.org/pdf/1809.02983.pdf" target="_blank">https://arxiv.org/pdf/1809.02983.pdf</a>)</li>
<li>Focal loss(and other loss functions like symmetric cross-entropy loss)</li>
<li>OCR</li>
<li>increasing the image size larger than 512×512</li>
<li>text augmentation</li>
<li>text preprocess</li>
<li>text translate</li>
</ul>
<h3>inference code</h3>
<p><a href="https://www.kaggle.com/dolphin15/shopee-ensemble-infer-new-models-tfidfemsemble?scriptVersionId=62447650" target="_blank">https://www.kaggle.com/dolphin15/shopee-ensemble-infer-new-models-tfidfemsemble?scriptVersionId=62447650</a></p>
<p>If you have a question about this solution, feel free to ask!<br>
Thank you for reading!!!<br>
(I'll add information about models soon.)</p> |
Shopee - Price Match Guarantee | Silver Medal (80th place) after 6 years of procrastination | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Shopee - Price Match Guarantee
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hey Folks,</p>
<p>Just wanted to express my gratitude to the Kaggle platform for hosting such an interesting competition. <br>
Kudos to the cloud ops team (53,632 entries: GPU-enabled VMs with almost negligible downtime is a big thing).</p>
<p>I have been following Kaggle for many years now. Over the years I would regularly check kernels and discussions for new ideas and to keep up with the latest research. But I had never imagined that someday I would be actively participating in a competition and end up in a medal position. You see, I almost always have a consulting project besides my regular job, so I never had the required bandwidth.</p>
<p>Anyways, A big shout out the the Kaggle community and all the contributors for sharing cool ideas. My favorite being the ArcFace and Meta Embeddings .</p>
<p>Special shoutout to <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> <a href="https://www.kaggle.com/ragnar123" target="_blank">@ragnar123</a> <a href="https://www.kaggle.com/tanulsingh077" target="_blank">@tanulsingh077</a></p>
<p>Link to kernel : <a href="https://www.kaggle.com/rahul15495/meta-and-mean?scriptVersionId=61590910" target="_blank">https://www.kaggle.com/rahul15495/meta-and-mean?scriptVersionId=61590910</a></p> |
Shopee - Price Match Guarantee | Threshold solution (105th) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Shopee - Price Match Guarantee
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks.<br>
I counted the items that belonged to each label group in the training data and used that as the objective variable.</p>
<p>I expected the test data to follow a similar trend, tested that assumption, and it contributed to the increase in score.</p>
<p>First I would like to thank the competition host and all Kagglers.<br>
It was a very difficult competition for me, but I enjoyed working on it.</p>
<p>I don't have enough score to share my solution, but I used a solution that others haven't shown, so I'd like to share a little about it.</p>
<p>One of the reasons for the decline in the f1 score was the large number of false positives.<br>
In order to suppress this, I predicted the number of items in the label group and adjusted the threshold so that it would not be exceeded.</p>
<p><img src="https://i.imgur.com/P7fqmFu.png" alt="Threshold solution"></p>
<p>I implemented a gradual widening of the threshold so as not to exceed the predicted number of items.</p> |
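<p>A minimal sketch of this idea (hypothetical names and step sizes): widen the similarity threshold step by step, but stop before the number of matches exceeds the predicted group size.</p>
<pre><code>import numpy as np

def capped_matches(sims_row, predicted_count, start=0.8, stop=0.5, step=0.05):
    best = np.array([np.argmax(sims_row)])     # at least the item itself
    for th in np.arange(start, stop, -step):
        idx = np.where(sims_row > th)[0]
        if len(idx) > predicted_count:         # widening further would exceed the predicted size
            break
        best = idx
    return best
</code></pre>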
Shopee - Price Match Guarantee | [129th solution] Fork great code and Add a few lines of code [really simple trick] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Shopee - Price Match Guarantee
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to a great pioneer's notebook and a little inspiration (and with a little luck), I got bronze medal. </p>
<p>These 2 steps are what I did.</p>
<h1>1. Fork Code</h1>
<p>Fork <a href="https://www.kaggle.com/prashantchandel1097/ensemble-of-multiple-models-lb0-733" target="_blank">this wonderful code</a> (thank you very much <a href="https://www.kaggle.com/prashantchandel1097" target="_blank">@prashantchandel1097</a> 🙏🙏. Your code was really helpful.)</p>
<h1>2. Add post processing code to avoid obvious mistake</h1>
<p>In this competition, each group has 2 or more items.<br>
Therefore, if the predicted group size is 1, the threshold should be eased to find an item from the same group.<br>
So I added code like this:</p>
<pre><code># Main prediction
df["image_predictions"] = get_image_predictions(df_image, image_embeddings1, threshold=0.3)

# Supplemental predictions with eased thresholds (normally not used)
df["image_predictions2"] = get_image_predictions(df_image, image_embeddings1, threshold=0.4)
df["image_predictions3"] = get_image_predictions(df_image, image_embeddings1, threshold=0.5)

# This function is called when the answer size is 1, to avoid leaving an item alone.
def bigger_than_one(row):
    ans = row["matches"]
    cands = ["image_predictions2", "image_predictions3"]
    for col in cands:
        if len(ans) > 1:
            return ans
        else:
            ans = ans | set(row[col])
    return ans

# apply
df["matches"] = df.apply(bigger_than_one, axis=1)
</code></pre> |
Shopee - Price Match Guarantee | [31st Place] Object Detection approach | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Shopee - Price Match Guarantee
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks for your support <a href="https://www.kaggle.com/alvarozaera" target="_blank">@alvarozaera</a> ! Happy to read that you found it interesting </p>
<p>Thank you for your kind comment, Rasheed!</p>
<p>Hi!</p>
<p>Thank you for your question, it is a very good one! </p>
<p>Yes, cropped images were incorporated to the "images" pipeline. This means that embeddings were computed and new relations were created if the distance between these embeddings was below a threshold; just the same process (high-level described) followed for the rest of the images. </p>
<p>Thanks to this, we were able to catch new relations that were being missed because of background noise, as the example pictures in the Topic show. If you would like to see more examples of images cropped, you can check <a href="https://www.kaggle.com/atmguille/shopee-object-detection-crop-images/output" target="_blank">the output of the notebook</a> referenced at the end of the Topic, where all the bottles of the training dataset were cropped.</p>
<p>Incorporating the cropped images was a challenge, as not all elements had detected objects in their images. </p>
<p>Thanks again for your question and congrats for your bronze medal in this competition!</p>
<p>Hi there!</p>
<p>First of all, I would like to congratulate all of you for the incredible talent that was shown during this competition. Moreover, I would like to thank my teammates (<a href="https://www.kaggle.com/miguelbm" target="_blank">Miguel</a>, <a href="https://www.kaggle.com/talavantecodes" target="_blank">Pablo</a> and <a href="https://www.kaggle.com/carlosbort" target="_blank">Carlos</a>) for all the things they have taught me along the way and that resulted in a <strong>silver medal</strong> in my <strong>first ever Kaggle competition</strong>! Finally, I would like to personally congratulate my teammate <a href="https://www.kaggle.com/alelafe" target="_blank">Alejandro Lanaspa</a> for reaching the <em>Kaggle Master</em> tier after this silver medal! Having mentioned that, let's dig into the main purpose of this topic:</p>
<p>I created this topic to share one of the approaches that my team used during this competition and that I was in charge of implementing: Object Detection. As its name suggests, this approach was based on detecting objects in the image of each product, with the objective to have another item of comparison between different elements.</p>
<p>From the technical point of view, YOLOv5m was used as the model. The library where this model is implemented can be found <a href="https://github.com/ultralytics/yolov5" target="_blank">here</a>; and an effort was made to use the pretrained model offline so it was available during submissions (see <a href="https://www.kaggle.com/atmguille/yolov5-git" target="_blank">the dataset</a> that I created for that purpose). In terms of the information that this model provides, its output is not only a list of objects. In fact, the following data is produced:</p>
<ol>
<li>Names of the objects detected.</li>
<li>Confidence for each detection.</li>
<li>Coordinates of the bounding box where each object is detected.</li>
</ol>
<p>The example code that generates this data for the Shopee train set can be found in <a href="https://www.kaggle.com/atmguille/shopee-object-detection-generate-data" target="_blank">this notebook</a> (list format) and <a href="https://www.kaggle.com/atmguille/shopee-object-detection-generate-data-one-hot" target="_blank">this notebook</a> (one-hot format). I also uploaded the output of these notebooks to <a href="https://www.kaggle.com/atmguille/shopee-train-with-objects" target="_blank">this dataset</a> so any of you could directly use it without executing anything.</p>
<p>Now, let’s analyze what further details were obtained from these items:</p>
<ol>
<li>This is the obvious output, but thanks to it we could filter objects that were relevant to the competition. By default, the model tries to detect 80 different types of objects, which are listed <a href="https://www.kaggle.com/atmguille/shopee-train-with-objects?select=objects_names.txt" target="_blank">here</a>. In this list, one can find very different objects from each other, from ‘baseball bat’ to ‘elephant’, that are clearly unrelated to what should be detected in this case. After observing the results and computing different statistics (number of different elements with an object, number of different groups, percentage of completeness of a group with an object… See <a href="https://www.kaggle.com/atmguille/shopee-object-detection-analysis" target="_blank">this notebook</a> for the complete analysis) it was determined that two of the most relevant ones were ‘person’, in images of people advertising clothes, and ‘bottle’, in cosmetics and food items.</li>
<li>Although at the beginning we thought of including objects that the model was very sure of, we detected that it was missing some important objects that were detected with a lower confidence level. Also, so as to have as many options as possible to select from in the algorithm that is described at the end of the following point, we were interested in setting a low confidence level that obviously outputs more detected objects.</li>
<li>The following information was obtained from the coordinates:<ul>
<li>By having these coordinates, what we really have are the four vertices of a rectangle. Thanks to this, one can estimate the size of the detected object by computing the area of this rectangle. After that, bigger objects could be taken into account with higher priority.</li>
<li>Another factor to consider when assigning importance to the different objects in an image is how centered they are. In fact, the products that are advertised (the ones we are more interested in) are normally located in the middle of the image. Based on this, it would be interesting to compute how far an object is from being at the middle of the picture. Can that be done with the available information? Sure! In a more graphical way, the value that we are looking for is the length of the red line shown below (a small sketch computing the box area and this centre distance follows the list): <img src="https://i.postimg.cc/wBf6YVnQ/distance.png" alt=""></li>
<li>Once we have an idea of how big objects are and how centered they are, apart from using this information to prioritize some objects over others; it could be used to remove background noise, focusing just on the advertised product. Based on this idea, an approach that we took was to crop the most important detected object from the image. To select this important object, the biggest and the most centric object were obtained, and a simple algorithm selected the best option considering how centric was the biggest object and how big was the most centric one, with prevalence for the most centric. To get an idea on how useful this crop could be, see the following examples obtained from the Shopee dataset where the bounding boxes are colored in red. Which equality should be easier to check, the first or the second one? <br>
<img src="https://i.postimg.cc/j5DRxfb0/q1.png" alt=""> <br>
The answer is clear, isn’t it? If you are not convinced yet, see the following example with totally different backgrounds: <img src="https://i.postimg.cc/7LYxRctL/q2.png" alt=""><br>
<em>Note</em>: the cropped images were not obtained by hand but by the algorithm explained before. The code that actually does these crops can be found in <a href="https://www.kaggle.com/atmguille/shopee-object-detection-crop-images" target="_blank">this notebook</a></li></ul></li>
</ol>
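<p>As referenced in the list above, a small illustrative sketch (hypothetical box format: pixel coordinates x1, y1, x2, y2) of the two quantities used to rank detected objects, the box area and the distance of its centre from the image centre:</p>
<pre><code>import math

def box_area_and_center_dist(box, img_w, img_h):
    x1, y1, x2, y2 = box
    area = max(0.0, x2 - x1) * max(0.0, y2 - y1)                   # size of the detected object
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0                      # centre of the bounding box
    center_dist = math.hypot(cx - img_w / 2.0, cy - img_h / 2.0)   # the "red line" above
    return area, center_dist
</code></pre>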
<p>Hope you enjoyed this post and found my high-level description interesting. I will be very happy to answer any question or comment anything you would like. Furthermore, I would love to also read any feedback that you could have, since as I mentioned above this is my first experience with Kaggle and I am willing to improve! 😄</p>
<p>Thanks for your feedback Victor, it's very much appreciated</p>
<p>Thank you <a href="https://www.kaggle.com/ericmorales" target="_blank">@ericmorales</a>! Looking forward to working together</p>
<p>Thanks for your great feedback <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>! </p>
<p>This great result would not have been possible without the great team work and guidance of my outstanding colleagues. We all did a great job!</p>
<p>Looking forward to the next challenge!</p>
<p>Hi <a href="https://www.kaggle.com/davidedwards1" target="_blank">@davidedwards1</a> </p>
<p>Thank you for your comment! I am glad you liked the explanation. I also hope the notebooks and datasets that I published could be useful for you to see how I implemented this approach.</p>
<p>Should you have any doubts, please do not hesitate to contact me! I will be very happy to help you.</p>
<p>Thank you for your support Miguel! Glad you liked it :)</p> |
Shopee - Price Match Guarantee | simple trick to silver for any open source code! | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Shopee - Price Match Guarantee
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>infer the train dataset,and check the match numbers, and you'll find the distribution are very different from the real match numbers</p>
<p>sorry for a mistake:pick 2 when enlarged the threshhold…</p>
<p>for both image and text, what u need to do is enlarge and decrease the threshold.</p>
<p>for image, maybe 0.36 is a good threshold, but actually:</p>
<p><strong>1.</strong>there are still <strong>lots of noise for the matches large than 1</strong> . so, what u need to do is <strong>decreased the threshold</strong> and run again (as for me, decreased to 0.28), and check if the matches is still large than 1. if true, then use the low threshold matches(<strong>pick 2</strong>).</p>
<p><strong>2.</strong>if the <strong>matches is 1</strong>, what u need to do is <strong>enlarge the threshold</strong> and run again (as for me, enlarged to 0.51):</p>
<p>code are like this:</p>
<pre><code>predictions = []
for k in range(df.shape[0]):
    idx_image = np.where(image_distances[k,] < 0.36)[0]
    ids_image = image_indices[k, idx_image]
    posting_ids = df['posting_id'].iloc[ids_image].values
    if len(posting_ids) > 1:
        idx_image_again = np.where(image_distances[k,] < 0.28)[0]
        ids_image_again = image_indices[k, idx_image_again]
        posting_ids_again = df['posting_id'].iloc[ids_image_again].values
        if len(ids_image_again) > 1:
            predictions.append(posting_ids_again)
        else:
            predictions.append(posting_ids)
    else:
        idx_image_recall = np.where(image_distances[k,] < 0.51)[0]
        ids_image_recall = image_indices[k, idx_image_recall]
        posting_ids_recall = df['posting_id'].iloc[ids_image_recall].values
        predictions.append(posting_ids_recall[:2])
</code></pre>
<p>For text, the 3 thresholds are 0.66 / 0.775 / 0.80, and the code looks like this:</p>
<pre><code>cts = cupy.matmul(text_embeddings, text_embeddings[a:b].T).T
for k in range(b - a):
    IDX = cupy.where(cts[k,] > 0.775)[0]
    o = df.iloc[cupy.asnumpy(IDX)].posting_id.values
    if len(o) > 1:
        IDX_again = cupy.where(cts[k,] > 0.80)[0]
        o_again = df.iloc[cupy.asnumpy(IDX_again)].posting_id.values
        if len(o_again) > 1:
            preds.append(o_again)
        else:
            preds.append(o)
    else:
        IDX = cupy.where(cts[k,] > 0.66)[0]
        o = df.iloc[cupy.asnumpy(IDX)].posting_id.values
        preds.append(o[:2])
</code></pre>
<p>With this simple trick, I tried a lot of open-source code, and every one of them improved into the silver-medal zone, <strong>on both public (0.744) and private (0.732)</strong>….</p>
Sorghum -100 Cultivar Identification - FGVC 9 | 1st Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Sorghum -100 Cultivar Identification - FGVC 9
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I didn’t use instance normalisation and AWP train.Data enhancement strategies are also slightly different.</p>
<p>Thanks to Kaggle and the hosting team for an interesting competition.</p>
<h1>summary</h1>
<p>My method is very similar to that of the third place(<a href="https://www.kaggle.com/competitions/sorghum-id-fgvc-9/discussion/328593" target="_blank">sorghum-id-fgvc-9/discussion/328593</a>).</p>
<p>The main differences are as follows:</p>
<ol>
<li>I used convnext base network as backbone.</li>
<li>I trained 5-folds models and ensembled them.</li>
<li>I combined this contest data with fgvc8data to generate a new dataset with 223 categories instead of 100.</li>
</ol>
<h1>results</h1>
<table>
<thead>
<tr>
<th>step</th>
<th>method</th>
<th>Public Score</th>
<th>Private Score</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>convb+100categories</td>
<td>0.953</td>
<td>0.952</td>
</tr>
<tr>
<td>2</td>
<td>convb+223categories</td>
<td>0.955</td>
<td>0.954</td>
</tr>
<tr>
<td>3</td>
<td>ensemble(step1+step2)</td>
<td>0.959</td>
<td>0.957</td>
</tr>
<tr>
<td>4</td>
<td>step3+pseudo label</td>
<td>0.963</td>
<td>0.962</td>
</tr>
<tr>
<td>5</td>
<td>step4+5folds</td>
<td>0.965</td>
<td>0.965</td>
</tr>
</tbody>
</table> |
Sorghum -100 Cultivar Identification - FGVC 9 | 2nd Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Sorghum -100 Cultivar Identification - FGVC 9
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We really appreciate organizers of this competition. <br>
Congratulations to all the winners!</p>
<h1><strong>Dataset</strong></h1>
<p>Only the data released in the competition of this year is used.</p>
<p>We believe that high resolution is beneficial for fine-grained classification. However, we still tested some lower resolutions because of limited computation resources. Finally, a 960 crop from 1024 is used in our best solution.</p>
<h1><strong>Algorithm</strong></h1>
<ol>
<li>RegNetY-16.0GF is used.</li>
<li>AutoAugment, RandomCrop, RandomHorizontalFlip are used.</li>
<li>Ensemble is performed as usual when different training parameters are used, such as initial learning rate, number of epochs, dropout ratio, etc…</li>
<li>Pseudo Label is also helpful to improve the accuracy iteratively (a small sketch follows this list).</li>
<li>TTA is performed during the test.</li>
<li>Dropout is used to avoid overfitting.</li>
</ol>
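<p>A hedged sketch of one iteration of the pseudo-labelling loop mentioned in point 4 (the interface and the confidence cut-off are illustrative, not the exact setup used here):</p>
<pre><code>import numpy as np

def add_pseudo_labels(model, test_images, train_images, train_labels, conf_th=0.9):
    probs = model.predict(test_images)           # (N_test, n_classes) class probabilities
    conf = probs.max(axis=1)
    pseudo = probs.argmax(axis=1)
    keep = conf > conf_th                        # keep only confident test predictions
    new_x = np.concatenate([train_images, test_images[keep]])
    new_y = np.concatenate([train_labels, pseudo[keep]])
    return new_x, new_y                          # retrain on the enlarged set, then repeat
</code></pre>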
<h1><strong>Results</strong></h1>
<table>
<thead>
<tr>
<th>Methods</th>
<th>Public Score</th>
<th>Private Score</th>
</tr>
</thead>
<tbody>
<tr>
<td>Base(RegNetY-16.0GF, 512x512)</td>
<td>85.2</td>
<td>84.1</td>
</tr>
<tr>
<td>512x512->960x960</td>
<td>92.1</td>
<td>91.9</td>
</tr>
<tr>
<td>Pseudo Label</td>
<td>95.5</td>
<td>95.1</td>
</tr>
<tr>
<td>Dropout</td>
<td>95.7</td>
<td>95.3</td>
</tr>
<tr>
<td>Ensemble</td>
<td>96.2</td>
<td>95.9</td>
</tr>
</tbody>
</table> |
Sorghum -100 Cultivar Identification - FGVC 9 | 3rd Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Sorghum -100 Cultivar Identification - FGVC 9
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for following our solutions! The full experimental setup is 45 epochs. Our final solution is consistent with the table, and most of the methods are gradually improved based on experiments. Because our time is not enough, a small part of the improved methods did not do ablation experiments carefully. </p>
<p>Our solution does not use Kaggle's notebook, the gpu we use is 3090. The specific time of a round is forgotten, and the training time is long. Because of the cosine restart learning rate used, you can run 20 epochs and the scores should be similar.</p>
<p>Congrats to all the winners.<br>
Thanks to Kaggle and the hosting team for an interesting competition. <br>
I appreciate my team member <a href="https://www.kaggle.com/SisuoLYU" target="_blank">@SisuoLYU</a> <a href="https://www.kaggle.com/haogood666" target="_blank">@haogood666</a> !</p>
<h1><strong>Dataset</strong></h1>
<p>Through dataset analysis, we found that the data in this competition presents the following classification difficulties.<br>
1 There is a high degree of inter-class visual similarity between categories, which makes fine-grained classification difficult.<br>
2 Outdoor pictures taken on different dates and at different times of the day differ greatly in lighting, and there are many overexposed images.<br>
3 The training set and the test set come from sorghum grown on two different fields, so there is a domain-adaptation problem.<br>
4 As the plants grow, their state and height change over time.<br>
Finally, our training size is 1024x1024. We converted the dataset from PNG to JPEG format, which reduces storage space. Thanks to @Mithil Salunkhe <br>
<a href="https://www.kaggle.com/competitions/sorghum-id-fgvc-9/discussion/313266" target="_blank">https://www.kaggle.com/competitions/sorghum-id-fgvc-9/discussion/313266</a> for the open-source JPEG data.<br>
It is worth mentioning that we apply image histogram equalization preprocessing for both training and testing, which alleviates the problem of overexposed images. Here, we are very grateful to @Jun-Ming Chen <br>
<a href="https://www.kaggle.com/code/leoooo333/lb-0-885-sorghum-higer-accuracy" target="_blank">https://www.kaggle.com/code/leoooo333/lb-0-885-sorghum-higer-accuracy</a> for the open-source notebook. We referenced many of their ideas and proposals.</p>
<h1><strong>Model</strong></h1>
<p><img src="https://i.postimg.cc/MKPbBNwp/3.jpg" alt=""></p>
<p>Baseline: the initial experiments were trained on 512x512 images with a ResNet50 model, CrossEntropyLoss, a cosine-restart learning-rate schedule, and the AdamW optimizer.<br>
Summary:</p>
<table>
<thead>
<tr>
<th>methods</th>
<th>scores</th>
</tr>
</thead>
<tbody>
<tr>
<td>base(resnet50,512x512)</td>
<td>0.73</td>
</tr>
<tr>
<td>histogram equalization</td>
<td>+0.03</td>
</tr>
<tr>
<td>Ibn-resnet50</td>
<td>+0.05</td>
</tr>
<tr>
<td>bnn-neck</td>
<td>+0.005</td>
</tr>
<tr>
<td>The last CNN layer stride is changed to 1</td>
<td>+0.005</td>
</tr>
<tr>
<td>Arcface(s=30,m=0.3)</td>
<td>+0.015</td>
</tr>
<tr>
<td>mixup</td>
<td>+0.01</td>
</tr>
<tr>
<td>cutmix</td>
<td>+0.005</td>
</tr>
<tr>
<td>Awp adversarial training</td>
<td>+0.01</td>
</tr>
<tr>
<td>512x512->1024x1024</td>
<td>+0.04</td>
</tr>
<tr>
<td>fgvc8data</td>
<td>+0.03</td>
</tr>
<tr>
<td>tta</td>
<td>+0.02</td>
</tr>
<tr>
<td>pseudo label</td>
<td>+0.01</td>
</tr>
<tr>
<td>total</td>
<td>≈0.957</td>
</tr>
</tbody>
</table>
<p>IBN-ResNet: IBN-Net integrates BN and IN, which can improve both the learning ability and the generalization ability of the model. The design principle is to use both IN and BN in the shallow layers of the network, and only BN in the deep layers. In this problem, since the training set and the test set come from two different geographical locations, the addition of IN greatly improves model performance. <br>
<img src="https://i.postimg.cc/K817Dfcr/20220531110258.png" alt=""></p>
<p>ArcFace: the topic of this competition is fine-grained visual classification, with high similarity between classes. Ordinary softmax loss does not explicitly optimize the feature embedding to increase intra-class similarity and inter-class separation. ArcFace adds an additive angular margin m between the feature and the target class weight, enhancing both intra-class compactness and inter-class dissimilarity (a sketch of such a head follows below).</p>
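<p>For reference, a minimal sketch of an ArcFace-style head with the s=30, m=0.3 setting from the table above (a hypothetical re-implementation, not our exact code):</p>
<pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcFaceHead(nn.Module):
    """Additive angular margin head (illustrative sketch)."""
    def __init__(self, in_features, n_classes, s=30.0, m=0.3):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(n_classes, in_features))
        nn.init.xavier_uniform_(self.weight)
        self.s, self.m = s, m

    def forward(self, features, labels):
        # cosine between L2-normalized features and class weights
        cosine = F.linear(F.normalize(features), F.normalize(self.weight))
        theta = torch.acos(cosine.clamp(-1 + 1e-7, 1 - 1e-7))
        # add the angular margin m only to the target-class logit
        target = F.one_hot(labels, num_classes=cosine.size(1)).bool()
        logits = torch.where(target, torch.cos(theta + self.m), cosine)
        return logits * self.s  # feed these scaled logits into CrossEntropyLoss
</code></pre>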
<p>AWP: adversarial training methods such as FGM and PGD are used in NLP to improve model performance, but they add the perturbation to the text embeddings. In the image domain, we chose the more general AWP (Adversarial Weight Perturbation) training, which perturbs the model weights and the inputs at the same time; this increases the robustness of the model (a simplified sketch follows).</p>
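<p>AWP implementations vary; below is a simplified, hypothetical sketch of the weight-perturbation part only (step sizes and the clipping rule are illustrative, not the exact values we used):</p>
<pre><code>import torch

class AWP:
    """Simplified Adversarial Weight Perturbation (illustrative sketch, weights only)."""
    def __init__(self, model, adv_lr=1e-3, adv_eps=1e-3, adv_param="weight"):
        self.model = model
        self.adv_lr = adv_lr        # step size of the weight perturbation
        self.adv_eps = adv_eps      # bound of the perturbation
        self.adv_param = adv_param  # only perturb parameters whose name contains this
        self.backup = {}

    def perturb(self):
        # move each selected weight a small step in the direction of its gradient
        for name, p in self.model.named_parameters():
            if p.requires_grad and p.grad is not None and self.adv_param in name:
                self.backup[name] = p.data.clone()
                grad_norm = torch.norm(p.grad)
                if grad_norm != 0 and not torch.isnan(grad_norm):
                    step = self.adv_lr * p.grad / (grad_norm + 1e-12) * (torch.norm(p.data) + 1e-12)
                    p.data.add_(step.clamp(-self.adv_eps, self.adv_eps))

    def restore(self):
        for name, p in self.model.named_parameters():
            if name in self.backup:
                p.data = self.backup[name]
        self.backup = {}

# Usage per batch (assumed training loop):
#   loss = criterion(model(x), y); loss.backward()
#   awp.perturb(); criterion(model(x), y).backward(); awp.restore()
#   optimizer.step(); optimizer.zero_grad()
</code></pre>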
<h1><strong>Others</strong></h1>
<p>Data augmentation mainly combines the characteristics of this data illumination change. In addition to mixup and cutmix, the following data augmentation:</p>
<pre><code>import cv2
import albumentations as A
from albumentations.pytorch import ToTensorV2

train_transform = A.Compose([
    # histogram equalization (CLAHE) to handle the strong illumination changes
    A.CLAHE(clip_limit=40, tile_grid_size=(10, 10), p=1.0),
    A.Resize(CFG.image_size, CFG.image_size),
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.ShiftScaleRotate(shift_limit=0.2, scale_limit=0.2, rotate_limit=20,
                       interpolation=cv2.INTER_LINEAR,
                       border_mode=cv2.BORDER_REFLECT_101, p=0.5),
    A.OneOf([A.RandomBrightness(limit=0.1, p=1), A.RandomContrast(limit=0.1, p=1)]),
    A.Normalize(),
    ToTensorV2(p=1.0),
])
val_test_transform = A.Compose([
    A.CLAHE(clip_limit=40, tile_grid_size=(10, 10), p=1.0),
    A.Resize(CFG.image_size, CFG.image_size),
    A.Normalize(),
    ToTensorV2(p=1.0),
])
</code></pre>
<p>Another thing worth noting is that we used the FGVC9 data and part of the FGVC8 training dataset. Since the image size of the FGVC8 dataset is 480x480, we stitched those images together into 1024x1024 inputs. </p>
<h1><strong>Some failed attempts</strong></h1>
<p>1 Changing the model to EfficientNet, ViT, etc.<br>
2 Model ensembling did not improve the score; possibly because the single-model score was already very high in the late stage.<br>
3 Various loss variants, such as focal loss and soft cross-entropy, did not change much.</p>
<h1><strong>Code Links</strong></h1>
<p><a href="https://github.com/xinchenzju/CV-competition-arsenal/tree/main/CVPR2022-FGVC9-top3" target="_blank">https://github.com/xinchenzju/CV-competition-arsenal/tree/main/CVPR2022-FGVC9-top3</a></p> |
Springleaf Marketing Response | Beating the Benchmark ;) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Springleaf Marketing Response
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I got a lot of requests during the competition for a benchmark script. So, as promised, here it is.... After all, its all about learning :D</p>
<p>The script will easily give you 0.80+ in Public LB. In Private LB, you can easily get a top 10% rank using this script. With some modifications, this script gives top 30 rank out of 2200+ competitors.... not bad ha?! ;)</p> |
Springleaf Marketing Response | Solution Sharing | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Springleaf Marketing Response
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations Asian Ensemble, .baGGaj. and Merging the Mundane and Magic teams. Congrats to all other top finishers as well. Really interested in knowing the top solutions. </p>
<p>We have tried quite a few algorithms but none of them helped when we tried to blend with GBMs. So finally we just blended three GBMs together in our model. I know from the forum posts that even a single GBM can take us well above 0.8 in the public LB. Eager to know what went (feature engineering and parameters) into such GBM models. Also will be very helpful to know what other algorithms gave a good boost alongside GBM. </p>
<p>Thanks to all in advance for sharing your solutions.! </p>
<p>Happy Kaggling :) </p> |
Springleaf Marketing Response | sharing the submission files | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Springleaf Marketing Response
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>[quote=Alexander Larko;96754]</p>
<p>Sylvanas2013@</p>
<p>I think it is permissible! And ready to share files! My e-mail address - [email protected]</p>
<p>[/quote]</p>
<p>I simply applied a harmonic average with your file and got 0.80167 on the private board, a slight improvement!</p>
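<p>For anyone curious, a harmonic average of two submission files can be done in a few lines (a hypothetical sketch; file and column names are assumed):</p>
<pre><code>import pandas as pd

a = pd.read_csv("submission_a.csv")   # placeholder file names
b = pd.read_csv("submission_b.csv")

blend = a.copy()
# harmonic mean of the two predicted probabilities
blend["target"] = 2 * a["target"] * b["target"] / (a["target"] + b["target"])
blend.to_csv("blend_harmonic.csv", index=False)
</code></pre>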
<p>[quote=Comfortably Numb;96758]</p>
<p>More than that, would like to learn from your approaches! :)</p>
<p>[/quote]
sure, I will share my experience later today, although my ranking is not at the very top :)</p>
<p>Thanks Alexander!
I have sent you an email with my file attached :)</p>
<p>Could we share our submission files with each other?
I am curious what would happen if we made a simple combination of the two teams' results. I don't know whether it is allowed to share files here.</p>
<p>@Michael
with yours, I got 0.80435 on PB and would rank first.
Thanks!</p>
<p>@Bibimoune
ensembling yours, Alexander's, and mine, I got 0.80398 on the PB, which would rank us 2nd!
It is just a very simple average, without weight tuning. What a pity we didn't merge!</p> |
Stable Diffusion - Image to Prompts | 11th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stable Diffusion - Image to Prompts
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>P.S. <a href="https://www.kaggle.com/yoichi7yamakawa" target="_blank">@yoichi7yamakawa</a> became Competitions Grandmaster through this competition. Congrats!!!</p>
<p>Thanks!<br>
I'm sorry, but our dataset is not organized well (some are missing, duplicated, randomly selected, etc.) and is too large to upload (~3TB)… I updated the description in more detail and added a link to each dataset, so please take a look.</p>
<p>Thanks to the Kaggle staff and host for organizing the competition and congratulations to the winners. I appreciate my teammates <a href="https://www.kaggle.com/charmq" target="_blank">@charmq</a> and <a href="https://www.kaggle.com/yoichi7yamakawa" target="_blank">@yoichi7yamakawa</a>!</p>
<h2>Overview</h2>
<p>We took the approach of training the image model to directly predict the target embeddings. For almost the entire duration of the competition, we used the image encoder of OpenCLIP-ViT/H as a backbone, which is the counterpart of the text encoder used in Stable Diffusion v2. In this competition, it was much stronger than the usual ImageNet pre-trained models, such as EfficientNet or ConvNeXt. However, in the last 1-2 days we realized that OpenCLIP-ViT/bigG and BLIP-2 worked better, but we didn't have enough time to tune and train them fully. Our final submission was an ensemble of OpenCLIP-ViT/H, OpenCLIP-ViT/g, OpenCLIP-ViT/bigG, and BLIP-2.</p>
<h2>Dataset</h2>
<p>We downloaded and generated 4.5M+ images in total and used a part of them as a validation set. The validation scores correlated well with LB scores, although they were much higher (0.7+). The datasets we used are listed below. We selected prompts whose cosine similarities to each other were low. Also, we randomly concatenated <a href="https://github.com/krea-ai/open-prompts/tree/84d1628e7b3d97a45d237311579faf39692f2a02/modifiers" target="_blank">modifiers from Open Prompts</a> to a part of the dataset.</p>
<ul>
<li><a href="https://poloclub.github.io/diffusiondb/" target="_blank">DiffusionDB</a></li>
<li><a href="https://huggingface.co/Gustavosta/MagicPrompt-Stable-Diffusion" target="_blank">MagicPrompt-Stable-Diffusion</a></li>
<li><a href="https://cocodataset.org/#home" target="_blank">Microsoft COCO captions</a></li>
<li><a href="https://www.kaggle.com/competitions/stable-diffusion-image-to-prompts/discussion/398529" target="_blank">gpt_generated, hardcoded (Kaggle)</a></li>
<li><a href="https://ai.google.com/research/ConceptualCaptions/" target="_blank">Conceptual Captions</a></li>
<li><a href="https://nocaps.org/" target="_blank">nocaps</a></li>
<li><a href="https://shannon.cs.illinois.edu/DenotationGraph/" target="_blank">Flickr30k</a></li>
</ul>
<p>We first used the <a href="https://github.com/Stability-AI/stablediffusion/blob/cf1d67a6fd5ea1aa600c4df58e5b47da45f6bdbf/scripts/txt2img.py" target="_blank">official script</a> and mainly used the Diffusers library to generate images. The default parameter of <code>guidance_scale</code> differed from the official script, but we used the default parameters since it did not affect the performance a lot.</p>
<h2>What worked</h2>
<p>The backbones we used can only take images of (224, 224). We split a (448, 448) image into 4x (224, 224) patches and concatenated each patch's (+ the full image's) embeddings. This improved performance but slowed down training and inference by about 5x, so we used it together with normal models that take (224, 224) input.</p>
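<p>A minimal sketch of this tiling idea (illustrative only; the backbone is a placeholder, and resizing the full image down to 224 for the fifth embedding is an assumption):</p>
<pre><code>import torch
import torch.nn.functional as F

def tiled_embeddings(backbone, images_448):
    """images_448: (B, 3, 448, 448) -> concatenated embeddings of 4 tiles + the resized full image."""
    tiles = [
        images_448[:, :, :224, :224], images_448[:, :, :224, 224:],
        images_448[:, :, 224:, :224], images_448[:, :, 224:, 224:],
    ]
    full = F.interpolate(images_448, size=(224, 224), mode="bilinear", align_corners=False)
    feats = [backbone(x) for x in tiles + [full]]   # each (B, D)
    return torch.cat(feats, dim=1)                  # (B, 5 * D), fed into the 384-dim regression head
</code></pre>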
<p>The following small techniques worked a lot to fine-tune large models.</p>
<ul>
<li>freezing of first layers</li>
<li>layer-wise learning rate decay</li>
<li>pre-trained weight decay</li>
</ul>
<p>Also, contrastive loss using text encoders worked for CLIP models.</p>
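<p>To illustrate the freezing and layer-wise learning rate decay mentioned in the list above, here is a hedged sketch of how the parameter groups could be built (it assumes a timm ViT-like model with <code>.blocks</code> and <code>.head</code>; the decay factor and cutoffs are illustrative):</p>
<pre><code>def build_param_groups(model, base_lr=1e-5, decay=0.8, n_frozen=6):
    """Freeze the first blocks and give earlier blocks smaller learning rates."""
    blocks = list(model.blocks)
    n = len(blocks)
    groups = [{"params": model.head.parameters(), "lr": base_lr}]
    for i, block in enumerate(blocks):
        if i < n_frozen:
            for p in block.parameters():
                p.requires_grad = False          # freeze the first layers entirely
        else:
            # deeper blocks (larger i) get learning rates closer to base_lr
            groups.append({"params": block.parameters(), "lr": base_lr * decay ** (n - 1 - i)})
    return groups  # patch/pos embeddings and norms omitted for brevity

# optimizer = torch.optim.AdamW(build_param_groups(model), weight_decay=1e-2)
</code></pre>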
<h2>What did not work</h2>
<ul>
<li>Make use of Stable Diffusion's weight, such as encoder, decoder, unet</li>
<li>Training from Stable Diffusion's latent</li>
<li>Image2text</li>
<li>Auxiliary loss, such as CLIP text embeddings or prompt prediction</li>
<li>Decrease the step size in image generation (we thought it might work as data augmentation)</li>
<li>knn with trained models</li>
<li>CLIP Interrogator</li>
</ul>
<h2>Acknowledgement</h2>
<p>We deeply acknowledge great OSS such as PyTorch, PyTorch Lightning, Hugging Face, etc. We would also like to appreciate Preferred Networks, Inc for allowing us to use computational resources.</p> |
Stable Diffusion - Image to Prompts | 12th place solution - GPUs go Brrr | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stable Diffusion - Image to Prompts
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks <a href="https://www.kaggle.com/trasibulo" target="_blank">@trasibulo</a> !</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3076212%2Fcd59e11f814393fda802c8d225b02635%2FKaggle%20Days%20post-competition.001.jpeg?generation=1684439846271607&alt=media" alt=""></p>
<p>YAY this is my first solo gold medal! Thanks Kaggle for organizing this! As always it was a fun but demanding competition. Also huge thanks to my employer, <a href="http://freepik.com" target="_blank">freepik.com</a> for the compute. I would have never made it with my own equipment!</p>
<p>My strategy all along was to amass as many images and train as many models as possible in the 3-month span of the competition.</p>
<h1>Data</h1>
<p>I generated roughly 4.4M images. Initially I used the script referred to by the organizer, but a few weeks into the competition I adopted a few optimizations from pytorch 2.0 and the diffusers library, such as memory efficient attention and <code>torch.compile</code> and rolled out my own generator largely based on Pedro Cuenca's <a href="https://pytorch.org/blog/accelerated-diffusers-pt-20/" target="_blank">accelerated diffusers blog post</a>. I used the same generation procedure as the organizer, or at least the bits that were public: generate 768px images then rescale to 512px, same CFG, noise sampler, etc. It would have been much faster to get 512px images straight out of the model but I hypothesized that the former would approximate the test set domain better. I kept the images in lossless format (WebP). Using JPGs was much more manageable but resulted in slightly worse LB.</p>
<p>I tried to gather prompts as diverse as possible including real image captions from LAION, CoCo, etc. and scraped prompts (DB2, OP, MP):</p>
<ul>
<li>Conceptual Captions (~2M)</li>
<li>Open Prompts (~1.2M)</li>
<li>CoCo (~0.4M)</li>
<li>Magic Prompt (~0.4M)</li>
<li>DiffusionDB2 (~0.4M)</li>
<li>LAION aesthetics (~0.1M)</li>
</ul>
<p>I used <a href="https://www.kaggle.com/xiaozhouwang" target="_blank">@xiaozhouwang</a> 's idea to <a href="https://www.kaggle.com/competitions/stable-diffusion-image-to-prompts/discussion/398529" target="_blank">filter out images with cosine similarities >.9</a> as many prompts, especially those scraped from discord, etc. are simply a rephrasing of another prompt. I also discarded prompts longer than 77 tokens. Deduplication had a significant impact in both CV and LB.</p>
<p>2023-08-14 EDIT: <a href="https://huggingface.co/datasets/jamarju/sd-4.4M" target="_blank">Link to the full train/validation dataset</a>.</p>
<h1>Single models</h1>
<p>I tried many architectures with a 100K subset distributed more or less uniformly from the Open Prompts database, and then selected the following models for the final train:</p>
<ul>
<li>eva_giant_patch14_336 (1B params)</li>
<li>eva02_large_patch14_448 (300M params)</li>
<li>eva02_large_patch14_448.mim_m38m_ft_in22k (300M params)</li>
<li>vit_huge_patch14_clip_224.laion2b (630M params)</li>
<li>vit_large_patch14_clip_336.openai_ft_in12k_in1k (300M params)</li>
</ul>
<p>I used timm's pretrained weights with a single <code>Linear(x, 384)</code> head and negative cosine similarity loss. Some of those models were released in the middle of the competition, so enormous thanks go to Ross Wightman for his titanic and continuous job.</p>
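<p>In sketch form, the model and loss were along these lines (a hedged re-creation rather than my exact training code):</p>
<pre><code>import timm
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptEmbeddingRegressor(nn.Module):
    def __init__(self, backbone="eva02_large_patch14_448.mim_m38m_ft_in22k"):
        super().__init__()
        self.backbone = timm.create_model(backbone, pretrained=True, num_classes=0)
        self.head = nn.Linear(self.backbone.num_features, 384)  # 384 = all-MiniLM-L6-v2 dim

    def forward(self, x):
        return self.head(self.backbone(x))

def cosine_loss(pred, target):
    # negative cosine similarity between predicted and ground-truth sentence embeddings
    return -F.cosine_similarity(pred, target, dim=-1).mean()
</code></pre>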
<p>I trained for 3 epochs with a OneCycleLR schedule and lr=1e-6 for most of the models, using Adam, 8-bit Adam from bitsandbytes, or 8-bit Lion (also from bitsandbytes) depending on the model, and deepspeed stage 2 for distributed training. Overall 8-bit training worked fine, but some models were only competitive when trained with regular Adam + AMP (bfloat16). Each architecture was trained with a different random seed. In the last couple of weeks I retrained 10 of the weaker models using constant LR + warm-up, as some models improved both CV and LB over their OneCycleLR counterparts, but that was not always true, so I kept the OneCycleLR checkpoint where it was better.</p>
<p>The total compute I used to train all 25 models was ~215 gpu*days plus another ~85 gpu*days taken by failed/suboptimal/interrupted trains. GPUs were mostly RTX 3090.</p>
<h1>Validation strategy</h1>
<p>To get a reliable CV/LB correlation, I clustered the embeddings into 100 clusters using KMeans and ran K-fold validation with a 20% holdout, grouping samples by cluster. My CVs were consistently between 9% and 17% higher than the LB for single-fold models, 13% on average. </p>
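<p>A minimal sketch of this grouped split (assuming the prompt embeddings have already been computed):</p>
<pre><code>import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import GroupKFold

# embeddings: (N, 384) array of prompt embeddings for the training prompts
cluster_id = KMeans(n_clusters=100, random_state=0).fit_predict(embeddings)

# 5 folds -> each fold holds out roughly 20% of the samples, grouped by cluster
gkf = GroupKFold(n_splits=5)
train_idx, valid_idx = next(gkf.split(np.arange(len(embeddings)), groups=cluster_id))
</code></pre>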
<h1>Ensemble</h1>
<p>For the final ensemble I trained a very simple <code>nn.Linear(5, 1, bias=False)</code> to find the optimum blending coefficients. I tried other sophisticated sample-wise strategies, such as a transformer encoder with or without architecture hinting or an MLP, but blending linearly worked better.</p>
<p>In the very last hours I had 1 spare sub, so I submitted just the EVAs dropping the two (weaker) ViTs and that turned out to be my best scoring submission.</p>
<h1>Pain points</h1>
<p>Keeping track of so many models was the hardest part for me in this competition, as I am not an overly organized person. Also, some models training lasted for as long as 9 days! Murphy's law held true most of the time and I experienced many kinds of interruptions:</p>
<ul>
<li>power outages, </li>
<li>NVMe errors, </li>
<li>disk full at checkpoint save (ouch),</li>
<li>Linux OOM killing sprees, </li>
<li>GPU thermal throttling, </li>
<li>higher prio GPU tasks (ie. other people wanting to use the computer for real work),</li>
<li>even tmux has blown up in my face twice</li>
</ul>
<p>If I had to chose my least favorite crash that would probably be tmux's.</p>
<p>I can't imagine what months long training of massive LLMs must be!</p>
<p>I used pytorch lightning + checkpoint callbacks every epoch AND every hour. Unfortunately, the hourly checkpoints (or any mid-epoch checkpoint for this matter) are not properly resumable because the dataloader is not deterministically fast-forwarded to the exact same step. That results in some images being run twice and some others not run at all, and therefore a lower CV.</p>
<h1>Things that didn't work</h1>
<ul>
<li>LORA (trains fast but less competitive)</li>
<li>Many other archs (ConvNext XL, XXL, efficientnet v2, smaller ViTs, BEiT, BEiTv2, vanilla resnets, etc).</li>
<li>Patch dropout (trains really fast but less competitive)</li>
<li>Tensor-rt (what a nightmare)</li>
<li>Contrastive loss instead of -cosine similarity loss.</li>
<li>GPT-3.5/-4 prompt generation (problems of obedience to the prompt and repetition of themes, but I didn't try too hard)</li>
</ul>
<h1>Thanks</h1>
<p>Again, Ross Wightman for timm, <a href="https://www.kaggle.com/ptrblck" target="_blank">@ptrblck</a> at the pytorch forums, my employer <a href="http://freepik.com" target="_blank">freepik.com</a> (currently hiring, btw!) and the organizers.</p>
<p>You are right, here is the relevant code:</p>
<pre><code>import torch
import torch.nn as nn

class Blender(nn.Module):  # class name is a reconstruction; n_models = 5 in my case
    def __init__(self, n_models):
        super().__init__()
        self.linear = nn.Linear(n_models, 1, bias=False)
        torch.nn.init.constant_(self.linear.weight, 1 / n_models)  # start from a uniform blend
    def forward(self, x):
        return self.linear(x)
</code></pre>
<p>Optimize the negative cosine similarity for x of shape <code>(N, 384, 5)</code> using SGD. Then get the coefficients from <code>coefficients = net.linear.weight.squeeze().detach().cpu()</code></p>
Stable Diffusion - Image to Prompts | 1st Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stable Diffusion - Image to Prompts
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi, the logic is to make sure that the cosine similarity between every two prompts is less than 0.8.</p>
<p>hi <a href="https://www.kaggle.com/ataracsia" target="_blank">@ataracsia</a>, thanks, here is my answer to your question.</p>
<ol>
<li>Using a larger image size can provide some improvement, but it is not as significant as using a larger model like ConvNeXt-XXLarge that has been pretrained on PROMPTS_LQ. I finished training this model in the last two days, and given more time its score could have been further improved. However, excessively large image sizes are unnecessary since they typically do not add much value for understanding an image's content. 2. The image sizes 224 and 336 were chosen according to the settings of the pretrained models; for example, ViT-Large's pretrained weights come from openai/clip-vit-large-patch14-336, whose input size is 336. </li>
</ol>
<p>Congrats to all the winners, and thank you to Kaggle for organizing such an interesting competition. Big thanks to all the other Kagglers out there who shared their ideas and codes throughout the competition. Apologize for the slight delay as I spent some time conducting ablation experiments and late submissions to showcase the performance improvements from various mechanisms.</p>
<h1>Solution Summary</h1>
<p>I generated approximately 10.6 million images from around 8.6 million prompts. Various methods were utilized to accelerate the image generation process. These images and prompts were used to train ViT-Large, ConvNeXt-XXLarge, and BLIP-2 models. Modifications were made to these models, such as removing the Large Language Model (LLM) component of BLIP-2 and employing LORA. My final submission involved a weighted ensemble of embeddings from the three aforementioned model types.</p>
<h1>Datasets</h1>
<p>In this competition, having diverse and high-quality data is important. To achieve this, I first utilized DiffusionDB_2M and prioritized datasets relevant to image captioning tasks, including COCO Captions, VizWiz, Open Images, ADE20K, Flickr30K, and TextCaps. Duplicates with a similarity score >0.8 were removed. Finally, prompts generated by ChatGPT were also added to the dataset with stricter rules, resulting in a dataset of around 2 million high-quality prompts that we refer to as "PROMPTS_HQ" in the following text. For every prompt, I generated 2 images using different random seeds. </p>
<p>COYO-700M is a large-scale dataset that contains 747M image-text pairs. I attempted to select and add some of them to the PROMPTS_HQ dataset, but it did not improve the score. Therefore, I utilized this dataset for pretraining and found it to be highly effective (+0.008~0.01). From this big dataset, I selected approximately 6.6 million samples that met specific criteria, such as ensuring the similarity between the image and text was greater than 0.3 and removing duplicates. I then used the texts as prompts, feeding them into Stable Diffusion. This dataset is referred to as "PROMPTS_LQ". </p>
<h1>Image Generation</h1>
<p>Generating 10.6M images with Stable Diffusion was not an easy task for me: generating one image with the provided scripts takes about 15s. To speed up generation, several measures were taken:<br>
1. xFormers, 10s/img<br>
2. FP16, 5s/img<br>
3. Output image size reduced from 768x768 to 512x512 and steps from 50 to 25, 2s/img</p>
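<p>A hedged sketch of such an accelerated generation setup with the diffusers library (parameters follow the description above; treat this as illustrative rather than the exact script used):</p>
<pre><code>import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2", torch_dtype=torch.float16  # FP16
).to("cuda")
pipe.enable_xformers_memory_efficient_attention()  # xFormers attention

image = pipe(
    "hyper realistic photo of a steampunk city at sunset",  # example prompt
    height=512, width=512,          # 512x512 instead of 768x768
    num_inference_steps=25,         # 25 steps instead of 50
    guidance_scale=9.0,             # scale used by the official txt2img script
).images[0]
image.save("sample.png")
</code></pre>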
<h1>Models</h1>
<p>In the early stages, I spent several days designing models for this task by understanding and applying the mathematical principles of Stable Diffusion, which is what motivated me to enter this competition. Unfortunately, I did not find an efficient and effective way to do this. Therefore, I used vision models that were supervised with target vectors produced by the all-MiniLM-L6-v2 model.</p>
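<p>For context, the 384-dimensional training targets can be produced from the prompts roughly like this (standard sentence-transformers usage, not code copied from my pipeline):</p>
<pre><code>from sentence_transformers import SentenceTransformer

st_model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
prompts = ["hyper realistic photo of a steampunk city at sunset"]  # example prompt
targets = st_model.encode(prompts)  # shape (N, 384); the competition metric is cosine similarity
</code></pre>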
<p>My final submission included the ViT-Large, ConvNeXt-XXLarge, and BLIP-2 models. Although I also tried SWIN, DinoV2, and other models, they did not demonstrate an advantage within the limited time and resources available.</p>
<p>The Large Language Model (LLM) component in BLIP-2 was removed, not solely due to efficiency concerns, but primarily because both the grammar and the word order of the prompts were unimportant in this task, or even a little harmful if supervised on directly. However, given more time, exploiting the characteristics of the prompts with dedicated losses might bring improvements.</p>
<p>LoRA was adopted to fine-tune the models; I used LoRA for the bottom layers and it brought some improvement.</p>
<p>To my surprise, adding another large fully connected layer before the output layer brought significant improvement to the CLIP model.</p>
<pre><code>ebd_dim = 1024        # backbone output dim; placeholder value (depends on the CLIP model)
fc_dim = 4 * ebd_dim  # the "large fully connected layer"; the multiplier here is a placeholder
self.head = nn.Sequential(
    nn.Linear(ebd_dim, fc_dim),
    nn.BatchNorm1d(fc_dim),
    nn.ReLU(),
    nn.Linear(fc_dim, 384),  # 384-dim target embedding (all-MiniLM-L6-v2)
)
</code></pre>
<p>I trained my models in two stages: first, pretraining on PROMPTS_LQ, and then finetuning them on PROMPTS_HQ.</p>
<p>When training, images from the same prompt were randomly selected once per epoch. Other options were also tested, such as training different models and ensembling, but they performed nearly the same.</p>
<p>I found that using larger image sizes led to higher scores, but I did not attempt to use sizes beyond 336x336 as I believed computation resources should be invested in other more important options.</p>
<p>No augmentation was used when training CLIP models and default augmentation was used when training BLIP-2 model.</p>
<p>Other training details and their respective contributions are listed in Table 1.</p>
<table>
<thead>
<tr>
<th><strong>Model</strong></th>
<th><strong>Stage</strong></th>
<th><strong>Epochs</strong></th>
<th><strong>Warmup</strong></th>
<th><strong>Initial lr</strong></th>
<th><strong>Final lr</strong></th>
<th><strong>EMA</strong></th>
<th><strong>Aug</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>ViT-Large</td>
<td>pretraining</td>
<td>2</td>
<td>1</td>
<td>1e-4</td>
<td>8e-5</td>
<td>✓</td>
<td></td>
</tr>
<tr>
<td>ViT-Large</td>
<td>finetuning</td>
<td>3</td>
<td>1</td>
<td>1e-4</td>
<td>5e-5</td>
<td>✓</td>
<td></td>
</tr>
<tr>
<td>ConvNeXt-XXLarge</td>
<td>pretraining</td>
<td>2</td>
<td>1</td>
<td>1e-4</td>
<td>8e-5</td>
<td>✓</td>
<td></td>
</tr>
<tr>
<td>ConvNeXt-XXLarge</td>
<td>finetuning</td>
<td>3</td>
<td>1</td>
<td>1e-4</td>
<td>5e-5</td>
<td>✓</td>
<td></td>
</tr>
<tr>
<td>BLIP-2</td>
<td>pretraining</td>
<td>3</td>
<td>1</td>
<td>1e-4</td>
<td>4e-5</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>BLIP-2</td>
<td>finetuning</td>
<td>4</td>
<td>1</td>
<td>1e-4</td>
<td>2e-5</td>
<td>✓</td>
<td>✓</td>
</tr>
</tbody>
</table>
<p>Table 1. Training details of the models; all models were trained in two stages</p>
<h1>Results</h1>
<p>The results are shown in table 2.</p>
<table>
<thead>
<tr>
<th><strong>Index</strong></th>
<th><strong>Model</strong></th>
<th><strong>Samples</strong></th>
<th><strong>Image size</strong></th>
<th><strong>Trained params</strong></th>
<th><strong>LORA</strong></th>
<th><strong>+FC</strong></th>
<th><strong>Private</strong></th>
<th><strong>Public</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>ViT-Large</td>
<td>0.2M</td>
<td>224</td>
<td>25%</td>
<td></td>
<td></td>
<td>0.5579</td>
<td>0.5573</td>
</tr>
<tr>
<td>2</td>
<td>ViT-Large</td>
<td>0.2M</td>
<td>336</td>
<td>25%</td>
<td></td>
<td></td>
<td>0.5612</td>
<td>0.5617</td>
</tr>
<tr>
<td>3</td>
<td>ViT-Large</td>
<td>0.2M</td>
<td>336</td>
<td>30%</td>
<td></td>
<td>✓</td>
<td>0.5674</td>
<td>0.5687</td>
</tr>
<tr>
<td>4</td>
<td>ViT-Large</td>
<td>0.2M</td>
<td>336</td>
<td>47%</td>
<td>✓</td>
<td>✓</td>
<td>0.5725</td>
<td>0.5737</td>
</tr>
<tr>
<td>5</td>
<td>ViT-Large</td>
<td>2M</td>
<td>336</td>
<td>47%</td>
<td>✓</td>
<td>✓</td>
<td>0.6368</td>
<td>0.6386</td>
</tr>
<tr>
<td>6</td>
<td>ViT-Large</td>
<td>6.6M+2M</td>
<td>336</td>
<td>47%</td>
<td>✓</td>
<td>✓</td>
<td>0.6459</td>
<td>0.6461</td>
</tr>
<tr>
<td>7</td>
<td>ConvNeXt-XXLarge</td>
<td>6.6M+2M</td>
<td>256</td>
<td>47%</td>
<td>✓</td>
<td>✓</td>
<td>0.6505</td>
<td>0.6518</td>
</tr>
<tr>
<td>8</td>
<td>BLIP-2</td>
<td>6.6M+2M</td>
<td>224</td>
<td>20%</td>
<td>✓</td>
<td></td>
<td>0.6458</td>
<td>0.6489</td>
</tr>
<tr>
<td>9</td>
<td>Ensemble(6+7+8)</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>0.6620</td>
<td>0.6634</td>
</tr>
<tr>
<td>10</td>
<td>Ensemble(11 diverse predictions)</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>0.6632</td>
<td>0.6645</td>
</tr>
</tbody>
</table>
<p>Table 2. Models, strategies, and their scores </p>
<p><a href="https://www.kaggle.com/code/bestfitting/en-3models-vit-blip-convnext-notebook" target="_blank">This notebook contains an ensemble of three models that achieved scores of 0.6620/0.6634.</a><br>
<strong><em>During the writing process of this solution, ChatGPT was utilized for proofreading purposes, including this sentence. :)</em></strong></p>
<p>Yes! The image size is also the same.</p>
<p>I learned a lot from your solution, congrats and thanks!</p>
<p>Sorry, perhaps my reply was a bit misleading. What I meant was that using gradient descent or a similar method to solve the guidance text would have taken too long. Therefore, I decided to use my current solution instead.</p>
<p>My intent was to recover the guidance text by gradient descent or some similar mechanism, or to approach the problem by treating the guidance as a constant in stochastic differential equations, as discussed in the papers below. However, it is not easy to find a good solution in limited time, and the submission must finish within 9 hours.<br>
SCORE-BASED GENERATIVE MODELING THROUGH STOCHASTIC DIFFERENTIAL EQUATIONS<br>
CLASSIFIER-FREE DIFFUSION GUIDANCE</p>
<p>Hi ramensuki, sorry! I have no plans to release it. But I will share the inference pipeline. </p>
<p>hi <a href="https://www.kaggle.com/yassinealouini" target="_blank">@yassinealouini</a>, I will share the inference part of the code in the coming days, so you can find the details there.</p>
<p>Freeze the bottom 40% layers, use LoRa for the middle 40% layers, and make the top 20% layers trainable. The number of trainable parameters is 47%. XXLarge has a training batch size of 320 and uses fp16.</p> |
Stable Diffusion - Image to Prompts | 2nd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stable Diffusion - Image to Prompts
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yes. OID means "Open Images Dataset"</p>
<p>The training of the vision model was very stable and NaN never occurred.</p>
<p>Generating text in BLIP-2 is heavy, but not so heavy if you only forward the vision model once. If the resolution is 224, it takes about 30 minutes to an hour.</p>
<p>Stable Diffusion 2.0-v is a <a href="https://arxiv.org/abs/2202.00512" target="_blank">v-prediction</a> model that originally had the ability to produce high quality samples in a fairly small number of steps. The quality of the generation is a bit lower with my setup, but I think it is insignificant compared to recent image augmentation methods.</p>
<p>Congrats to all the winners, and thanks to kaggle for hosting such an interesting competition. This competition was a great learning experience for me. I am really looking forward to reading everyone's solutions.</p>
<p>My solution is basically the same as the ViT-based method in the public notebook. I created my own dataset by running Stable Diffusion and trained a model to predict sentence embeddings in a supervised manner. The overview is shown in the figure below.</p>
<p><img src="https://user-images.githubusercontent.com/8179588/238365771-166ab184-9639-49e1-bd67-9d6e8fd05c90.png" alt="image"></p>
<h2>Dataset Generation</h2>
<h3>Preparation</h3>
<p>The following modifications were made to 4x speed up the generation with fp16.</p>
<ul>
<li>Change the scheduler from DDIM to DPMSolver++ (diffusers.DPMSolverMultistepScheduler) and change the number of steps from 50 to 16.</li>
<li>Use xformers</li>
</ul>
<p>There was concern that reducing the number of generation steps would cause a shift in the mapping from prompt to image, but there was no significant qualitative difference, and the advantage of increased generation speed was judged to be greater, so I chose this method.</p>
<p>The xformers led to a speedup of about 30%. xformers were also used in the training of the model, which is described after.</p>
<p>Note that the random numbers used in the generation have a significant impact on the overall image generated, so it is necessary to generate images using different seeds. At first I fixed the seed to 0 to ensure reproducibility, but I realized this mistake and used a different seed for each prompt, which greatly increased the score. I was generating the seed using zlib.adler32, the name of the prompt set, and the number of the prompt.</p>
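<p>In other words, something along these lines (a hedged sketch of the seeding scheme, not the exact code):</p>
<pre><code>import zlib
import torch

def make_generator(prompt_set_name: str, prompt_index: int) -> torch.Generator:
    # deterministic but prompt-specific seed derived from the set name and prompt number
    seed = zlib.adler32(f"{prompt_set_name}/{prompt_index}".encode("utf-8"))  # string format is an assumption
    return torch.Generator(device="cuda").manual_seed(seed)

# image = pipe(prompt, generator=make_generator("OID1", 12345), ...).images[0]
</code></pre>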
<p>As for the guidance scale, I had been using the default value of 7.5 from diffusers until I realized that 9.0 seemed to be the correct value (<a href="https://github.com/Stability-AI/stablediffusion/blob/main/scripts/txt2img.py#L142" target="_blank">link</a>), so I used data generated with either value. LB scores showed that there was no significant difference between 7.5 and 9.0, but scores with 3.0 were significantly lower.</p>
<h3>DiffusionDB</h3>
<p>To deal with the vocabulary specific to Stable Diffusion, we generated images using prompts from <a href="https://huggingface.co/datasets/poloclub/diffusiondb" target="_blank">DiffusionDB</a>. Excluding duplicates, there are approximately 1.8M Prompts, and images were generated for all of them.</p>
<h3>COCO Captions</h3>
<p>Captions from <a href="https://arxiv.org/abs/1504.00325" target="_blank">Microsoft COCO Captions</a> were used to generate the images. 500k images were generated from approximately 600k captions in train and validation.</p>
<h3>Open Images</h3>
<p>To create even more diverse prompt sets, I utilized the natural images in <a href="https://github.com/openimages/dataset/blob/main/READMEV3.md" target="_blank">Open Images Dataset V3</a> (OID) and Image-To-Text pretrained models. Images from OID are input to the Image-To-Text model, and the resulting caption is input to StableDiffusion as a prompt. OID contains 9M natural images, of which the first 5M were used to generate.</p>
<p><a href="https://huggingface.co/Salesforce/blip2-flan-t5-xxl" target="_blank">Salesforce/blip2-flan-t5-xxl</a> was used as the Image-To-Text model. I used Nucleus sampling with <code>do_sample=True</code> and <code>top_p=0.9</code>.</p>
<p>The sample prompts in the competition consist of multiple objects and multiple sentences, but captions output by BLIP-2 tends to produce very concise sentences by default. Therefore, by chaining BLIP-2's caption generation, I made sure that sentences with a certain level of complexity are generated. The procedure for chaining is as follows</p>
<ol>
<li>generate a sentence following <code>"a photo of"</code> as usual (denote the generated sentence as A)</li>
<li>generate a sentence following <code>"a photo of" + A</code> (denote this as B)</li>
<li>Output <code>A + B</code> as the final caption</li>
</ol>
<p>The Prompt set created is as follows.</p>
<table>
<thead>
<tr>
<th>Prompt set name</th>
<th>Subset</th>
<th># of images</th>
<th>Chain</th>
</tr>
</thead>
<tbody>
<tr>
<td>OID1</td>
<td>train0</td>
<td>1M</td>
<td>2</td>
</tr>
<tr>
<td>OID2</td>
<td>train1</td>
<td>1M</td>
<td>3</td>
</tr>
<tr>
<td>OID3</td>
<td>train2</td>
<td>1M</td>
<td>1*</td>
</tr>
<tr>
<td>OID4</td>
<td>train3</td>
<td>1M</td>
<td>2</td>
</tr>
<tr>
<td>OID5</td>
<td>train4</td>
<td>1M</td>
<td>2</td>
</tr>
</tbody>
</table>
<p>For OID3, in order to realize a combination of objects that do not exist in natural images, prompts were created by connecting two short captions created with chain=1 with commas. In concrete terms, the short captions created in chain=1 were converted to sentence embeddings using SentenceTransformer, and divided into 1000 clusters using Spherical KMeans. Then, for every combination of two clusters, we randomly selected one caption for each cluster and connected them with a comma.</p>
<h3>ChatGPT (gpt-3.5-turbo)</h3>
<p>This is a dataset that has utilized the ideas from <a href="https://www.kaggle.com/code/safavieh/chatgpt-generated-prompts/notebook" target="_blank">this notebook</a>. When I tried generating prompts using ChatGPT, I found that using the same prompt only yielded similar generation results. Altering parameters such as temperature did somewhat improve the situation, but it also made it more likely to produce broken sentences or excessively long ones. Therefore, I considered modifying the prompt sent to ChatGPT utilizing the previous generated prompts, and implemented it with the following code.</p>
<pre><code># NOTE: the instruction string, the seed prompts, and the counts below are illustrative
# placeholders; only the structure reflects the actual script.
import numpy as np

command = (
    "Generate short, imaginative prompts for an image generation model, "
    "in the same style as the following examples:\n"
)
sample_set = [
    "example seed prompt 1",
    "example seed prompt 2",
    "example seed prompt 3",
    "example seed prompt 4",
    "example seed prompt 5",
    "example seed prompt 6",
    "example seed prompt 7",
]
generated_prompts: list[str] = []
for _ in range(num_trials):
    if len(generated_prompts) < 4:
        examples = list(sample_set)
    else:
        examples = list(np.random.choice(sample_set, 3, replace=False)) + list(
            np.random.choice(generated_prompts, 4, replace=False)
        )
    np.random.shuffle(examples)
    prompt = command + "\n".join([f"- {e}" for e in examples])
    generated_prompts = send_to_chat_gpt_and_cleansing(prompt)  # helper that calls the API and cleans the reply
</code></pre>
<p>In this method, I created about 1M prompts, inputted them into Stable Diffusion after removing duplicates, and produced about 1M images.</p>
<p>I also tested a few other LLMs, especially focusing on those with open weights, but I was unable to obtain a prompt set equivalent or superior to gpt-3.5-turbo using the same method, based on my qualitative evaluation. Typical behaviors of such LLMs are starting some sort of explanation instead of answering in the desired format or only being able to produce slightly modified examples. I have not tested LLaMA and its derivative models because I was unable to obtain LLaMA's weights in a license-compliant manner.</p>
<h2>Model and Training</h2>
<h3>CLIP Models</h3>
<p>In the initial, simplistic experiments, the pretrained models with CLIP performed better than those for ImageNet classification. As a result, I focused on fine-tuning CLIP models.</p>
<p>The following four models were used in the final submission.</p>
<ul>
<li>ConvNeXt xxlarge (<a href="https://huggingface.co/laion/CLIP-convnext_xxlarge-laion2B-s34B-b82K-augreg-rewind" target="_blank">CLIP-convnext_xxlarge-laion2B-s34B-b82K-augreg-rewind</a>)</li>
<li>BLIP-2 VisionModel (EVA01-g?, <a href="https://huggingface.co/Salesforce/blip2-opt-2.7b" target="_blank">Salesforce/blip2-opt-2.7b</a>)</li>
<li>EVA02-L (<a href="https://huggingface.co/timm/eva02_large_patch14_clip_336.merged2b_s6b_b61k" target="_blank">timm/eva02_large_patch14_clip_336.merged2b_s6b_b61k</a>)</li>
<li>EVA02-e (<a href="https://huggingface.co/timm/eva02_enormous_patch14_plus_clip_224.laion2b_s9b_b144k" target="_blank">timm/eva02_enormous_patch14_plus_clip_224.laion2b_s9b_b144k</a>)</li>
</ul>
<h3>Linear-Probing</h3>
<p>Regarding the fine-tuning of CLIP, it has been suggested that Linear Probing (LP), which trains only the last linear layer, and <a href="https://arxiv.org/abs/2202.10054" target="_blank">LP-FT</a>, which fine-tunes the entire model after LP, are effective for generalization. Indeed, these methods were effective in my experiments as well.</p>
<h3>Resolution</h3>
<p>Increasing the resolution also had a significant effect. CLIP models are typically trained at a resolution of 224, but I believe that 224 was disadvantageous for recognizing complex contexts containing multiple objects.</p>
<p>ViT can infer with only a slight degradation in performance for inputs at a higher resolution than during training, by interpolating positional encoding. For instance, you can use functions like <a href="https://github.com/huggingface/pytorch-image-models/blob/v0.9.2/timm/layers/pos_embed.py#L17" target="_blank">resample_abs_pos_embed</a> implemented in timm. By using this, you can use a model trained at a lower resolution as a good initial value when training at a higher resolution.</p>
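<p>As a small, hedged illustration of that interpolation step (using the timm helper linked above; the exact signature may differ between timm versions):</p>
<pre><code>import torch
from timm.layers import resample_abs_pos_embed

# e.g. a ViT-L/14 trained at 224px: 1 CLS token + 16x16 patch grid, 1024-dim embeddings
pos_embed_224 = torch.randn(1, 1 + 16 * 16, 1024)
pos_embed_336 = resample_abs_pos_embed(
    pos_embed_224,
    new_size=[24, 24],        # 336 / 14 = 24 patches per side
    num_prefix_tokens=1,      # keep the CLS token as-is
)
print(pos_embed_336.shape)    # torch.Size([1, 577, 1024])
</code></pre>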
<p>Until the middle of the competition, I was using models trained at 224, but towards the end, I improved my score by using those as initial values and training models at 336 or 448.</p>
<h3>Q-former</h3>
<p>SentenceTransformer's sentence embeddings are the normalized sum of token embeddings. I considered leveraging these token embeddings as additional supervision signals.</p>
<p>I implemented an architecture using the Q-former from BLIP-2 to convert the patch embeddings output by the vision model into token embeddings with a length of 77. I then compared the output of that model to the token embeddings from the SentenceTransformer, padded with zeros until they reached a length of 77, and trained the model based on this comparison. Simply taking the mean squared error (MSE) would be heavily influenced by word order, so I used the Hungarian method to find the matching with the minimum MSE and calculated the MSE between those matches. I also used the cosine similarity between the sentence embeddings.</p>
<p>Simultaneous training of the Q-former and the vision model was unstable, and I could not achieve success during the competition period. However, by freezing the pretrained vision model and training only the added Q-former, I achieved a performance improvement of +0.001 to +0.003 on the leaderboard (LB) for individual models. Moreover, since the vision model is frozen, I could obtain its standalone output of the vision model with almost no additional cost when predicting with the Q-former. It provided a slight benefit in ensembling.</p>
<p>The effectiveness of the Q-former was not particularly pronounced in this problem setting, but I believe it would be useful in the original sense of an inversion task, where the goal is to reproduce images.</p>
<h3>Weighted Averaging</h3>
<p>For ensembling, I used weighted averaging. The weights were optimized using Adam and validation set from my ChatGPT dataset.</p>
<p>I'm sorry. I do not plan to publish the code at this time, but I will share a snippet of the code and hope that will be helpful.</p>
<p>The loss implementation used in Q-former is as follows</p>
<pre><code># The dict keys and the constants 77 (token length) and 384 (embedding dim) below are
# restored from the surrounding description.
import numpy as np
import torch
from scipy.optimize import linear_sum_assignment

pred_token_emb = model(pixel_values)
assert pred_token_emb.shape == (batch_size, 77, 384)
with torch.no_grad():
    gt_token_emb = st_model.forward(st_inputs)["token_embeddings"]
    assert gt_token_emb.shape == (batch_size, 77, 384)
    attention_mask = st_inputs["attention_mask"]
    gt_token_emb = torch.where(
        attention_mask[..., np.newaxis] == 1, gt_token_emb, torch.zeros_like(gt_token_emb)
    )
mse_mat = ((gt_token_emb[:, np.newaxis, :, :] - pred_token_emb[:, :, np.newaxis, :]) ** 2).mean(dim=-1)
assert mse_mat.shape == (batch_size, 77, 77)
mse_mat_np = mse_mat.detach().cpu().numpy()
index = []
for i in range(batch_size):
    row_ind, col_ind = linear_sum_assignment(mse_mat_np[i])
    index.append(row_ind * 77 + col_ind)
index_tensor = torch.from_numpy(np.asarray(index)).to(device=mse_mat.device)
set_mse = mse_mat.view(batch_size, 77 * 77).gather(dim=1, index=index_tensor)
assert set_mse.shape == (batch_size, 77)
set_mse = set_mse.mean()
</code></pre>
<p>The spherical k-means simply uses faiss.KMeans(spherical=True), so please refer to <a href="https://github.com/facebookresearch/faiss/wiki/Faiss-building-blocks:-clustering,-PCA,-quantization" target="_blank">the document</a>.</p>
<p>Thanks. Sadly, I wasn't able to ultimately achieve first place 😅</p>
<ol>
<li>Yes.</li>
<li>I'm sorry. My code contains some hard-coded parts for our company's private clusters, and it is hard to clean those up for sharing. Therefore, I don't have plans to publish the code at this time. If there is a specific process you want to know about, please let me know and I'll post a snippet of that part.</li>
</ol>
<blockquote>
<p>I'm curious about how the "weighted averaging" was implemented. I tried using a linear layer to ensemble 4 models, but the result was not better than manual weights. Thanks!</p>
</blockquote>
<p>I used the module bellow. I did not do anything special. I think it is more important which dataset to fit than the implementation. You need to fit with a dataset that is not used for training and that is as close as possible to the test set. If you are not using the dataset shared <a href="https://www.kaggle.com/competitions/stable-diffusion-image-to-prompts/discussion/402146" target="_blank">here</a>, you may want to try fitting with this dataset.</p>
<pre><code>import torch
import torch.nn as nn

# Class and argument names are reconstructions; x holds the stacked model predictions
# of shape (batch, 384, n) and y the target embedding of shape (batch, 384).
class WeightedAverage(nn.Module):
    def __init__(self, n):
        super().__init__()
        self.weight = nn.Linear(n, 1, bias=False)
        with torch.no_grad():
            self.weight.weight[:] = 1 / n   # uniform initial weights (scale cancels after normalize)
    def forward(self, x, y):
        y_hat = self.weight(x)[..., 0]
        y_hat = torch.nn.functional.normalize(y_hat, dim=-1)
        cos_sim = (y * y_hat).sum(dim=-1)
        loss = -cos_sim.mean()
        return loss
</code></pre>
<p>It is also possible that the manual weights are already very good 😄</p>
<blockquote>
<p>I've got one more question about the high-resolution fine-tuning. Is it that we can fine-tune a small-resolution model, change the positional embedding layer and then just use the higher resolution for fine-tuning under the same model architecture? </p>
</blockquote>
<p>Yes, I used exactly the same architecture as the low-resolution one, except I interpolated the positional encoding in <a href="https://github.com/huggingface/pytorch-image-models/blob/v0.9.2/timm/layers/pos_embed.py#L17" target="_blank">resample_abs_pos_embed</a>.</p>
<p>I forgot to mention that for EVA02-e, fine-tuning the entire model would use too much compute, so I trained it by freezing all but the last 10 layers.</p>
<p>Thank you for your suggestion. I will consider it.</p>
<p>A similar process is implemented in timm, so that may be helpful.<br>
<a href="https://github.com/huggingface/pytorch-image-models/blob/v0.9.2/timm/models/eva.py#L606-L616" target="_blank">https://github.com/huggingface/pytorch-image-models/blob/v0.9.2/timm/models/eva.py#L606-L616</a></p>
<p>I uploaded the code for Q-Former. By passing patch embedding extracted from images to this Qformer Module, sentence embedding and token embedding can be extracted.<br>
<a href="https://gist.github.com/zaburo-ch/aeaa2d26d076991106958791dab38929" target="_blank">https://gist.github.com/zaburo-ch/aeaa2d26d076991106958791dab38929</a></p>
<p>1) A private computer cluster owned by my company.<br>
2) Yes, Stable Diffusion 2 is used.</p> |
Stable Diffusion - Image to Prompts | 36th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stable Diffusion - Image to Prompts
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>It's about 400k pairs.</p>
<p>No shakeup. Congrats to all the winners!!</p>
<p>This is my <strong>second solo silver</strong>. And I'm finally becoming a <strong>Kaggle competition Master</strong> after the result finalises!!!</p>
<hr>
<p>My final ensemble consists of fine-tuned models and zero-shot models.</p>
<p><strong>TLDR:</strong> KNN regression + CLIP interrogator + an ensemble of 4 fine-tuned models, with manual weights of 0.3, 0.1 and 0.6 respectively. The submission notebook can be found <a href="https://www.kaggle.com/code/xfffrank/sd-clip-knnregression-vit-interrogator/notebook" target="_blank">here</a>.</p>
<p>The details are explained as below.</p>
<h2>Fine-tuned models</h2>
<h3>Data preprocessing</h3>
<p>For this part, or even the whole competition, <strong>data preprocessing and the diversity of datasets</strong> are two of the most important points. For the training of each single model, I used texts from Diffusion-2M, Diffusion-14M, the public 900k and 80k datasets.</p>
<p>There are two reasons why preprocessing is necessary:</p>
<ol>
<li>Filter high-correlation data so that the model is not easy to overfit.</li>
<li>Reduce the size of training set so that the training cost is acceptable.</li>
</ol>
<p>For each dataset, the preprocessing follows this pipeline:</p>
<ol>
<li>Remove duplicates according to existing dataset.</li>
<li>Filter by hand-crafted rules.<ol>
<li>Remove texts with num_of_words < 5.</li>
<li>Remove duplicates for texts that have the same 4 starting/ending words.</li>
<li>Remove texts that have non-English characters.</li>
<li>Remove texts that have more than 77 tokens, using the same CLIPTokenizer as Stable Diffusion v2.</li></ol></li>
<li>Filter by correlation(i.e. cosine similarity) within the dataset. I used a threshold of 0.8.</li>
<li>Filter by correlation with existing datasets. I used a threshold of 0.8.</li>
</ol>
<blockquote>
<p>When filtering by correlation, it's important to utilize GPU power and process the dataset in batches. For instance, when utilizing the <code>encode</code> function provided by the "Sentence Transformer" library, enabling the "normalize_embeddings=True" parameter will return embeddings with a unit length. This allows for easy computation of cosine similarity using <code>torch.matmul</code>.</p>
</blockquote>
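<p>A rough sketch of that batched filtering step (illustrative only; the 0.8 threshold matches the pipeline above, and within-chunk duplicates are ignored for brevity):</p>
<pre><code>import torch
from sentence_transformers import SentenceTransformer

# prompts: list of candidate prompt strings (assumed to exist)
st_model = SentenceTransformer("all-MiniLM-L6-v2", device="cuda")
emb = st_model.encode(prompts, batch_size=512, normalize_embeddings=True,
                      convert_to_tensor=True)             # (N, 384), unit length

keep_idx = []
kept = torch.empty(0, emb.shape[1], device=emb.device)
for start in range(0, len(emb), 4096):                    # compare candidates in batches on the GPU
    chunk = emb[start:start + 4096]
    if len(kept) == 0:
        max_sim = torch.zeros(len(chunk), device=emb.device)
    else:
        max_sim = torch.matmul(chunk, kept.T).max(dim=1).values
    mask = max_sim < 0.8                                   # drop near-duplicates of already-kept prompts
    keep_idx += (start + torch.nonzero(mask).flatten()).tolist()
    kept = torch.cat([kept, chunk[mask]], dim=0)
</code></pre>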
<h3>Data generation</h3>
<p>I re-generated the images using Stable Diffusion v2 following the settings in the dataset description.</p>
<h3>Training</h3>
<ul>
<li>CV split: split the data according to token length into 12 ~ 15 folds and take the first fold as the validation set.</li>
<li>How many layers to freeze during fine-tuning?<ul>
<li>I found that the validation score did not increase after the number of trainable parameters reached a certain point, so I manually tuned the first unfrozen layer to keep the number of trainable parameters at ~80 million.</li></ul></li>
<li>Data augmentation: the only two methods I found helpful are <code>HorizontalFlip</code> and <code>MixUp</code>.</li>
<li>Number of epochs: 4. The validation score always reached the highest point at epoch 3.</li>
</ul>
<h3>Ensemble</h3>
<table>
<thead>
<tr>
<th>Model</th>
<th>public LB</th>
<th>weight</th>
</tr>
</thead>
<tbody>
<tr>
<td>clip-vit-large-patch14-336</td>
<td>0.57954</td>
<td>0.3</td>
</tr>
<tr>
<td>clip-vit-large-patch14</td>
<td>0.57665</td>
<td>0.2</td>
</tr>
<tr>
<td>blip-image-captioning-large</td>
<td>0.57621</td>
<td>0.2</td>
</tr>
<tr>
<td>convnext_large_mlp.clip_laion2b_ft_soup_320</td>
<td>0.57907</td>
<td>0.3</td>
</tr>
</tbody>
</table>
<ul>
<li>ensemble public LB: 0.59314</li>
<li>ensemble public LB (normalise the outputs before performing ensemble): 0.59673</li>
</ul>
<h2>KNN regression</h2>
<p>Follows the <a href="https://www.kaggle.com/code/motono0223/sdip-clip-knnregression-zeroshot-method" target="_blank">public notebook</a>, except that I also added datasets processed on my own.</p>
<h2>CLIP interrogator</h2>
<p>Follows the <a href="https://www.kaggle.com/code/leonidkulyk/lb-0-45836-blip-clip-clip-interrogator" target="_blank">public notebook</a>.</p>
<h2>Final thoughts</h2>
<p>There are several things I didn’t have time to explore:</p>
<ol>
<li>Generate more datasets to increase the diversity.</li>
<li>Filter more public datasets and add them to the KNN model.</li>
<li>Explore more zero-shot methods.</li>
</ol>
<p>I’m still new to the multi-modal field, but this competition is a good starting point!</p>
<p>No, my submission time is about 5 hours.</p> |
Stable Diffusion - Image to Prompts | 3rd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stable Diffusion - Image to Prompts
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to the organizers for hosting a nice competition.</p>
<p>My approach is based on CLIP model with directly predicting 384 embedding vector following <a href="https://www.kaggle.com/code/shoheiazuma/stable-diffusion-vit-baseline-train" target="_blank">this baseline notebook</a>.</p>
<h2>Dataset</h2>
<p>I used about ~400K samples. The validation score on DiffusionDB was well correlated with the LB score, but there was still a 0.025 ~ 0.03 gap.</p>
<ul>
<li><p><a href="https://vizwiz.org/tasks-and-dathasets/image-captioning/" target="_blank">Vizwiz image caption</a> ~70k</p>
<ul>
<li>This was the key dataset for my experiments.</li>
<li>Training with this data, local val vs LB score -> 0.5415/0.5309; without it, 0.5528/0.48765.</li>
<li>I think this caption dataset is generally more descriptive, diverse, and longer than COCO.</li>
<li>This dataset originally has 23K training samples, but each sample has up to 5 captions, so I sampled 3 captions for each training sample.</li></ul></li>
<li><p>Diffusion DB, 300k</p>
<ul>
<li>only using SD2 images, which I personally generated for improving local val/LB correlation</li>
<li>210K samples from prompt filtering and another 80k images from hard sampling using a trained model.</li></ul></li>
<li><p>COCO, 25k</p>
<ul>
<li>Under training with Vizwiz, this dataset contribution was relatively low.</li></ul></li>
<li><p><a href="https://www.kaggle.com/datasets/motono0223/gustavosta-stable-diffusion-prompts-sd2-v2" target="_blank">Lexica.art</a>, 10k</p></li>
</ul>
<h2>Model</h2>
<ul>
<li>only using CLIP models<ul>
<li>My baseline model was ViT Base 224(laion2B), almost all experiments were done on this model. Final score for this model was, local val/Public/Private=0.6402/0.61402/0.61357</li>
<li>Single best model. ViT L 336(open ai), local val/Public/Private=0.6658/0.63557/0.63425<ul>
<li>I could not find good training parameters for larger models, ViT-H and ConvNext xxLarge.</li></ul></li>
<li>Best submission, ensemble of ViT-L(laion2b, openai, datacompxl), ViT-H, and ConvNext Large/xxLarge, local val/Public/Private=0.6791/0.64935/0.64814</li></ul></li>
</ul>
<h2>Training</h2>
<h3>CLIP fine tuning</h3>
<p>Fine-tuning a CLIP model needs extra care compared to starting from ImageNet weights; we have to preserve the original CLIP weights as much as possible to get the best performance.<br>
I found the following two methods improve the score by ~ +0.02 in total, and I spent a lot of time finding the best hyperparameters.</p>
<ol>
<li><p><a href="https://arxiv.org/pdf/2202.10054.pdf" target="_blank">LP-FT(Linear Probe and then Fine Tune)</a></p>
<ul>
<li>This was studied in <a href="https://arxiv.org/ftp/arxiv/papers/2210/2210.08473.pdf" target="_blank">the top-1 solution of a previous competition</a> and also by other teams in this competition.</li></ul></li>
<li><p>Combination of EMA and layer-wise learning rate decay</p>
<ul>
<li>This has been studied in <a href="https://arxiv.org/abs/2212.06138" target="_blank">this CLIP fine-tuning paper</a>.</li>
<li>The best parameters for ViT-B were (EMA decay, layer-wise decay) = (0.9998, 0.4)</li>
<li>But this did not work for ViT-L; there, layer-wise decay = 0.6 was best. It seemed the overall decay factor reaching the initial layers should be around 1e-6, so we have to choose a larger decay rate for larger models.</li></ul></li>
</ol>
<p>These two methods have not only a direct effect but also an indirect one: they allow us to increase the learning rate while keeping the original CLIP weights, which gives a better result.</p>
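<p>To make this concrete, here is a minimal sketch of how EMA and layer-wise learning-rate decay could be wired together in PyTorch. This is my own illustration, not the author's code; the helper names and the base learning rate are assumptions, while the 0.9998 EMA decay and the idea of decaying the LR towards the input layers follow the numbers reported above.</p>
<pre><code>import copy
import torch

def layerwise_param_groups(backbone, head, base_lr=1e-4, decay=0.4):
    # the projection head gets base_lr; each transformer block, from output to input,
    # gets a progressively smaller learning rate (assumes a timm-style ViT with .blocks;
    # open_clip models keep them in visual.transformer.resblocks)
    groups = [{"params": head.parameters(), "lr": base_lr}]
    lr = base_lr
    for block in reversed(list(backbone.blocks)):
        lr *= decay
        groups.append({"params": block.parameters(), "lr": lr})
    return groups

optimizer = torch.optim.AdamW(layerwise_param_groups(model.backbone, model.head))

# exponential moving average of the weights (decay 0.9998 as reported for ViT-B);
# the EMA copy is typically the one evaluated and submitted
ema_model = copy.deepcopy(model).eval()

def update_ema(model, ema_model, decay=0.9998):
    with torch.no_grad():
        for p, p_ema in zip(model.parameters(), ema_model.parameters()):
            p_ema.mul_(decay).add_(p, alpha=1.0 - decay)
</code></pre>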
<h3>Augmentation</h3>
<ul>
<li><p>Crop/RandomErase/RandAug(without posterize, solarize and equalize)</p>
<ul>
<li>This contributed ~ +0.007 to the local val score and enabled longer training.</li>
<li>Strong image transformations did not fit this competition; I tried to find weak ones.</li>
<li>Horizontal flip worsened performance</li></ul></li>
<li><p>Same prompt and different seed</p>
<ul>
<li>10% of Diffusion DB and 20% of Vizwiz samples have 3 images which belong to the same prompt but were generated with different SD2 seeds. For each training epoch, one of the 3 images is randomly chosen.</li>
<li>This also contributed to the score and validation loss curve.</li></ul></li>
<li><p><a href="https://github.com/ShieldMnt/invisible-watermark" target="_blank">Invisible watermark</a> augmentation</p>
<ul>
<li>Adding a watermark to input images during training, following the <a href="https://github.com/Stability-AI/stablediffusion/blob/334969331438036c1b4fb529c262f78499870d86/scripts/txt2img.py#L363" target="_blank">Stability AI repository</a></li>
<li>This did not contribute much to the score, only ~ +2.0e-5</li></ul></li>
</ul>
<h3>Code</h3>
<p>This is <a href="https://www.kaggle.com/code/sai11fkaneko/3rd-place-solution/notebook" target="_blank">my submission notebook</a>.</p>
<ol>
<li><p>210k samples from DiffusionDB 14M with <a href="https://www.kaggle.com/code/shoheiazuma/diffusiondb-data-cleansing" target="_blank">this baseline code</a></p>
<ul>
<li>I confirmed this sampled set was as diverse as the other datasets using the cosine distance of the prompt embeddings.</li>
<li>I used DiffusionDB 14M, but I think the DiffusionDB 2M pool is enough for good performance.</li></ul></li>
<li><p>80k hard sample</p>
<ul>
<li>Using the original DiffusionDB 2M images, so no time-consuming generation process is needed.</li>
<li>I set some filters for sampling DiffusionDB 2M image with its metadata, "square image", "class guidance scale around 7.5", "diffusion step =50", "image_nsfw != 2.0", "active user filtering"</li>
<li>Run inference over the filtered DiffusionDB 2M images and pick up samples with a score below 0.55; this score was calculated with an LB 0.57 model (a sketch follows after this list).</li></ul></li>
</ol>
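<p>A rough sketch of that hard-sample selection step (my own illustration; the 0.55 threshold and the use of a previously trained model come from the description above, while the loader fields and function name are assumptions):</p>
<pre><code>import torch
import torch.nn.functional as F

@torch.no_grad()
def mine_hard_samples(model, loader, threshold=0.55, device="cuda"):
    # keep the DiffusionDB samples the current model still predicts poorly
    model.eval().to(device)
    hard_ids = []
    for images, target_embs, sample_ids in loader:  # target_embs: 384-d all-MiniLM prompt embeddings
        pred = model(images.to(device))
        scores = F.cosine_similarity(pred, target_embs.to(device), dim=-1)
        keep = scores.lt(threshold).cpu()
        hard_ids.extend([sid for sid, k in zip(sample_ids, keep) if k])
    return hard_ids
</code></pre>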
<p>The current code has a lot of redundant parts, so I think it's really confusing. Instead of sharing my code, let me point to the timm library for <a href="https://github.com/huggingface/pytorch-image-models/blob/a83e9f2d3be348ad77065027b6fca8151f5b075f/train.py#L307" target="_blank">EMA</a> and <a href="https://github.com/huggingface/pytorch-image-models/blob/a83e9f2d3be348ad77065027b6fca8151f5b075f/train.py#L174" target="_blank">layer-wise decay</a>. These are clear implementations, and I also referred to this timm code myself.</p>
<p>Both components above improve the CLIP score, I think, and of course, if you can afford it, you can also add LP-FT.</p>
Stable Diffusion - Image to Prompts | 4th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stable Diffusion - Image to Prompts
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi <a href="https://www.kaggle.com/rturley" target="_blank">@rturley</a> congrats on being a comp master :)</p>
<p>Some context is that I started this competition with only a text-retrieval method (no training), and it gave me good intuition about which datasets to focus on for training CLIP encoders later on. So I guess the CLIP-style text retrieval gave me some feedback on what images pre-trained CLIP is good at. </p>
<p>In terms of compute time/effort versus improvement, I would say the text-retrieval solution is very low impact. </p>
<p>But in the last few days, epoch ensembling and adding past checkpoints (I didn't think I could do more training at this point) to my ViT-encoders-only solution did not bring improvements. In terms of the competition, the ensemble did add a 0.004 boost (0.641 to 0.645), bringing me up 2 ranks, so I guess it was acceptable. </p>
<p>Did you normalise your predictions when combining the different approaches? I got weird results if predictions were not normalised before combining. </p>
<p>Congratulations to all the winners! Thank you, Kaggle, for hosting such a fun competition (kudos for coming up with 16K prompts)! Also, a shoutout to all the open-source efforts in the DS community.</p>
<p><a href="https://www.kaggle.com/code/gerwynng/sd-itp-4th-place-solution" target="_blank">Final Submission Notebook</a></p>
<h1>Summary</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2796179%2F06b4a722da105f50301945222179de44%2F4th_solution.png?generation=1684243194540991&alt=media" alt=""></p>
<table>
<thead>
<tr>
<th></th>
<th>Public</th>
<th>Private</th>
<th>Runtime</th>
</tr>
</thead>
<tbody>
<tr>
<td>(1+2+3) Text Candidate Generation</td>
<td>0.614X</td>
<td>0.612X</td>
<td>6 hours</td>
</tr>
<tr>
<td>(4) ViT Encoders</td>
<td>0.644X</td>
<td>0.641X</td>
<td>3 hours</td>
</tr>
<tr>
<td>Weighted ensemble (CV Best: 0.4, 0.6)</td>
<td>0.646X</td>
<td>0.643X</td>
<td>9 hours</td>
</tr>
<tr>
<td>Weighted ensemble (LB Best: 0.25, 0.75)</td>
<td>0.648X</td>
<td>0.645X</td>
<td>9 hours</td>
</tr>
</tbody>
</table>
<p><strong>Text Candidate Generation</strong></p>
<ul>
<li>Text retrieval: performing a top-k exhaustive search from 56M precomputed text embeddings, using pretrained <a href="https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k" target="_blank">CLIP-bigG</a>, followed by cosine similarity filtering using pretrained <a href="https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K" target="_blank">CLIP-H14</a></li>
<li>CLIP interrogator: performing a top-k exhaustive search from precomputed CLIP-bigG text embeddings of 1.6M "prompt components". Each of the retrieved string components are concatenated into prompts.</li>
<li>CoCa: using <a href="https://github.com/mlfoundations/open_clip#fine-tuning-coca" target="_blank">LAION's implementation</a>. </li>
</ul>
<p><strong>ViT Encoders</strong><br>
Two-stage finetune process using: 1. DDB; 2. generated 7M images<br>
Initialised from the following checkpoints: CLIP-ViT <a href="https://huggingface.co/laion/CLIP-ViT-L-14-laion2B-s32B-b82K" target="_blank">L</a>, H (as above), <a href="https://huggingface.co/laion/CLIP-ViT-g-14-laion2B-s12B-b42K" target="_blank">G</a>, bigG (as above)</p>
<h1>Dataset</h1>
<p><strong>Caption Set (Text Retrieval + Image Generation)</strong><br>
Total: ~5.2M, Generated Images: ~5.2M </p>
<ul>
<li><a href="https://github.com/poloclub/diffusiondb" target="_blank">DDB</a></li>
<li><a href="https://huggingface.co/datasets/Gustavosta/Stable-Diffusion-Prompts" target="_blank">Gustavosta's Stable Diffusion Prompts</a></li>
<li><a href="https://github.com/krea-ai/open-prompts" target="_blank">Kree AI's Open Prompts</a></li>
<li><a href="https://huggingface.co/datasets/bartman081523/stable-diffusion-discord-prompts" target="_blank">Stable Diffusion Discord Prompts</a></li>
<li><a href="https://huggingface.co/datasets/succinctly/midjourney-prompts" target="_blank">Midjourney prompts</a></li>
<li><a href="https://www.kaggle.com/datasets//900k-diffusion-prompts-dataset" target="_blank">tanreinama's 900k</a></li>
<li><a href="https://ai.google.com/research/ConceptualCaptions/" target="_blank">Google Conceptual Captions</a></li>
<li><a href="https://nocaps.org/download" target="_blank">nocaps</a></li>
<li><a href="https://cocodataset.org/#download" target="_blank">coco</a></li>
<li><a href="https://textvqa.org/textcaps/dataset/" target="_blank">textvqa textcaps</a></li>
</ul>
<p><strong><a href="https://github.com/google-research-datasets/wit" target="_blank">WIT (Text Retrieval + Image Generation)</a></strong><br>
Total: ~2.5M, Generated Images: ~1M</p>
<p><strong>Generated Prompts (Text Retrieval + Image Generation)</strong><br>
Total: ~10M+, Generated Images: ~1M</p>
<ul>
<li><a href="https://www.kaggle.com/datasets/leonidkulyk/magic-prompt-1m" target="_blank">leonidkulyk's magic prompt 1M</a></li>
<li>xiaozhouwang's <a href="https://www.kaggle.com/datasets/xiaozhouwang/sd2gpt2" target="_blank">sd2gpt</a> and <a href="https://www.kaggle.com/datasets/xiaozhouwang/sd2hardcode" target="_blank">sd2hardcode</a></li>
</ul>
<p>i also generated more using <a href="https://huggingface.co/succinctly/text2image-prompt-generator" target="_blank">text2image-prompt-generator</a> and <a href="https://huggingface.co/Gustavosta/MagicPrompt-Stable-Diffusion" target="_blank">MagicPrompt</a></p>
<p><strong><a href="https://huggingface.co/datasets/dalle-mini/YFCC100M_OpenAI_subset" target="_blank">YFCC 100M open subset</a> (Text Retrieval only)</strong><br>
Total: ~4M</p>
<p><strong><a href="https://huggingface.co/datasets/laion/laion-coco" target="_blank">Laion CoCo</a> (Text Retrieval only)</strong><br>
Total: 27M (random sampled)</p>
<p><strong><a href="https://huggingface.co/datasets/mlfoundations/datacomp_pools/tree/main/small" target="_blank">Datacomp Small</a> (Text Retrieval only)</strong><br>
Total: 5M</p>
<p><strong>Prompt Components (Interrogator only)</strong><br>
Total: 1.6M components</p>
<ul>
<li><a href="http://www.ashley-bovan.co.uk/words/partsofspeech.html" target="_blank">English PoS word list</a></li>
<li><a href="https://storage.googleapis.com/openimages/web/download_v7.html" target="_blank">Open Image Class List</a></li>
<li><a href="https://www.kaggle.com/datasets/gerwynng/img2text-prompt-components" target="_blank">CuPL Imagenet Prefixes</a></li>
<li><a href="https://github.com/pharmapsychotic/clip-interrogator/tree/main/clip_interrogator/data" target="_blank">Original Clip Interrogator Components</a></li>
<li>Plus, I took the most frequent phrases (e.g. splitting prompts by <code>,</code>) from the OpenPrompts dataset (~1.2M)</li>
</ul>
<h2>Text Preprocessing</h2>
<p>Most stable-diffusion prompts are really messy with lots of unreadable/unnecessary words and punctuations. I manually looked at samples and tried to clean them. (<a href="https://www.kaggle.com/gerwynng/sdi2p-text-preprocessing" target="_blank">functions in this notebook</a>)</p>
<ul>
<li>removing near-duplicates (using sentence transformers' cosine similarity) works well for text-retrieval solutions; it reduces dataset sizes by 10-20% while achieving similar scores. </li>
</ul>
<h2>CV</h2>
<p>32k sampled prompts from Caption Set. Total is ~34k after allowing some prompts to repeat with different generated images.</p>
<h1>Solutions and Training</h1>
<h2>Text Retrieval</h2>
<p>No training was performed in this step.</p>
<p>I selected the top 60 candidates by calculating the cosine similarity between the image CLIP-bigG embeddings and my precomputed text embeddings. Then, for each of these top 60 candidates, I recalculated the cosine similarity using CLIP-H14 and filtered out those with a cosine similarity <= 0.29.</p>
<p>To perform exhaustive search on the 56M embeddings at inference time, I employed the following technique:</p>
<ul>
<li>I divided my dataset into different sections as listed above (dataset chunking).</li>
<li>PCA with dimensions of 768 for all sections, except for Laion CoCo where I used 512 dimensions.</li>
<li>mem-mapped numpy arrays in float16 format.</li>
<li>a two-layer chunking approach. The outer loop involved splitting the 16k test image embeddings into 768 chunks, while the inner loop involved chunking the text embeddings into 1024*1024 chunks.</li>
<li><code>torch.mm</code> calculations on GPU to compute the cosine similarity (see the sketch after this list).</li>
</ul>
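<p>A sketch of how this chunked search could look. It is my own reconstruction, not the author's code: paths, shapes, and the running top-k bookkeeping are assumptions, with the embedding bank stored as a float16 memory-mapped array and both sides L2-normalised so <code>torch.mm</code> yields cosine similarities.</p>
<pre><code>import numpy as np
import torch
import torch.nn.functional as F

def topk_retrieve(image_embs, bank_path, n_texts, dim, k=60, text_chunk=1024 * 1024):
    # image_embs: (n_images, dim) normalised CLIP-bigG image embeddings already on GPU
    bank = np.memmap(bank_path, dtype=np.float16, mode="r", shape=(n_texts, dim))
    best_val = torch.full((image_embs.shape[0], k), -1.0,
                          device=image_embs.device, dtype=image_embs.dtype)
    best_idx = torch.zeros_like(best_val, dtype=torch.long)
    for start in range(0, n_texts, text_chunk):
        chunk = torch.from_numpy(np.array(bank[start:start + text_chunk]))
        chunk = F.normalize(chunk.to(image_embs.device, dtype=image_embs.dtype), dim=-1)
        sims = torch.mm(image_embs, chunk.T)                       # cosine similarity
        cand_val = torch.cat([best_val, sims], dim=1)
        ids = torch.arange(start, start + chunk.shape[0], device=sims.device)
        cand_idx = torch.cat([best_idx, ids.expand(sims.shape[0], -1)], dim=1)
        best_val, pos = cand_val.topk(k, dim=1)                    # running top-k across chunks
        best_idx = cand_idx.gather(1, pos)
    return best_val, best_idx
</code></pre>
<p>The 16k test-image embeddings are additionally processed in an outer chunking loop, as described above, so that the similarity matrices stay small.</p>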
<h2>CLIP Interrogator</h2>
<p>I came up with 14 templates for creating prompts from components, e.g. <code>['adjectives', 'nouns', 'verbs', 'openprompts_modifiers', 'adverbs', 'movements']</code>. For each template, I selected the top 8 candidates from each component, resulting in a total of 112 text candidates.</p>
<p>Initially, I experimented with random search, using permutations of prompt components + the number of components. However, I observed that this approach led to overfitting on the CV set too quickly. Thus, I decided to use the 14 templates that I manually selected from EDA.</p>
<h2>CoCa Model</h2>
<p>This is low impact. I tried finetuning it, but it did not offer significant improvement over the pretrained version.</p>
<h2>Vit-Encoders</h2>
<p>I kept the batch size at ~160, and then unfroze layers depending on model size:</p>
<ul>
<li>L: un-freeze from 2 onwards</li>
<li>H: un-freeze from 14 onwards </li>
<li>G: un-freeze from 24 onwards </li>
<li>bigG: un-freeze from 36 onwards</li>
</ul>
<p>Stage 1: DDB (~1.9M left after text-preprocessing, used their original images)<br>
Stage 2: Self generated 7M<br>
Both using LR: 4/5e-5 for 3/4 Epoch, using CosineEmbeddingLoss</p>
<p>Did not work: adding dropout (/ spatial), increasing FC layers, epoch ensemble, changing patch size</p>
<h3>TTA</h3>
<p>In my final effort, I was able to achieve a 0.02X boost by utilizing different image mean and standard deviation normalization as Test-Time Augmentation (TTA). I used the pretrained statistics along with statistics computed from my 32K test set (<a href="https://www.kaggle.com/gerwynng/estimate-image-stats" target="_blank">notebook on how i computed here</a>). I performed two inferences and averaged the results, followed by a simple average across the four ViT encoders.</p>
<h2>Weighted Average (Text Candidates + ViT Encoders)</h2>
<ul>
<li>Prior to taking a weighted average, I normalized the ST embeddings (from Text Candidates) and the ViT embeddings (a small sketch follows after this list).</li>
<li>My CV only relied on one fold. As a risk mitigation strategy, I made one submission with weights based on the maximum CV (Text Candidates: 0.4, ViTs: 0.6), and another submission with weights based on the maximum LB score (Text Candidates: 0.25, ViTs: 0.75).</li>
</ul> |
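<p>Normalising before mixing matters in practice (as also noted in the replies above). A tiny sketch of the blend, using the CV-best weights from the table; the function and variable names are mine:</p>
<pre><code>import torch.nn.functional as F

def blend(text_candidate_emb, vit_emb, w_text=0.4, w_vit=0.6):
    # normalise each source first so the weights mix directions rather than magnitudes
    text_candidate_emb = F.normalize(text_candidate_emb, dim=-1)
    vit_emb = F.normalize(vit_emb, dim=-1)
    blended = w_text * text_candidate_emb + w_vit * vit_emb
    return F.normalize(blended, dim=-1)   # final normalisation is harmless for cosine scoring
</code></pre>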
Stable Diffusion - Image to Prompts | 5th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stable Diffusion - Image to Prompts
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We thank all organizers for this very exciting competition.<br>
Congratulations to all who finished the competition and to the winners.</p>
<h2>Solution summary</h2>
<p>Our approach involved training multiple image models to predict prompt embeddings (generated by a sentence transformer) directly from images, then ensembling the predictions from these models. The final submission was an ensemble of four models:</p>
<ul>
<li>eva02_large</li>
<li>convnext_xxlarge</li>
<li>convnext_large</li>
<li>vit_large</li>
</ul>
<p>Initially, in the early stages of the competition, we experimented with models that predicted the image prompts themselves. However, we found that the method of directly predicting prompt embeddings yielded better scores. Consequently, from the midpoint of the competition onwards, we focused on improving the method of direct embedding prediction.</p>
<h2>Dataset Generation</h2>
<p>We generated approximately 5 million pairs of prompt-image sets (some images were generated multiple times from the same prompt) as detailed in the table below:</p>
<table>
<thead>
<tr>
<th>dataset name</th>
<th># of prompts</th>
<th>images per prompt</th>
<th>Total images</th>
</tr>
</thead>
<tbody>
<tr>
<td>cc3m</td>
<td>249,593</td>
<td>3</td>
<td>748,781</td>
</tr>
<tr>
<td>lexica</td>
<td>67,101</td>
<td>3</td>
<td>201,303</td>
</tr>
<tr>
<td>diffusiondb</td>
<td>326,035</td>
<td>3</td>
<td>978,105</td>
</tr>
<tr>
<td>cc3m part2</td>
<td>1,166,852</td>
<td>1</td>
<td>1,166,852</td>
</tr>
<tr>
<td>diffusiondb part2</td>
<td>1,371,480</td>
<td>1</td>
<td>1,371,480</td>
</tr>
<tr>
<td>mscoco</td>
<td>197,527</td>
<td>3</td>
<td>592,583</td>
</tr>
<tr>
<td>Total:</td>
<td>3,378,588</td>
<td>1or3</td>
<td>5,059,104</td>
</tr>
</tbody>
</table>
<p>During training, for samples with three images corresponding to a prompt, we randomly selected one image to ensure that duplicate images from the same prompt were not used within the same epoch.</p>
<p>Dataset references</p>
<ul>
<li>cc3m: <a href="https://ai.google.com/research/ConceptualCaptions/download" target="_blank">https://ai.google.com/research/ConceptualCaptions/download</a></li>
<li>lexica: <a href="https://www.kaggle.com/datasets/safavieh/gustavosta-stable-diffusion-prompts" target="_blank">https://www.kaggle.com/datasets/safavieh/gustavosta-stable-diffusion-prompts</a></li>
<li>diffusiondb: <a href="https://huggingface.co/datasets/poloclub/diffusiondb" target="_blank">https://huggingface.co/datasets/poloclub/diffusiondb</a></li>
<li>mscoco: <a href="https://cocodataset.org/" target="_blank">https://cocodataset.org/</a></li>
</ul>
<h2>Validation Strategy</h2>
<p>To partition the generated image-prompt pairs for training and evaluation, we first calculated the prompt embeddings using a sentence transformer. We then grouped samples with similar prompt embeddings. After grouping samples with a cosine similarity of 0.7 or higher in their embeddings, we divided the dataset into 10 folds based on these groups using GroupKFold. Model training was conducted on all folds except fold0, which was reserved for evaluation.</p>
<p>Furthermore, during the creation of the cc3m and diffusiondb datasets, we precomputed the cosine similarity of prompt embeddings. For prompts with a similarity of 0.9 or above, we only used one for image generation. From the midpoint of the competition, we also began to utilize the remaining prompts that we had not used before, creating the cc3m_part2 and diffusiondb_part2 datasets.</p>
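<p>A simple greedy approximation of that grouping step is sketched below. It is not necessarily the authors' exact clustering; the 0.7 threshold and the 10-fold GroupKFold come from the description above, while the rest is illustrative and O(N^2), so it assumes a deduplicated prompt set of manageable size.</p>
<pre><code>import numpy as np
from sklearn.model_selection import GroupKFold

def similarity_groups(embs, threshold=0.7):
    # greedily assign each prompt to the group of the first earlier prompt it is similar to
    embs = embs / np.linalg.norm(embs, axis=1, keepdims=True)
    group = np.arange(len(embs))
    for i in range(len(embs)):
        if group[i] != i:
            continue                                 # already merged into an earlier group
        sims = embs[i + 1:] @ embs[i]
        hits = i + 1 + np.flatnonzero(sims >= threshold)
        group[hits] = i
    return group

groups = similarity_groups(prompt_embs)              # prompt_embs: (N, 384) sentence-transformer vectors
folds = list(GroupKFold(n_splits=10).split(prompt_embs, groups=groups))
</code></pre>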
<h2>Model Architecture</h2>
<p>Our model architecture is straightforward, consisting of a backbone connected to a 384-dimensional fully connected (FC) layer (without bias), which matches the output dimensions of the sentence transformer.</p>
<p>Some models have a larger input image resolution as shown below.</p>
<table>
<thead>
<tr>
<th>model</th>
<th>pretrained backbone</th>
<th>input size</th>
</tr>
</thead>
<tbody>
<tr>
<td>eva02_large</td>
<td>eva02_large_patch14_448.mim_m38m_ft_in22k (timm)</td>
<td>448 (pretrained@448)</td>
</tr>
<tr>
<td>convnext_xxlarge</td>
<td>laion2b_s34b_b82k_augreg_rewind (open_clip)</td>
<td>384 (pretrained@256)</td>
</tr>
<tr>
<td>convnext_large</td>
<td>laion2b_s29b_b131k_ft_soup (open_clip)</td>
<td>384 (pretrained@320)</td>
</tr>
<tr>
<td>vit_large</td>
<td>laion2b_s32b_b82k (open_clip)</td>
<td>336 (pretrained@224)</td>
</tr>
</tbody>
</table>
<p>The ViT model of open_clip can change the input image size as follows.</p>
<pre><code>backbone = open_clip_model.visual
# compute a larger patch size so the token grid size stays the same at the new input resolution
hgrid, wgrid = backbone.grid_size
hpatch = input_height // hgrid
wpatch = input_width // wgrid
backbone.patch_size = (hpatch, wpatch)
# point the patch-embedding conv at the new patch size
backbone.conv1.kernel_size = (hpatch, wpatch)
backbone.conv1.stride = (hpatch, wpatch)
</code></pre>
<h2>Training Process</h2>
<p>We used the PyTorch framework for training our models, with the following configurations:</p>
<ul>
<li>10 or 15 epochs with Distributed Data Parallel (DDP), Automatic Mixed Precision (AMP), and Gradient Checkpointing</li>
<li>CosineEmbeddingLoss as the loss function</li>
<li>MADGRAD as the optimizer</li>
<li>Warmup 1 or 3 epochs training only FC layer with the backbone frozen (learning rate: from 1e-2 to 1e-4)</li>
<li>Cosine Learning Rate Scheduler with learning rates ranging from 1e-4 to 1e-7.</li>
<li>Finetune 3 or 5 epochs with all dataset.</li>
</ul>
<p>As for data augmentation, we implemented a relatively light configuration as follows:</p>
<ul>
<li>No Rotation</li>
<li>No Horizontal Flip</li>
<li>RandomResizedCrop with a scale of 0.5 to 1.0</li>
<li>ColorJitter (brightness=0.05, contrast=0.05, saturation=0.05, hue=0.05)</li>
</ul>
<p>Until midway through the competition, we trained our models using only three datasets: cc3m, lexica, and diffusiondb.</p>
<table>
<thead>
<tr>
<th>model</th>
<th>epochs</th>
<th>warmup</th>
<th>initial lr</th>
<th>final lr</th>
<th>dataset</th>
</tr>
</thead>
<tbody>
<tr>
<td>eva02_large</td>
<td>10</td>
<td>1</td>
<td>1e-4</td>
<td>1e-6</td>
<td>cc3m,lexica,diffusiondb</td>
</tr>
<tr>
<td>convnext_xxlarge</td>
<td>10</td>
<td>1</td>
<td>1e-4</td>
<td>1e-6</td>
<td>cc3m,lexica,diffusiondb</td>
</tr>
<tr>
<td>convnext_large</td>
<td>15</td>
<td>3</td>
<td>1e-4</td>
<td>1e-6</td>
<td>cc3m,lexica,diffusiondb</td>
</tr>
<tr>
<td>vit_large</td>
<td>15</td>
<td>3</td>
<td>1e-4</td>
<td>1e-6</td>
<td>cc3m,lexica,diffusiondb</td>
</tr>
</tbody>
</table>
<p>From the midpoint onwards, we included the cc3m_part2, diffusiondb_part2, and mscoco datasets for fine-tuning the model.</p>
<table>
<thead>
<tr>
<th>model</th>
<th>epochs</th>
<th>warmup</th>
<th>initial lr</th>
<th>final lr</th>
<th>dataset</th>
</tr>
</thead>
<tbody>
<tr>
<td>eva02_large</td>
<td>3</td>
<td>0</td>
<td>1e-6</td>
<td>1e-7</td>
<td>all</td>
</tr>
<tr>
<td>convnext_xxlarge</td>
<td>3</td>
<td>0</td>
<td>1e-6</td>
<td>1e-7</td>
<td>all</td>
</tr>
<tr>
<td>convnext_large</td>
<td>5</td>
<td>0</td>
<td>1e-5</td>
<td>1e-7</td>
<td>all</td>
</tr>
<tr>
<td>vit_large</td>
<td>5</td>
<td>0</td>
<td>1e-5</td>
<td>1e-7</td>
<td>all</td>
</tr>
</tbody>
</table>
<h2>Ensemble Strategy</h2>
<p>In our final submission, we utilized four models. Also, for each model, we used four or five different cross-validation folds, and thus we created an ensemble of 18 folds across the four models.</p>
<table>
<thead>
<tr>
<th>model</th>
<th>folds</th>
</tr>
</thead>
<tbody>
<tr>
<td>eva02_large</td>
<td>0,2,2*,5,9</td>
</tr>
<tr>
<td>convnext_xxlarge</td>
<td>0,1,6,6*,7</td>
</tr>
<tr>
<td>convnext_large</td>
<td>0,1,2,3</td>
</tr>
<tr>
<td>vit_large</td>
<td>0,3,4,8</td>
</tr>
</tbody>
</table>
<p>Folds marked with * are the ones where we fine-tuned for only 1 epoch without data augmentation.</p>
<p>We had models validated on fold0 prepared for each model, and the weights of the ensemble were adjusted using these fold0 validation data.</p>
<p>We also experimented with determining weights not just on a per-model basis, but along each of the 384 output dimensions for each model (amounting to 384x4 weights), which were also determined through training. However, the training didn't result in significant variability across the weights for specific dimensions, and the outcome was almost the same as when determining a single weight per model.</p>
<p>Upon submitting both methods and comparing their scores, the method using 384 weights per model resulted in a slight score increase of +0.0002. Therefore, we implemented the latter method for our final submission. However, it's worth noting that this improvement is likely within the margin of error.</p>
<h2>Things that didn't work well for us</h2>
<ul>
<li>Mixup</li>
<li>GeM (Generalized Mean) Pooling</li>
<li>GGeM pooling (<a href="https://arxiv.org/abs/2212.04114" target="_blank">Group Generalized Mean Pooling for Vision Transformer</a>)</li>
<li>Triplet Attention (<a href="https://arxiv.org/abs/2010.03045" target="_blank">Convolutional Triplet Attention Module</a>)</li>
<li>Image captioning like BLIP, BLIP2, CoCa<ul>
<li>Predicting the prompt embedding directly was more effective in terms of both computational cost and score.</li></ul></li>
<li>Weighting loss based on scores from the model's OOF predictions (loss considering easy or difficult samples)</li>
<li>kNN approach using a pretrained clip model<ul>
<li>Although improvements were observed when ensembled with the prediction results of a single model, no effect was observed when added to the results of the 4-model ensemble used in the final submission.</li></ul></li>
<li>Cosine similarity loss for all sample combinations (details to follow)</li>
</ul>
<h3>Cosine Similarity Loss for All Sample Combinations</h3>
<p>When using CosineEmbeddingLoss, the loss calculation is performed for each image-prompt pair, without considering the relationships between different pairs. To address this, we tried to implement a loss that would bring the cosine similarity of all pairs of model outputs closer to the cosine similarity of all pairs of corresponding targets within a mini-batch, considering all combinations of samples.</p>
<p>However, we didn't adopt this approach as it didn't improve performance, whether used alone or in combination with CosineEmbeddingLoss.</p> |
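<p>For completeness, here is roughly what such an all-pairs objective could look like. This is my own sketch of the idea described above (matching the pairwise cosine-similarity structure of the predictions to that of the targets within a mini-batch), not the team's implementation:</p>
<pre><code>import torch.nn.functional as F

def pairwise_similarity_matching_loss(pred, target):
    # pred, target: (batch, 384) model outputs and sentence-transformer targets
    pred = F.normalize(pred, dim=-1)
    target = F.normalize(target, dim=-1)
    sim_pred = pred @ pred.T        # cosine similarities between all pairs of predictions
    sim_tgt = target @ target.T     # cosine similarities between all pairs of targets
    return F.mse_loss(sim_pred, sim_tgt)
</code></pre>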
Stable Diffusion - Image to Prompts | 6th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stable Diffusion - Image to Prompts
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you so much!</p>
<p>Thank you organizers for hosting this competition, and congrats to all the winners. I was depressed today after missing the prize, but I am happy to have won the gold medal.<br>
I tried some approaches at the beginning of the competition, but in the end the solution settled down to quite a simple one:<br>
<strong>create 8.87M SD2-generated images from the captions of the image/video captioning dataset, and after deduplicating/preprocessing the dataset, train the generated images with timm's latest backbones (i.e. eva, convnext, swin).</strong><br>
The inference notebook for final submission is <a href="https://www.kaggle.com/code/sokazaki/sdip-6th-place-submission" target="_blank">here</a></p>
<p><strong>First Attempts</strong><br>
At the beginning of this competition, as the public notebooks do, I also tried some image captioning models.<br>
After trying several models, the combination of 4 captioning models (<a href="https://arxiv.org/abs/2205.14100" target="_blank">GIT trained on MS COCO</a> / CoCa / <a href="https://arxiv.org/abs/2302.01328" target="_blank">Socratic model from IC3</a> / CLIP Interrogator) and ViT trained model from <a href="https://www.kaggle.com/code/shoheiazuma/stable-diffusion-vit-baseline-inference" target="_blank">@shoheiazuma notebook</a> reached 0.5568 Public LB, and with the 4 image captioning models + ViT&Swin trained using <a href="https://www.kaggle.com/code/shoheiazuma/stable-diffusion-vit-baseline-train" target="_blank">@shoheiazuma notebook</a> the score reached 0.57439 Public LB.<br>
Therefore, at the beginning I considered preparing two approaches for the final submissions (i.e. image captioning models and direct optimization models like <a href="https://www.kaggle.com/code/shoheiazuma/stable-diffusion-vit-baseline-train" target="_blank">@shoheiazuma notebook</a>); however, the inference time of the image captioning models was too long, so I stopped searching for the best image captioning models and shifted my focus to direct optimization models. For those, the key was to create the SD2 dataset as soon as possible, so I started surveying image/video captioning datasets.</p>
<p><strong>Data Generation</strong><br>
I surveyed the image captioning / video captioning dataset and selected the below datasets to create SD2 images.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1476759%2F8a506f048c85efde0c8b8894827b5cb5%2F2023-05-16%20231638.png?generation=1684246639225798&alt=media" alt="SD2 generated dataset"><br>
For the training dataset, rich vocabulary & moderate caption length seemed important to me, so I selected the datasets based on those two factors. I used the faiss script from <a href="https://www.kaggle.com/code/tomokihirose/use-vector-search-for-diffusion-db-cleansing" target="_blank">@tomokihirose notebook</a> only for DiffusionDB 14M and WebVid10M to remove redundant captions above 0.9 cosine similarity. As additional preprocessing, captions that are not English, shorter than 5 words, or longer than 70 words were removed from the training dataset. <br>
To strengthen the depth effect <a href="https://www.kaggle.com/competitions/stable-diffusion-image-to-prompts/discussion/398407" target="_blank">as discussed in this thread</a>, I also added the DiffusionDB 2M 224x224 resized dataset [15] and another 722K images (180.5K captions * 4 different seeds) created from DiffusionDB 2M captions to the above training dataset.<br>
The gain from the depth data was nice (+0.005~0.010 Public LB) as shown in the later results, so I should have dug deeper into the depth side for further performance improvement.</p>
<p>To create the SD2 dataset, I used the diffusers library (with weights from stabilityai/stable-diffusion-2) and the default settings. For fast generation, I also applied torch.compile and xformers with PyTorch 2.0 using the snippet below:</p>
<pre><code># compile the UNet (PyTorch 2.0) and enable xformers memory-efficient attention
stable_model.unet = torch.compile(stable_model.unet)
stable_model.enable_xformers_memory_efficient_attention()
</code></pre>
<p>For further speedup, applying <a href="https://github.com/facebookresearch/ToMe" target="_blank">Token Merging</a> or using DPMSolver++ in SD2 were candidates; however, I wasn't confident that those approaches wouldn't decrease the overall performance, and recreating the dataset would have been hard given the time and resources, so I decided not to use those techniques.<br>
Creating the 8.87M dataset took about 2 months with 5~10 V100s and 1~5 A100s. In my setting, a V100/A100 can produce about 100K/200K images in one week, so it costs about 600 V100-days / 300 A100-days to create the 8.87M dataset.</p>
<p><strong>Model Training</strong><br>
I selected 3 backbones (<a href="https://huggingface.co/timm/eva_large_patch14_336.in22k_ft_in22k_in1k" target="_blank">eva-large 336 size</a>, <a href="https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384" target="_blank">convnext-large 384 size</a>, <a href="https://huggingface.co/timm/swin_large_patch4_window12_384.ms_in22k_ft_in1k" target="_blank">swin-large 384 size</a>), and each model is trained for 5 epoch. For training, it took about 3~7 days on one A100. All models converged around 4 epochs for the validation dataset (train/valid split is 0.9/0.1).<br>
In my experiments, the performance of eva 196 size, deit3, and efficientnet variants was lower than the above 3 models, so I didn't use them.</p>
<p>The training script is almost the same as <a href="https://www.kaggle.com/code/shoheiazuma/stable-diffusion-vit-baseline-train" target="_blank">@shoheiazuma notebook</a>. The difference from the training notebook is here:</p>
<ul>
<li>changed batch size to 128 for all models</li>
<li>training with fp16 mode</li>
<li>applied horizontal flip (ColorJitter, GaussianBlur, and RandomCrop didn't lead to the improvement)</li>
</ul>
<p>Also, in the retraining phase, I changed CosineEmbeddingLoss() to CosineEmbeddingLoss(margin=-0.5) + 0.01 x MSELoss and trained with all datasets.<br>
For final submission, the highlighted 14 models in the attached picture were ensembled with different resize resolution TTA (460x460 and 512x512) with equal ensemble weight.<br>
The performance of each trained model is here. Split1 (5.02M) and Split2 (6.50M) are selected from the created 8.87M SD2 dataset except for 722K (180.5K * 4 different seeds) from DiffusionDB2M:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1476759%2F59e4ce02426f42578cef6d1cdca8bb5e%2F2023-05-18%20004915.png?generation=1684338631593500&alt=media" alt="ConvNext Results"><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1476759%2F628b5ae887ae41492ad40f6c4800c03e%2F2023-05-18%20004936.png?generation=1684338658637625&alt=media" alt="Swin Results"><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1476759%2Fd02d95a4f44090f657b420bb7ee45762%2F2023-05-18%20004955.png?generation=1684338689799294&alt=media" alt="EVA Results"></p>
<p><strong>What didn't work in my environment</strong></p>
<ul>
<li>Hard Prompts Made Easy (<a href="https://arxiv.org/abs/2302.03668" target="_blank">https://arxiv.org/abs/2302.03668</a>): <br>
the score using this method was low, and it took a rather long time to generate each optimized prompt.</li>
<li>Stable unCLIP (<a href="https://github.com/Stability-AI/stablediffusion/blob/main/doc/UNCLIP.MD" target="_blank">https://github.com/Stability-AI/stablediffusion/blob/main/doc/UNCLIP.MD</a>): <br>
I tried to use unCLIP for Test Time Augmentation, but it took a long time to generate each image variation.</li>
<li>Training with Stable Diffusion's latent feature: <br>
Extracted the feature of VAE in SD2 to train, but it didn't work well.</li>
<li>Using LLM (e.g. T5, GPT2, GPT3 Davinci-003) to summarize/regenerate the captions from image captioning models: <br>
I couldn't get much gain even with GPT3 Davinci-003, so I stopped using this approach.</li>
<li>Also, I tried to generate prompts using Vicuna-13B; however, I couldn't get the weights via the application form, so I gave up on using it.</li>
</ul>
<p><strong>Acknowledgement</strong><br>
I would sincerely like to thank Hitachi Ltd., as I used my company's resources for creating the datasets and training the models.</p>
<p><strong>Reference for the dataset</strong><br>
<a href="https://cocodataset.org/#download" target="_blank">[1] MS COCO captions 2017</a><br>
<a href="https://huggingface.co/datasets/poloclub/diffusiondb/tree/main" target="_blank">[2] DiffusionDB 14M</a><br>
<a href="https://paperswithcode.com/dataset/flickrstyle10k" target="_blank">[3] Flickr Funny & Romantic dataset</a><br>
<a href="https://paperswithcode.com/dataset/flickr30k" target="_blank">[4] Flickr 30K</a><br>
<a href="https://www.imageclef.org/photodata" target="_blank">[5] Iaprtc12</a><br>
<a href="https://nocaps.org/download" target="_blank">[6] Nocaps</a><br>
<a href="https://www.cs.rice.edu/~vo9/sbucaptions/" target="_blank">[7] SBU Captioning Dataset</a><br>
<a href="http://users.cecs.anu.edu.au/~u4534172/senticap.html" target="_blank">[8] Senticap Dataset</a><br>
<a href="https://textvqa.org/textcaps/" target="_blank">[9] Textcaps Dataset</a><br>
<a href="https://vizwiz.org/tasks-and-datasets/image-captioning/" target="_blank">[10] VizWiz Dataset</a><br>
<a href="https://trecvid.nist.gov/trecvid.data.html" target="_blank">[11] VTT 2016-2022 from TRECVID</a><br>
<a href="https://m-bain.github.io/webvid-dataset/" target="_blank">[12] WebVid10M</a><br>
<a href="https://www.kaggle.com/competitions/stable-diffusion-image-to-prompts/discussion/402146" target="_blank">[13] captions from </a><a href="https://www.kaggle.com/jeinsong" target="_blank">@jeinsong</a><br>
<a href="https://www.kaggle.com/competitions/stable-diffusion-image-to-prompts/discussion/398529" target="_blank">[14] captions from </a><a href="https://www.kaggle.com/xiaozhouwang" target="_blank">@xiaozhouwang</a><br>
<a href="https://www.kaggle.com/competitions/stable-diffusion-image-to-prompts/discussion/396136" target="_blank">[15] DiffusionDB 2M 224x224 resized dataset from </a><a href="https://www.kaggle.com/atom1231" target="_blank">@atom1231</a><br>
<a href="https://huggingface.co/datasets/poloclub/diffusiondb/tree/main" target="_blank">[16] DiffusionDB 2M captions</a></p> |
Stable Diffusion - Image to Prompts | 7th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stable Diffusion - Image to Prompts
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I used an effective batch size of 4,000 by taking the maximum batch size that would fit on my GPU and multiplying by the number of gradient accumulations that would equal 4,000. In other words, for a model where I could only get a batch size of 8, I would have 500 gradient accumulations.</p>
<p>The learning rate was different for each model, but typically, I would train the head at something like 0.0008 and the top layer of the backbone at 0.0001, with the LR for each layer below decaying by a factor of 0.85.</p>
<p>Thanks to all for the great competition, and especially to my teammate <a href="https://www.kaggle.com/evilpsycho42" target="_blank">@evilpsycho42</a>. Don't let his "evilpsycho" username fool you--he was a fantastic teammate, and we had a great Slack conversation throughout the competition as we discussed our strategy and commiserated together when many of our best ideas failed to work as well as we hoped.</p>
<p>Here are some key elements to our 7th place solution:</p>
<p><strong>Data Generation</strong></p>
<p>We assumed the training data would play a major role in this competition, so we started generating custom images using Stable Diffusion as quickly as possible. We tried to gather a diverse set of prompts, generally limiting them to have a cosine similarity of less than 0.7 using the competition metric. We filtered what we thought we the highest quality, low correlation prompts from the following sources:</p>
<ul>
<li>Diffusion DB</li>
<li>Conceptual Captions</li>
<li>COCO image captions</li>
<li>Flickr Image Captions</li>
<li>The ChatGPT prompts from <a href="https://www.kaggle.com/jeinsong" target="_blank">@jeinsong</a> (<a href="https://www.kaggle.com/datasets/jeinsong/chatgpt-images-w-prompts" target="_blank">https://www.kaggle.com/datasets/jeinsong/chatgpt-images-w-prompts</a>)</li>
</ul>
<p>Most importantly, we designed a custom prompt generator that we found to be very effective. It formed prompts from random combinations of subjects (40k people, places, and things), artistic media, adjectives, and verbs.</p>
<p><strong>Validation Set</strong></p>
<p>I'm sure most of us struggled at first to find a good validation set. We found it was important to ensure that our validation prompts came from a variety of sources and had a low correlation with our training data. </p>
<p><strong>Models</strong></p>
<p>Our best models were CLIP vision towers, especially <code>'convnext_xxlarge.clip_laion2b_soup'</code> and <code>'eva02_large_patch14_clip_336.merged2b'</code>, and we also included a few of the smaller ViT and convnext models from open_clip/timm in our final ensemble.</p>
<p>While these CLIP backbones had the best single model scores, we found a robustness benefit from also including a small weight on a zero-shot image captioning model in our final ensemble, though it did not make a large difference which one we used. Our top model uses the GIT captioning model (<a href="https://arxiv.org/abs/2205.14100" target="_blank">https://arxiv.org/abs/2205.14100</a>).</p>
<p><strong>Head Pretraining Trick</strong></p>
<p>Since we were using CLIP models, we found a pretraining trick that allowed us to initialize the model's projection head that maps the CLIP backbone output to the 384 embedding used by the competition. We used our custom prompt generator to generate a million random prompts, and we created their text embeddings with the CLIP model's text tower as well as the text embeddings using the competition metric (<code>all-MiniLM-L6-v2</code>). Then we trained a simple linear layer that mapped the CLIP embedding to the competition embedding (e.g. <code>nn.Linear(in_features=clip_embeddings_size,out_features=384,bias=True)</code> )</p>
<p>We used this as the pretrained weights for our model's projection head, which added information and performed better than using random weights to initialize the head.</p>
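<p>A sketch of that head-pretraining step (my reconstruction: the mapping is the <code>nn.Linear</code> described above, but the optimiser, learning rate, loss, and placeholder names are assumptions):</p>
<pre><code>import torch
import torch.nn as nn

# clip_embedding_size: the CLIP text/vision embedding width (e.g. 768 or 1024 depending on the model)
# loader yields pairs of precomputed embeddings for ~1M generated prompts:
#   clip_emb   - CLIP text-tower embedding of the prompt
#   minilm_emb - all-MiniLM-L6-v2 embedding of the same prompt (the competition space)
proj = nn.Linear(clip_embedding_size, 384, bias=True)
optimizer = torch.optim.AdamW(proj.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for clip_emb, minilm_emb in loader:
    optimizer.zero_grad(set_to_none=True)
    loss = loss_fn(proj(clip_emb), minilm_emb)
    loss.backward()
    optimizer.step()

# proj's weights then initialise the vision model's projection head instead of random init
</code></pre>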
<p><strong>Training</strong></p>
<p>We tried a few different variations of training techniques, and our favorite runs generally included:</p>
<ul>
<li>2 epochs</li>
<li>layer rate decay</li>
<li>very little image augmentation (small amount of random/resize crop)</li>
<li>large amounts of gradient accumulation to simulate large batch sizes</li>
</ul>
<p>We found the WiSE-FT technique was very effective in improving single model scores, but not helpful in our ensemble. </p>
<p><strong>Single Model LB Scores</strong></p>
<p>In case you are curious, here are the LB scores of our top models:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F642735%2Fe87c32b582ab1f658a722de90addcaa7%2Fprivate_lb_score_single_model.png?generation=1684201595735019&alt=media" alt="private_lb_score"></p>
<p><strong>Conclusions</strong></p>
<p>Thanks to the organizers, and congrats to the other contestants!</p>
<p><strong>Edit May 18</strong><br>
Based on comments below, we shared the following, which I will add to the original post.</p>
<ul>
<li><p>The prompt generator is here <a href="https://www.kaggle.com/datasets/rturley/custom-prompt" target="_blank">https://www.kaggle.com/datasets/rturley/custom-prompt</a> with sample code in this notebook <a href="https://www.kaggle.com/code/rturley/custom-prompt-generator-for-image-generation" target="_blank">https://www.kaggle.com/code/rturley/custom-prompt-generator-for-image-generation</a></p></li>
<li><p>The first 100k images that came from this prompt generator are in this dataset <a href="https://www.kaggle.com/datasets/rturley/stable-diffusion-100k-custom-prompts-and-images" target="_blank">https://www.kaggle.com/datasets/rturley/stable-diffusion-100k-custom-prompts-and-images</a> with sample code in this notebook (<a href="https://www.kaggle.com/rturley/tour-of-stable-diffusion-custom-prompt-dataset" target="_blank">https://www.kaggle.com/rturley/tour-of-stable-diffusion-custom-prompt-dataset</a>)</p></li>
</ul>
<p>If there is any interest in knowing more detail or having us post sample code or datasets, please let us know. </p>
<p>Sure. I've just shared them.</p>
<ul>
<li><p>The prompt generator is here (<a href="https://www.kaggle.com/datasets/rturley/custom-prompt" target="_blank">https://www.kaggle.com/datasets/rturley/custom-prompt</a>) with sample code in this notebook (<a href="https://www.kaggle.com/code/rturley/custom-prompt-generator-for-image-generation" target="_blank">https://www.kaggle.com/code/rturley/custom-prompt-generator-for-image-generation</a>)</p></li>
<li><p>The first 100k images that came from this prompt generator is in this dataset (<a href="https://www.kaggle.com/datasets/rturley/stable-diffusion-100k-custom-prompts-and-images" target="_blank">https://www.kaggle.com/datasets/rturley/stable-diffusion-100k-custom-prompts-and-images</a>) with sample code in this notebook (<a href="https://www.kaggle.com/rturley/tour-of-stable-diffusion-custom-prompt-dataset" target="_blank">https://www.kaggle.com/rturley/tour-of-stable-diffusion-custom-prompt-dataset</a>)</p></li>
</ul>
<p><a href="https://www.kaggle.com/marbury" target="_blank">@marbury</a> asked if we would share our custom prompt generator. </p>
<p>The code and word dictionaries for the prompt generator are in this dataset: <a href="https://www.kaggle.com/datasets/rturley/custom-prompt" target="_blank">https://www.kaggle.com/datasets/rturley/custom-prompt</a></p>
<p>Here is a notebook to shows how to use it to generate custom prompts quickly (10k in 2 seconds): <a href="https://www.kaggle.com/code/rturley/custom-prompt-generator-for-image-generation" target="_blank">https://www.kaggle.com/code/rturley/custom-prompt-generator-for-image-generation</a></p>
<p>Most of our final models trained for two epochs. I think we had one or two models that used an earlier configuration where we trained for 1 epoch with a frozen backbone (aka the "linear probe") and then trained for 2 more epochs, but when we used our pretraining trick for the model head, the frozen epoch was no longer helpful.</p>
<p>Thanks, you too!</p>
<p>1) It might have been more precise to say “layer-wise learning rate decay”, but basically, I wrote a simple function that grouped the PyTorch model parameters by layer, so there would be something in the range of 27 to 60 parameter groups. Then, each parameter group has a different learning rate with the highest being the model’s head and the learning rate exponentially decreasing by the decay parameter (around 0.85). So the lowest layers of the model are hardly changing because the learning rate is so small.<br>
2) Yes, I found this helps the validation score. For large models where the GPU memory only allows very small batch sizes, the gradient accumulation groups them together to act like a larger batch.</p>
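<p>A small sketch of the accumulation logic described here (illustrative only; <code>loss_fn</code>, the loader, and the optimizer are placeholders):</p>
<pre><code>accum_steps = max(1, 4000 // batch_size)     # simulate an effective batch of ~4,000

optimizer.zero_grad(set_to_none=True)
for step, (images, target_embs) in enumerate(loader):
    loss = loss_fn(model(images), target_embs) / accum_steps   # scale so gradients average
    loss.backward()
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad(set_to_none=True)
</code></pre>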
<p>Yes, with the model head already pretrained with reasonable values, the trick of training the head with a frozen backbone for one epoch (which is normally very helpful) was not longer necessary. So we went straight to training the whole backbone with layer-wise learning rate decay.</p>
<p>We collected or generated 3 million images used in training. As the competition was ending, I was starting to believe it would have been more effective (and efficient) to stick with 1 million or less of the highest quality image/prompt pairs. If we had an additional week in the competition, I would have built a smaller and more selective dataset.</p> |
Stable Diffusion - Image to Prompts | Silver medal solution: ViT, ConvNext, kNN-regression with BLIP-2 captions | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stable Diffusion - Image to Prompts
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you to the host, organizers, all the participants and my teammate <a href="https://www.kaggle.com/manwithaflower" target="_blank">@manwithaflower</a>!</p>
<p>Brief summary of our solution:</p>
<ul>
<li><p>Models trained to predict prompt embeddings (ViT, ConvNext):</p>
<ul>
<li>First, train a separate MLP projection head for each model using selected image-text pairs from the LAION dataset. Filter the images based on image/text cosine similarity, retaining only the highest-scoring ones. This actually results in a dataset primarily consisting of images of written text with related prompts containing that same text - but it is still surprisingly effective, achieving 0.54434 public/0.54351 private on a single model.</li>
<li>Fine-tune the models further with selected prompt/image pairs. Filter the prompts based on cosine similarity, the more aggressive filtering the better, with our final filtering threshold of 0.55 cosine similarity. Mainly use data from DiffusionDB 14M, <a href="https://www.kaggle.com/xiaozhouwang" target="_blank">@xiaozhouwang</a> GPT2 & Hardcoded datasets, <a href="https://www.kaggle.com/jeinsong" target="_blank">@jeinsong</a> ChatGPT dataset, <a href="https://www.kaggle.com/motono0223" target="_blank">@motono0223</a> dataset, images from Discord downloaded with URLs from Open Prompts dataset, and images downloaded from URLs in <a href="https://www.kaggle.com/tanreinama" target="_blank">@tanreinama</a> 900k dataset.</li>
<li>Apply layerwise learning rate decay for fine-tuning.</li>
<li>Use a loss function based on angular distance, which outperforms raw cosine similarity (a minimal sketch follows after this list).</li>
<li>For consistent local validation, filter prompts from the training dataset that are similar to validation prompts based on cosine similarity.</li>
<li>Determine ensemble weights for the models using Nelder-Mead minimization algorithm on local validation.<br>
The resulting ensemble achieves 0.59333 public/0.59109 private on the leaderboard.<br>
<br></li></ul></li>
<li><p>Retrieval-based/kNN-regression approach with BLIP-2 image captioning:</p>
<ul>
<li><p>Improve <a href="https://www.kaggle.com/motono0223" target="_blank">@motono0223</a>'s kNN-regression approach:</p>
<ul>
<li>Filter out duplicate text embeddings from the datasets.</li>
<li>Combine multiple sub-arrays of numpy arrays.</li>
<li>Penalize distances between sample and examples differently by simple distance exponentiation.</li>
<li>Add more diverse data.</li>
<li>Memory-map stored reference prompt embedding arrays for efficient access without loading the files into memory.</li></ul>
<p>This allows to reduce inference time to 1 hour and achieves 0.57179 public/0.56952 private.</p></li>
<li><p>Generate 20 diverse image captions for each test image using BLIP-2 and diverse beam search decoding. Store these embeddings along with the corresponding reference embeddings, and use them with the retrieval-based/kNN-regression approach. This achieves 0.58821 public/0.58617 private.</p></li></ul></li>
<li><p>Ensemble both approaches! The resulting ensemble scores 0.61004 public/0.60787 private on the leaderboard.</p></li>
</ul>
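<p>The angular-distance loss mentioned above can be written in a few lines; this is a generic sketch of the idea (the exact scaling and clamping are my assumptions, not the team's code):</p>
<pre><code>import math
import torch
import torch.nn.functional as F

def angular_distance_loss(pred, target, eps=1e-7):
    # arccos of the cosine similarity: penalises the angle between prediction and target
    cos = F.cosine_similarity(pred, target, dim=-1).clamp(-1 + eps, 1 - eps)
    return torch.acos(cos).mean() / math.pi
</code></pre>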
<p>Since we only need specific SBERT embeddings, it is not efficient to load entire files into memory, especially considering that Kaggle datasets are mounted on network storage and can be slow to read. Therefore, it is sensible to use <a href="https://numpy.org/doc/stable/reference/generated/numpy.memmap.html" target="_blank">memory-mapped files with numpy</a> to read only the necessary embeddings directly from the stored files</p> |
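<p>In numpy this is essentially a one-liner; a small illustrative example (the file name and index variables are placeholders, not the actual pipeline):</p>
<pre><code>import numpy as np

# open the stored reference prompt-embedding bank lazily; rows are only read from disk when indexed
bank = np.load("reference_prompt_embeddings.npy", mmap_mode="r")      # shape (n_reference, 384)
neighbour_embs = np.asarray(bank[neighbour_ids])                      # pulls just the needed rows into RAM
pred_emb = np.average(neighbour_embs, axis=0, weights=neighbour_weights)
</code></pre>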
Stanford Ribonanza RNA Folding | 10th Place Solution for the Stanford Ribonanza RNA Folding Competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stanford Ribonanza RNA Folding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you. It could be interesting if you tried this loss on your model. However, your score is already so high that it's hard to tell whether it would contribute anything with the current data. Maybe with some tinkering.</p>
<p>Initially, I had not intended to publish my solution since I used an interesting method for the loss (that I did not see in other solutions, although I might have missed it), and since I did not finish within the prize range, I wanted to keep it to myself for some other future compatible competition. However, since apparently there are plans for a paper by the hosts, I eventually opted to publish it.</p>
<h1>Context section</h1>
<p><a href="https://www.kaggle.com/competitions/stanford-ribonanza-rna-folding" target="_blank">Business context</a>.<br>
<a href="https://www.kaggle.com/competitions/stanford-ribonanza-rna-folding/data" target="_blank">Data context</a>.</p>
<h1>Overview of the Approach</h1>
<p>I used a 1Dconv+transformer model. The model does not include positional encoding since the conv layers take care of positional relationship ‘automatically,’ allowing for straightforward generalization for longer sequences. I based my model on a previous work of <a href="https://www.kaggle.com/hoyso48" target="_blank">@hoyso48</a> in the ISLR competition; see <a href="https://www.kaggle.com/competitions/asl-signs/discussion/406684" target="_blank">HERE</a> and <a href="https://www.kaggle.com/competitions/asl-signs/discussion/406978" target="_blank">HERE</a>. BTW, congrats for 2nd place. You know, Hoyeol is probably the person from whose work I learned the most. This solution is already the second gold I get, thanks to the knowledge I got from his work. So, this is a special thank you. <br>
Hoiso’s original model was causal in time; I changed it to be symmetric to both sides by employing suitable paddings and maskings. <br>
I trained two kinds of models; model_1 was trained only on the sequences, and model_2 included CapR and bpp sum&max per nucleotide.<br>
Things that helped me (I did not perform a proper ablation study, so I will not give exact numbers most of the time):</p>
<ol>
<li>Clipping the predictions with a sigmoid (regular clipping also helps, but a sigmoid helps even more). It also made the convergence smoother.</li>
<li>One-hot encoding of the nucleotides instead of embeddings (this is mainly for the seq-only model. When including more features, one-hot is a given)</li>
<li>CapR</li>
<li>BPP sum&max<br>
I also found the injection of bpps values to the attention helpful, even more so with 2D conv layers on the bpps in various schemes. HOWEVER…I was very very suspicious of bpps. I feared that they would be much worse on the private test set since it consists of new sequences, and I thought that even if it was good for the public set, maybe the BPPs models were developed on similar sequences, but for the private set, it would fail. So I chose only to include sum&max per nucleotide, thinking that these ‘averaged’ values would be less prone to overfitting. Also, tbh, I did not have enough time at the end to do proper research and ensemble of the more complicated models due to trying too many ideas in too little time, lol.</li>
<li>Weighted loss function:<br>
Let's start from the end.<br>
loss(reactivity, reactivity_error, pred) = MAE(reactivity, pred) * potentials <br>
with: <br>
potentials = 1 - 2 * dist.cdf(reactivity - |reactivity - pred|) <br>
and: <br>
dist = normal_distribution(mu = reactivity, sigma = reactivity_error)<br>
Now, to explain why.<br>
I thought a lot about the loss function. Since we were given the errors, I searched for an appropriate way to incorporate uncertainty in the measurements into the loss. I was a bit surprised when I could not find something useful- of course, I maybe did not dig deep enough, but at the very least, it seems that a lot of digging is required. I found several things about incorporating uncertainties into the predictions (i.e., predicting the uncertainties of the predictions), but I wanted the opposite.<br>
Now, the obvious thing to do is directly weigh the loss by the error- to give some examples, Hoiso (2nd place) used log1p error, and I saw suggestions like 1/(1+err), etc. However, I needed more than this: it punished high and low errors too much. To understand why, let's talk for a second about the meaning of error- generally, it means that instead of knowing the exact value, we can only give an approximation for the true value, with a distribution of probability for it to be any number, and the probability given by normal distribution with sigma = error. When we say that reactivity=0 ± 1, we give a probability of 0.68 for the exact value to be between -1 to 1 (i.e., within one standard deviation). <br>
Let's assume that we have reactivity1 =0 ± 1 , reactivity2 =0 ± 3 , pred1 =pred2 =100 . If we normalize, for example, by 1/(1+err), and with MAE, then loss1 =50, loss2 = 25. So the first prediction contributes to the loss twice as much as the second one, even though, if we look at the distribution of probabilities- the first one has 0.68 to be in [-1,1] and the second one has 0.68 to be in [-3,3] (and 0.95 to be in [-6,6]), so for a prediction so far away from most of the distribution, I expect both predictions to contribute to the loss about the same. Of course, we clip the predictions between 0 and 1, making the analysis more complicated and depending on the exact loss and clipping each person applies. <br>
On the other hand, assume we have reactivity1=0 ± 0.01, reactivity2=0 ± 0.1, pred1=pred2=0.05; then we get loss1=0.0495, loss2=0.0454. This time, they are almost the same- even though we know that reactivity1 has a probability of 0.68 to be in [-0.01, 0.01], which is far from the prediction relative to the prediction for reactivity2, which lies inside one standard deviation of the ground truth distribution, so I expect it to contribute much less to the loss. <br>
One way to address this issue is to perturb the reactivity values according to their probability distribution. However, this leads to unstable training since many values have a high error, and moreover, they get perturbed outside the range of [0,1], leading to more complications.<br>
I wanted to properly incorporate our knowledge about the probability distribution of the values into the loss function. At this point, I took some inspiration from the shell theorem (check it <a href="https://en.wikipedia.org/wiki/Shell_theorem" target="_blank">on Wikipedia</a>), and by considering the probability distribution as a form of ‘potential,’ I finished with the above formula. <br>
This loss helped my convergence. I don’t remember precisely how much, maybe about 0.001? Not by a huge amount. But it certainly helped. It also made it even smoother. I think this method allows the inclusion of high error values while squeezing as much information as possible from them. Also, I believe this is better than trying to pseudo-label high-error values since noisy data is still data with real information inside. We just need to squeeze this information out…<br>
But wait, there is more. What if I suspect the errors we were given themselves are inaccurate?<br>
No problem. Just multiply the given errors by some factor, add some constant errors, and assemble many models with randomly chosen factors.<br>
At this point, I remembered the method of label smoothing for logistic regression, and, well, this is basically a form of label smoothing for regression. Once I framed it as label smoothing, it was easy to find similar ideas, e.g. <a href="https://proceedings.mlr.press/v80/imani18a/imani18a.pdf" target="_blank">this paper</a>. Although it seems like they overcomplicated things there a bit.<br>
Note that this works the same for other losses, e.g., for MSE, multiply MSE by the same 'potentials,' etc. A minimal code sketch of this weighting appears right after this list.</li>
</ol>
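<p>A minimal PyTorch sketch of the 'potentials'-weighted MAE from item 5 above (the clamping of the error and the mean reduction are simplifications, not the exact training code):</p>
<pre><code>import torch
from torch.distributions import Normal

def potential_weighted_mae(pred, reactivity, reactivity_error, eps=1e-6):
    # dist = normal_distribution(mu=reactivity, sigma=reactivity_error), as defined above
    dist = Normal(loc=reactivity, scale=reactivity_error.clamp_min(eps))
    # potentials = 1 - 2*cdf(reactivity - |reactivity - pred|):
    # close to 0 when the prediction sits inside the error distribution, close to 1 when it is far outside
    potentials = 1.0 - 2.0 * dist.cdf(reactivity - (reactivity - pred).abs())
    return ((reactivity - pred).abs() * potentials).mean()
</code></pre>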
<h1>Details of the submission</h1>
<ol>
<li><p>Ensembling (obviously):<br>
Here is a nice plot of the power of ensembling for the sequence-only models (model_1)<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6832115%2F81ab8378d36c6bc5a08c64da32642008%2Fensembling_1.png?generation=1703442429686814&alt=media" alt=""></p></li>
<li><p>Ensembling models with bpp+CapR with sequences-only models<br>
I trusted my sequences-only models much more than the +bpp&CapR ones, so I submitted one ensemble of pure sequences-only models. I planned the other ensemble to be only +bpp&CapR models. But then, on the very last day, just when I had the two ensembles ready, I looked again at the pictures of the results for R1138v1 predictions. See more details <a href="https://www.kaggle.com/competitions/stanford-ribonanza-rna-folding/discussion/444653" target="_blank">here</a>. Basically, we wanted the models to replicate the following image:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6832115%2F506500c54df757dd1b8046843f9a25e8%2FR1138v1.png?generation=1703442477381564&alt=media" alt=""><br>
Here is the picture of my sequence-only models ensemble:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6832115%2F6a3fece98bcbd1a2deb22b7b5dfa52cc%2FR1138v1%20model_1.png?generation=1703442512137941&alt=media" alt=""></p>
<p>It’s a bit hard to see, but it recovers the central line, especially in the 2A3 picture. However, the sidelines are missing. On the other hand, for the +bpp&CapR models ensemble:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6832115%2F8cd376a57c59e86e7bcb09917ef4e6e6%2FR1138v1%20model_2.png?generation=1703442537753029&alt=media" alt=""><br>
Here, it’s the opposite. I have visible traces of the sidelines but almost nothing for the central one (maybe one very vague point).<br>
These images bring an immediate idea…more ensembling…(this was not obvious at first since the BPP&CapR models/ensemble had a much higher validation/public score than the sequences-only ones)<br>
I had only three submissions left. I tried the following weights: 6:2, 5:3, and 4:4 in favor of the sequence-only ensemble. 4:4 had a better score, but I was too suspicious of bpp and CapR, so I chose the 5:3 weights, which had a similar score to the original bpp+CapR-only ensemble but were expected to be more robust.<br>
After the competition ended, I tried other weights. Here is the complete analysis:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6832115%2Fa86e7996fb22dca69e22fe486ff72282%2Fensembling_2.png?generation=1703442561705283&alt=media" alt=""><br>
Even if I had chosen the best weight, I would still have ended outside the prize range, so I don’t feel too terrible about missing here. I could have ended up in 8th place, though, if I had chosen the 4:4 weights on the last day. Keep in mind that my solution does not include the edge information of the BPPs, as opposed to most (all?) other top solutions - maybe except the 3rd place, who got an extremely strong sequences-only solution, much better than mine.</p></li>
</ol>
<h1>More things I tried that were not included in the final solution or didn’t work out (possibly also due to lack of time):</h1>
<p>Using the external data source as another head for the loss; using the external data source to train a model, predicting on the train set, and using those predictions as features; and different bpp predictions (eterna, contra, etc.) did not contribute much or at all. Structures (i.e., (.) notation) also were not especially useful. I had plans for the 3D features but not enough time…I tried other schemes for the loss, for example, minimizing log loss (sigmoid) (i.e., framing it as logistic regression since the values are between [0,1]) or directly minimizing the ‘potentials’ (see #5. Weighted loss functions)- these did not work. I tried various schemes for direct injection of bpp into the attention, with or without 2D conv, and it did help (at least +0.001 in validation, probably more if I had submitted it), but as I said before, I did not trust the bpps and the public LB enough. </p>
<h1>Validation scheme</h1>
<p>During experimentations, I trained on the middle reactivities of the length 177 sequences (between nucleotides #36 and #116, since we were told that the private set might include data on the nucleotides on the edges). I validated on the length 206 sequences and the start/end of 177 length sequences (up to the #36 nucleotide and from the #116 one). For submission, I trained on sequences from all sizes, including the edges reactivities, and validated each model on a random part of the sequences I did not train on for the specific model.</p>
<h1>Main points to take from my solution</h1>
<p>My loss function and ensembling with sequences-only models might have a chance to improve the scores of the top models/ensembles a bit. However, I would not give it too high a chance since their scores are much above mine, and they probably captured most of what there was to capture already. Even so, I think that when we have more data (private test is 1M new sequences, and there are talks about producing ten times more data), bpps and CapR methods would be less and less useful while squeezing more data from the measurements (as I tried to do with my loss function) would become more useful than now. So I still hope that my work will be of some help.</p>
<h1>Sources</h1>
<p>I based my model on Hoyso's solution to the ISLR competition; see <a href="https://www.kaggle.com/competitions/asl-signs/discussion/406684" target="_blank">HERE</a> and <a href="https://www.kaggle.com/competitions/asl-signs/discussion/406978" target="_blank">HERE</a>. <br>
I used <a href="https://www.kaggle.com/code/ratthachat/preprocessing-deep-learning-input-from-rna-string" target="_blank">ratthachat's notebook</a> to calculate the CapR values.<br>
Check my <a href="https://github.com/shlomoron/Stanford-Ribonanza-RNA-Folding-10th-place-solution" target="_blank">GitHub</a> for data preparation and training code.</p> |
Stanford Ribonanza RNA Folding | 11th place solution: Transformer with MFE distance embeddings (with code) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stanford Ribonanza RNA Folding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I greatly appreciate the efforts of the hosts in organizing this competition. This was an unfamiliar domain filled with both challenges and fun. I wonder if the competition was worth it for organizers after all. Hopefully, the solutions shared by kagglers will prove beneficial to you and other researchers.</p>
<p>Here is a quick summary of my solution:</p>
<h2>Features</h2>
<ul>
<li><strong>Eternafold MFE and BPPs</strong>: I experimented with other packages (RNAstructure, Vienna, CONTRAfold, rnasoft, ipknot). I found that only the features from Eternafold contributed significantly to my model.</li>
<li><strong>CapR structure</strong>. This led to faster convergence of the model.</li>
<li><strong>bpRNA structure</strong>. I am unsure of its contribution to the score.</li>
</ul>
<h2>Model architecture</h2>
<p>I forked Huggingface's BERT and implemented several modifications:</p>
<ul>
<li><strong>Attention injection support</strong>: I added BPPs with 2d cnn on top to 8/16 of model's attention heads. The most effective variant involved a separate CNN layer for each attention head. However, this significantly increased training time with only a marginal improvement in score. I sacrificed a few score points in favor of faster experimentation. Most of my models in the final ensemble only use one 2d cnn layer per transformer block.</li>
<li><strong>1d CNN layers between transformer layers</strong>: This addition was based on the premise that closely situated nucleotides influence each other, and a CNN layer could effectively extract these features.</li>
<li><strong>Relative embeddings</strong>: I only used relative embeddings in my model. Initially, they seemed a safer choice due to the length differences between train/test sequences. More importantly, they appeared more intuitive in the context of RNA structure, as opposed to absolute embeddings. I used two distance types:<ul>
<li>The shortest distance along ribose-phosphate chain.</li>
<li>The shortest distance along the MFE structure (a short sketch of this computation appears right after this list). </li>
<li>To illustrate this point, consider the image below, which depicts an RNA molecule from <a href="https://www.kaggle.com/code/brainbowrna/rna-science-computational-environment" target="_blank">this notebook</a>. C and G are encircled and located at opposite ends of the RNA chain. Despite their positioning, they are actually paired with each other, indicating that the distance between them is merely 1 in terms of the RNA folding structure. <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1420334%2Fbdcae78bf822ea1e1f00b4192ab4e784%2Frna_image_dists.png?generation=1702011163871853&alt=media" alt=""></li></ul></li>
</ul>
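<p>A minimal sketch of how the shortest distance along the MFE structure can be computed (assuming a plain dot-bracket string without pseudoknot brackets; an illustration rather than the exact preprocessing code):</p>
<pre><code>from collections import deque

def mfe_distances(dot_bracket):
    # graph edges: ribose-phosphate backbone (i, i+1) plus one edge per base pair,
    # so paired nucleotides are at distance 1 even if far apart along the chain
    n = len(dot_bracket)
    adj = [set() for _ in range(n)]
    for i in range(n - 1):
        adj[i].add(i + 1)
        adj[i + 1].add(i)
    stack = []
    for i, c in enumerate(dot_bracket):
        if c == '(':
            stack.append(i)
        elif c == ')':
            j = stack.pop()
            adj[i].add(j)
            adj[j].add(i)
    # BFS shortest paths from every nucleotide
    dist = [[None] * n for _ in range(n)]
    for s in range(n):
        dist[s][s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if dist[s][v] is None:
                    dist[s][v] = dist[s][u] + 1
                    queue.append(v)
    return dist
</code></pre>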
<h2>Performance</h2>
<p>The single model achieved a public score of approximately 0.1432. However, by averaging models with minor variations, I was able to achieve a score below 0.14. </p>
<p>The code is available at <a href="https://github.com/chubasik/stanford-ribonanza-rna-folding" target="_blank">https://github.com/chubasik/stanford-ribonanza-rna-folding</a></p>
<p>P.S: I had to post it a second time for it to be marked as a write-up</p> |
Stanford Ribonanza RNA Folding | 15th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stanford Ribonanza RNA Folding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First and foremost, I'd like to express my gratitude to Kaggle for hosting this competition, and to the organizers for their active involvement and responsiveness in the forums. I gained valuable insights from participating in this competition. Special thanks to my teammates, Anton <a href="https://www.kaggle.com/ant0nch" target="_blank">@ant0nch</a> and María <a href="https://www.kaggle.com/manaves" target="_blank">@manaves</a>, and a shout-out to María for her extensive domain knowledge as a biotechnologist. Lastly, I want to extend my thanks to my employer, Freepik, for providing additional computing resources that proved crucial in the final weeks.</p>
<p>Our solution is based on a single transformer encoder module. We used the standard pytorch implementation with <code>d_model: 256</code>, <code>dim_feedforward: 768</code>, <code>num_layers: 16</code>, <code>dropout: 0.1</code>, GELU activation and normalization first.</p>
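<p>A minimal sketch of this configuration with PyTorch's standard modules (the number of attention heads and the batch_first flag are assumptions that are not stated above):</p>
<pre><code>import torch.nn as nn

encoder_layer = nn.TransformerEncoderLayer(
    d_model=256,
    nhead=8,                 # assumption: the head count is not given in the write-up
    dim_feedforward=768,
    dropout=0.1,
    activation="gelu",
    norm_first=True,         # "normalization first"
    batch_first=True,        # assumption
)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=16)
</code></pre>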
<p>The model was trained on a 5-fold split grouped by clusters, and clusters were obtained using KMeans of A, C, G, U -> 1, 2, 3, 4 mapped vectors.</p>
<p>We used AdamW with weight decay 1e-2 and OneCycleLR schedule with <code>pct_start: 0.02</code> and <code>max_lr: 1e-3</code>.</p>
<p>The encoder input is:</p>
<ul>
<li>A 252 dimensional vector encoding the sequence as the sum of nucleotide embeddings:</li>
</ul>
<pre><code># Truncated in the original post; presumably a nucleotide embedding of output dimension 252
# (the exact arguments below are an assumption):
self.embs = nn.Embedding(num_embeddings=4, embedding_dim=252)
</code></pre>
<ul>
<li>We also encoded the secondary structures from the 47 provided algorithms using an embedding bag where only present data counted towards the mean. Data from the structures were encoded like this: . -> 1, ( -> 2, ) -> 3, [ or < or { -> 4, ] or > or } -> 5.</li>
</ul>
<pre><code># Truncated in the original post; presumably an EmbeddingBag in 'mean' mode over the five
# structure tokens described above (the exact arguments below are an assumption):
self.se_embs = nn.EmbeddingBag(num_embeddings=6, embedding_dim=252, mode='mean', padding_idx=0)
</code></pre>
<ul>
<li><p>The two embeddings were added together, and then the following features from the BPP files were concatenated for each nt: max, median, mean and std.</p></li>
<li><p>Additionally, both the eternafold BPP data and adjacency matrices computed from the secondary structures were used as attention biases via a learnable linear layer. We used 48 matrices for regular pair adjacency and another 48 matrices for pseudoknot adjacency, plus the BPP matrix (so total = 48 * 2 + 1). We augmented the given data by generating additional secondary structures with mxfold2.</p></li>
</ul>
<pre><code># Keyword names were garbled in the original post; kernel_size=1 and padding=0 are recovered,
# the bias flag is an assumption.
self.algo_conv = nn.Conv2d(48 * 2 + 1, nhead, kernel_size=1, padding=0, bias=False)
</code></pre>
<ul>
<li>Positional encoding: we used the standard sinusoidal positional encoding from the original transformer paper. We attempt to achieve longer sequence generalization by assigning each nucleotide in the sequence random correlative positions sampled uniformly from <code>[0, 512[</code>. This hopefully forces the network to learn from the relative order of the nucleotides rather than their absolute position.</li>
</ul>
<pre><code>
# Reconstructed from a garbled snippet; attribute names (self.max_seq_len, self.pos) are assumptions.
rnd_pos = torch.empty((n, s), dtype=torch.int64, device=pred.device)
for row in rnd_pos:
    # s sorted positions sampled uniformly from [0, max_seq_len) for each sequence
    row[:] = torch.randperm(self.max_seq_len, device=pred.device)[:s].sort().values
pos = self.pos(rnd_pos)
</code></pre>
<p>We trained several models for 100 epochs with a batch size of 32 and different ablations of the above features, different SNR filtering strategies (>1 and >0.5) and different GroupKFold shuffle seeds. In the end, we averaged the predictions of all 5 folds of our 7 best models.</p>
<h2>What didn't work:</h2>
<ul>
<li>training in bfloat16 gave substantially poorer results</li>
<li>pseudolabels, or we just didn't know how to implement this properly</li>
</ul> |
Stanford Ribonanza RNA Folding | 16th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stanford Ribonanza RNA Folding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First off, I wanted to thank the competition hosts for putting together this competition as well as the community for all of the discussion.</p>
<h1>Summary</h1>
<ul>
<li>Transformer architecture<ul>
<li>Separate models for 2A3 and DMS</li></ul></li>
<li>Used Pr(base paired) derived from BPPM</li>
<li>Dynamic position bias</li>
</ul>
<h1>Approach</h1>
<h2>Data:</h2>
<p>The input features were:</p>
<ul>
<li>Sequence</li>
<li>Pr(base paired) (i.e. sum along one axis of the BPPM) from 4 RNA structure packages</li>
</ul>
<p>I was hesitant to include the BPPM for a few reasons that I thought would result in hindering the generalized performance for longer sequences:</p>
<ol>
<li>Base-base pairing probability is overly specific ‘wrong’ information</li>
<li>Lack of consideration of pseudoknots</li>
<li>Dilution of individual probabilities for longer test sequences compared to the shorter train sequences</li>
</ol>
<p>My compromise was to consider the global probability of pairing for each base. <a href="https://www.nature.com/articles/s41592-022-01605-0" target="_blank">This paper from the Das lab</a> assessed how Pr(paired) as calculated by various RNA structure packages compared to chemical mapping assay results. They found CONTRAfold 2, ViennaRNA 2 at 60 degrees C, and RNAsoft with BLStar parameters to be the most effective approaches. Therefore, I generated Pr(paired) vectors for all sequences from these three methods as well as Eternafold to use as features.</p>
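<p>For reference, a minimal sketch of deriving Pr(base paired) from a base-pairing probability matrix (assuming a square numpy BPPM whose entry (i, j) is the probability that bases i and j pair):</p>
<pre><code>import numpy as np

def prob_paired(bppm):
    # summing over all possible partners gives each base's total probability of being paired
    return np.clip(bppm.sum(axis=1), 0.0, 1.0)
</code></pre>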
<p>As an aside, I used Arnie to run these packages and spent more time than I care to admit getting it setup for multiprocessing, although overall I think it ended up saving time. I did try to expand to additional methods, however, I pretty quickly abandoned it due to bugs / run time constraints.</p>
<p>Earlier on in the competition, I did also try various structure features generated from ViennaRNA (MFE, centroid, ensemble). Compared to sequence alone, I was able to get improved performance, however, it added no benefit once the Pr(base paired) feature was included.</p>
<h2>Architecture:</h2>
<p>My approach ended up being a single transformer architecture separately trained to predict 2A3 or DMS.</p>
<p><strong>Seq</strong></p>
<ul>
<li>4 token embedding -> 512 output dimension</li>
</ul>
<p><strong>Pr(base paired)</strong></p>
<ul>
<li>Linear layer 4 input dimension -> 512 output dimension</li>
</ul>
<p><strong>Transformer</strong></p>
<ul>
<li>18 layers, 512 dimension, 8 heads</li>
<li>Postnorm</li>
<li>Dynamic position bias</li>
<li>Intra-attention Gating on Values</li>
<li>Input: Seq + Pr(base paired)</li>
</ul>
<p>This was my first time working with a transformer architecture. Luckily, I found the library <a href="https://github.com/lucidrains/x-transformers" target="_blank">x-transformers</a> by <a href="https://www.kaggle.com/lucidrains" target="_blank">@lucidrains</a> and used that to try out / learn about numerous variations of transformers.</p>
<p><strong>What helped:</strong><br>
<em>Dynamic Position Bias</em><br>
One of my first goals was to understand how to generalize a transformer to longer sequences. Initially I found the ALiBi paper and thought that would be a useful approach. However, based on the x-transformers documentation it appeared as if dynamic position bias would be more likely to be effective in this context. I did not attempt any direct comparisons between ALiBi (or other relative position biases) and dynamic position bias since I did not trust performance differences on length extrapolation from the train set alone. Dynamic position bias did show the largest bump of any modification on the public vs. private sets (public - 0.15400 vs 0.15002; private - 0.18476 vs 0.14976). Note: this result was before the change to using Pr(base paired) data.</p>
<p><em>Intra-attention Gating on Values</em><br>
This is an attention variant from Alphafold2 that gates the aggregated values with the input. This was one of the feature variations available on x-transformers that appeared to have a small improvement on performance. Note: I added this one a while back and just kept it in, so it is entirely possible it adds nothing of value in the final model.</p>
<p><em>Postnorm</em><br>
One of the first changes I assessed with the transformer was to switch from the generally more preferred prenorm to the original postnorm approach. This resulted in improvements to convergence rate and performance. I rechecked this multiple times over the course of the competition as the architecture and input data approaches shift and consistently got the same type of result. I also attempted the various other norm approaches found in x-transformers, but none resulted in any noticeable improvement.</p>
<p><em>18 Layers</em><br>
18 layers seemed to be the sweet spot of performance and training time. The main depths I tested were 12, 18, 24. 12 vs 18 showed a small performance improvement on the leader boards (public - 0.14383 vs 0.14330; private - 0.14595 vs 0.14517). 18 vs 24 I did not notice any difference in the CV score, so I did not submit any 24 layer models.</p>
<p><em>Sum embeds</em><br>
My approach to integrating the embedded sequence and the Pr(base paired) after passing through a linear layer (both of dimension 512) was to simply sum them elementwise. I tried other approaches, but they all appeared to have negative impacts to convergence and/or performance.</p>
<h2>Training:</h2>
<p>I used the splits used in <a href="https://www.kaggle.com/iafoss" target="_blank">@iafoss</a> starter transformer notebook and all of my experimental work was done using fold 0. Because I ended up using individual models for 2A3 and DMS prediction, the set of sequences satisfying SN >= 1.0 and reads >= 100 was higher for each group since there was a subset of sequences where only one of 2A3/DMS satisfied the constraint.</p>
<p>Additionally, I incorporated the react_error values as a means of weighting the loss so that I could use more of the lower quality sequences with greater confidence (luckily this happened right after the time the issue with react_error was identified and corrected). The react_error values were strictly greater than 0 and were usually fairly low (I believe the median on at least the high quality samples was ~0.125). I decided to try weighting the loss of each base using an exponential function:</p>
<p>Weighting v1:<br>
exp(-react_error)</p>
<p>Weighting v2:<br>
exp(-(react_error^2))</p>
<p>This would result in a weight value between 0 and 1 for each base in the sequence. While I don’t have a good mathematical rationale for this approach and I am sure there may be better options, it did give a noticeable boost to CV performance (~0.0005 - 0.001), including when using only the higher quality sequences.</p>
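<p>A minimal sketch of this weighting (assuming per-base tensors and an L1 objective; masking of missing labels is omitted):</p>
<pre><code>import torch

def error_weighted_l1(pred, target, react_error, version=2):
    # v1: exp(-err), v2: exp(-err^2); both map low errors to weights near 1 and high errors toward 0
    if version == 1:
        weights = torch.exp(-react_error)
    else:
        weights = torch.exp(-react_error.pow(2))
    return (weights * (pred - target).abs()).sum() / weights.sum().clamp_min(1e-8)
</code></pre>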
<p>My approach to training ended up being somewhat erratic for the final model.</p>
<ul>
<li>2A3 model<ul>
<li>fit_one_cycle(epochs=256, lr=5e-4, pctstart=0.01)</li>
<li>SN >= 1.0, reads >= 100</li>
<li>exp(-(react_error^2)) weighting</li>
<li>Early stopping at epoch 45</li></ul></li>
<li>2A3 model part 2<ul>
<li>Fine tune 2A3 model, lr=5e-5</li>
<li>SN >= 0.2, reads >= 75</li>
<li>exp(-react_error) weighting</li></ul></li>
<li>DMS model<ul>
<li>Fine tune 2A3 model part 2, lr=5e-5</li>
<li>SN >= 1.0, reads >= 100</li>
<li>exp(-(react_error^2)) weighting</li></ul></li>
<li>DMS model part 2<ul>
<li>Fine tune DMS model, lr=5e-5</li>
<li>SN >= 0.2, reads >= 75</li>
<li>exp(-react_error) weighting</li></ul></li>
</ul>
<p>As I mentioned above, I used fold 0 of my CV splits for all experimentation. Unfortunately, I lost time testing out other architectural approaches in the final 2 weeks and only settled on coming back to the split models due to the slight performance improvement within the last day, so I was unable to run the full set of CV folds. However, in a last minute effort to at least capture some of the fold 0 validation data in my model I ended up running 1 epoch on this data for my second final submission, which ended up giving me a nice little last minute boost on the leaderboard (public: 0.14330 vs 0.14199; private: 0.14517 vs 0.14458).</p> |
Stanford Ribonanza RNA Folding | 184th Place Solution ESM2 (2 models) using RMDB + QUICK_START datasets | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stanford Ribonanza RNA Folding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hey everyone,</p>
<p>Hope y'all had a good time! This was my first competition and I got lucky after the LB shake-up haha (top-68% public -> top-25% private). Super excited for competitions like this in the future.</p>
<p>Here's a link to my solution - if you are interested in seeing how finetuned ESM2 performs on RMDB + QUICK_START Dataset using 2 models (1 for each experiment type):</p>
<p><a href="https://www.kaggle.com/code/pranshubahadur/esm2-rmdb-rna-dataset" target="_blank">https://www.kaggle.com/code/pranshubahadur/esm2-rmdb-rna-dataset</a></p>
<p>This was a regression problem in predicting the reactivity of RNA sequences. Needless to say, I learned a lot about NLP while solving this problem statement. </p>
<p>It could've been much better if I had explored the bpp files imo…I will learn from my mistakes and improve my approach in the future…</p>
<p>Be sure to follow / upvote if you like my work & it brought some value to you!</p>
<p>Thank you for the opportunity hosts!</p>
<p>Congratulations to the winners!</p> |
Stanford Ribonanza RNA Folding | 19th Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stanford Ribonanza RNA Folding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you to the host! It was a very interesting competition.<br>
Here is my solution.</p>
<h1>TL;DR</h1>
<ol>
<li>Conv + Transformer + LSTM Architecture</li>
<li>Pseudo Labeling</li>
<li>Weighted Average</li>
</ol>
<p>An important point is that I used the test data during the training process (via pseudo labeling) to gain robustness on the private data.</p>
<h1>Solution</h1>
<h2>Architecture</h2>
<p>basic architecture</p>
<ol>
<li>Conv + Transformer + LSTM</li>
<li>Conv + LSTM + Graph Attention</li>
</ol>
<h2>Training Strategy</h2>
<ol>
<li>MLM</li>
<li>Training</li>
<li>Pseudo Labeling</li>
</ol>
<h3>MLM</h3>
<p>I first pretrained a Masked Language Model (a Masked RNA Model?).<br>
I randomly mask out part of the RNA sequence (with probability 0.3) and predict the masked tokens.</p>
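<p>A minimal sketch of that masking step (assuming integer-encoded sequences and a dedicated mask token id; an illustration rather than the exact pretraining code):</p>
<pre><code>import torch

def mask_sequence(tokens, mask_id, mask_prob=0.3):
    # randomly hide ~30% of the nucleotides; the model is trained to recover them
    mask = torch.rand(tokens.shape, device=tokens.device) < mask_prob
    corrupted = tokens.clone()
    corrupted[mask] = mask_id
    return corrupted, mask   # compute the loss only on the masked positions
</code></pre>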
<h3>Training</h3>
<h4>Feature</h4>
<ol>
<li>sequence</li>
<li>predicted_loop(eternafold/contrafold_2/vienna_2)</li>
<li>structure(eternafold/contrafold_2/vienna_2)</li>
</ol>
<p>For features 2 and 3 I use one-hot encoding, but expressed as probabilities across the three packages (for example, if eternafold and contrafold_2 predict '(' while vienna_2 predicts ')', the encoding becomes '(': 0.66, ')': 0.33).</p>
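<p>A minimal sketch of this probability encoding across the three packages (an illustration of the idea, not the exact feature code):</p>
<pre><code>from collections import Counter

def structure_probs(chars):
    # chars: one bracket character per package, e.g. ['(', '(', ')'] from
    # eternafold / contrafold_2 / vienna_2 gives '(' -> 2/3 and ')' -> 1/3
    counts = Counter(chars)
    return {c: counts[c] / len(chars) for c in '().'}
</code></pre>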
<h4>Training Strategy</h4>
<p>Training is very simple</p>
<ol>
<li>use Filtered Dataset</li>
<li>weighted loss(sequence)</li>
<li>remove some sequence(structure/predicted_loop)</li>
</ol>
<h3>Pseudo Labeling</h3>
<p>The private dataset sequences are longer than the public ones, so I decided to gain robustness from the test data.<br>
But since this competition is a regression task, I cannot get confidence scores for pseudo labels.<br>
So I used the pseudo-labeled test data for pretraining, and then trained on the training dataset only.</p>
<p>I applied this pseudo labeling iteratively.<br>
A few steps later, I also relabeled the noisy training data (SN_filter = 0) and trained on it together with the pseudo-labeled data.</p>
<p>Public: 0.141->0.139->0.1384->0.1383</p>
<h2>Ensemble</h2>
<p>I used a weighted average.<br>
I checked only the public LB because I didn't trust the CV score when using pseudo labeling.</p>
<h2>Note</h2>
<p>I forgot to swap the old dataset for the new one….<br>
It's very important… (a single model achieves Private 0.143..)</p>
Stanford Ribonanza RNA Folding | 21th solution: ESM2 + custom folding head | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stanford Ribonanza RNA Folding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Pretraining was key. We saw a performance improvement with more pretraining. I'm sorry I can't say more because I have not tried without a pretrained ESM2.</p>
<p>Your score is not that bad!</p>
<p><strong>TLDR</strong></p>
<p>We entered the competition quite late, our first sub was from 10 days ago. Given the limited time, we decided to bet on a low development path and leverage the excellent ESM2 model family from META.</p>
<p>We pretrained ESM2 backbones on competition data and RNA Central long sequences, then added custom folding heads and fine tuned on DMS and 2A3 labels.</p>
<p>We honestly are a bit surprised by our end result, we expected worse when we started. </p>
<p><strong>ESM2 Pretraining</strong></p>
<p>We have some experience pretraining <a href="https://github.com/facebookresearch/esm" target="_blank">ESM2</a> protein language models, so we thought they would be a good fit for this competition. We pretrained three versions of ESM2 models (35M, 150M, 650M) on both train and test sequences using MLM (masked language modeling) loss. </p>
<p>In particular, we used huggingface’s <a href="https://huggingface.co/docs/transformers/model_doc/esm#transformers.EsmForMaskedLM" target="_blank"><code>EsmForMaskedLM</code></a> model class and <a href="https://huggingface.co/docs/transformers/main/main_classes/data_collator#transformers.DataCollatorForLanguageModeling" target="_blank"><code>DataCollatorForLanguageModeling(tokenizer, mlm=True)</code> </a> data collator which supports MLM loss.</p>
<p>Since there are no train sequences and only 8000 test sequences longer than 208, we downloaded <a href="https://rnacentral.org/" target="_blank">RNA Central </a> data and selected 12 million samples with length between 115 and 457 (the range in this competition data) to complement the competition data.</p>
<p>We pretrained 5-20 epochs depending on how many RNA central data used, for a total of 25M to 70M total samples seen.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F75976%2Feb2f9716c901ef7021c64e644a16d2a6%2Fesm2.png?generation=1702046716073336&alt=media" alt=""></p>
<p><strong>Folding Head</strong></p>
<p>We adapted the model CPMP used in the Covid Vaccine competition three years ago to devise a folding head on top of ESM2 token embeddings. That model was presented <a href="https://www.kaggle.com/competitions/stanford-covid-vaccine/discussion/189723" target="_blank">here</a>.</p>
<p>The folding head takes as input both the ESM2 token embeddings and the bpp files. Its main component is an attention/convolution layer, repeated twice, then a linear classification head. </p>
<p>The Attention/Convolution layers are similar to the transformer encoder structure: an attention layer followed by a convolution bloc with skip connection.<br>
The attention layer is a bpp attention layer followed by a bidirectional RNN (GRU or LSTM yields two model variants). Sure, GRU is not attention, but here it plays the same role as an attention layer where each node attends its two neighbors in the sequence.<br>
The bpp attention is a standard attention layer where attention weights are the bpp values.<br>
The convolution layer was designed after the Efficientnet convolution bloc. We then stack these as in this <a href="https://www.kaggle.com/thebigd8ta/open-vaccine-pytorch-v" target="_blank">notebook</a>.</p>
<p>We bet on this RNN/Convolution architecture rather than a transformer for two reasons:</p>
<ul>
<li>We literally started working on it 5 days before competition end, and we did not have time to tune a new architecture</li>
<li>we hoped that ESM2 attention would have already learned what a transformer head attention would learn.</li>
</ul>
<p><strong>Finetuning</strong></p>
<p>Our best models are the ones with the custom folding head on top of the ESM2 backbone. We also trained simple models with just a linear head. The final ensemble is a mix of both, most being with a folding head.</p>
<p>We used four variants of training data:<br>
Quick start<br>
Quick start + SN >= 0.3<br>
Quick start + SN >= 0.3, and SN as sample weight<br>
Quick start + SN >= 0.1, and SN as sample weight<br>
SN is signal_to_noise clipped to (0, 1). We used it as a sample weight in some variants. Model quality increased from the top variant to the bottom one.</p>
<p>We used a 5 fold CV and submitted the average of each fold model predictions.</p>
<p><strong>Scores</strong></p>
<p>Our best single model is a 650M LSTM model with SN weight. CV = 0.12847, Public LB = 0.14308 (5 fold ensemble)<br>
Our best ensemble is a 15 model ensemble (75 folds in total), CV = 0.12478, Public LB = 0.14174</p>
<p><strong>What didn’t work</strong></p>
<p>Pretrained models from <a href="https://github.com/ml4bio/RNA-FM" target="_blank">RNA-FM</a> were not as good as our ESM2 pretrained models.</p>
<p>ESM2 can also output contact predictions. To do so it stacks all attention activations, then it applies a regression head. We tried to use that to predict bpp values via an auxiliary loss. This improved the competition metric score a bit. But this comes at the expense of a 4x space and a 2x time increase, which prevented us from using it with 150M and 650M variants of ESM2.</p>
<p>Bo & CPMP</p>
<p>no, we replaced the vocab.txt files used by the ESM tokenizer. We used this as <code>rna_vocab.txt</code>:</p>
<pre><code><cls>
<pad>
<eos>
<unk>
A
C
G
U
<null_1>
<mask>
</code></pre>
<p>Then create the tokenizer with:</p>
<p><code>tokenizer = EsmTokenizer('../rna_vocab.txt', model_max_length=context_length)</code></p>
<p>Yes, we have submitted ESM2 alone. Best one has 0.14781 private / 0.14359 public. But it looks like an outlier, others have much worse scores. </p>
<p><a href="https://www.kaggle.com/boliu0" target="_blank">@boliu0</a> did the ESM2 part, maybe he can say more.</p> |
Stanford Ribonanza RNA Folding | 2nd place solution - Squeezeformer + BPP Conv2D Attention | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stanford Ribonanza RNA Folding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you! 🫡</p>
<p>That's a great question! In fact, it is indeed not necessary in terms of CV(+0.0001 in my case without AliBi). However, the reason I used it in the model is that I thought alibi bias would indeed generalize well for long sequences. As you can see, alibi bias simply adds negative values to the attention matrix for distant frames. It's a very simple method, but the reason it provides robustness for long sequences, as I understand it, is simply because it just <strong>blocks attention between frames that are too far apart</strong>. </p>
<p>If you look at the table, you will see that absolute positional encoding (sinusoidal), unlike relative positional encoding, has significantly lower robustness for long sequences. As I understand it, this is because when the model receives a long sequence it has not seen during training, for each query, there is a relatively higher likelihood of attending to all positions(even when very distant) of the key compared to the relative method, which ultimately means that due to the normalization by softmax, it is very likely to get a less sparse attention matrix compared to when a short sequence is input. </p>
<p>Even if you use conv1d instead of positional encoding, the same issue may arise in the Transformer's attention as there is no constraint on the score between long frames. I haven't read the alibi paper in detail or conducted any experiments, but I decided to just add the alibi bias based on this hypothesis. It is just naive thoughts on it but I hope this answer is helpful…!<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5003978%2F3950e09c5ad2e1d18d6125571f2b60fa%2Fimage.png?generation=1702058837308276&alt=media" alt=""> </p>
<p>yep, this is just the hypothesis and definitely need some experiments to check it's also applicable in RNA sequences. But as far as I think Transformer can always capture distant relationships very well(whenever it can) other than any other architectures with multiple layers. and I put robustness on top of everything so I felt adding it is more safe.</p>
<p>I tried it once, but it didn't work. I can't fully explain it, but I think it's because there's already a certain amount of noise in the dataset. (Adding any kind of noise to the dataset in the form of augmentation also didn't work in my case)</p>
<p>Thanks to Kaggle and the hosts for organizing this competition. It was truly inspiring and challenging, and I learned a lot from this one.👍</p>
<h3>Code: <a href="https://github.com/hoyso48/Stanford---Ribonanza-RNA-Folding-2nd-place-solution" target="_blank">https://github.com/hoyso48/Stanford---Ribonanza-RNA-Folding-2nd-place-solution</a></h3>
<h1>TLDR</h1>
<p><strong>Keypoints:</strong></p>
<ul>
<li>Squeezeformer[1] + GRU head.</li>
<li>Simple Conv2DNet for bpp, adding it as a bias to the attention matrix.</li>
<li>ALiBi positional encoding[2] for robust generalization on longer sequences.</li>
<li>Weighted loss with signal_to_noise, with longer epochs.</li>
<li>Additional features for minor score improvements.</li>
</ul>
<p>I adopted Squeezeformer, which I became familiar with after the ASL fingerspelling competition. Thanks to <a href="https://www.kaggle.com/christofhenkel" target="_blank">@christofhenkel</a> and <a href="https://www.kaggle.com/goldenlock" target="_blank">@goldenlock</a> for their solutions in the last ASL competition. The most crucial part of my solution is how to utilize the bpp matrix. I applied a simple shallow Conv2DNet to bpp and directly added it to the attention matrix.</p>
<p><strong>Features:</strong></p>
<p>I used some features found useful in the OpenVaccine Challenge, to help fast initial convergence. These included:</p>
<ul>
<li>CapR looptype.</li>
<li>eternafold mfe.</li>
<li>predicted Looptype with eternafold mfe.</li>
<li>bpp features (sum, nzero, max).</li>
</ul>
<p>However, unlike in the OpenVaccine challenge, these features only marginally helped (about -0.0005). Therefore, I believe these features should be removed in the future for simplicity.</p>
<h1>Model</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5003978%2Fb3c99f0b3ace13f5c4381c094b899d46%2F-3.drawio-2.png?generation=1702054547887335&alt=media" alt=""></p>
<p><strong>Squeezeformer Encoder:</strong></p>
<p>I chose Squeezeformer with minor modifications (BN after conv1d, SwiGLU in FFN, etc.), which mixes Conv1D blocks with Transformer. While I tried other recent Conv-Transformer Hybrid architectures, Squeezeformer was the most efficient. Compared to a Vanilla Transformer, Squeezeformer showed strong performance early in training and consistently showed faster convergence.</p>
<p>The models used the following parameters: dim=192, num_heads=4, kernel_size=17, num_layers=12.</p>
<p><strong>GRU head:</strong></p>
<p>Adding a single GRU layer after the encoder yielded minor improvements.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5003978%2F3191b22f70fe4b38f04b6664871d2b96%2F.drawio-10.png?generation=1702054568362614&alt=media" alt=""></p>
<p><strong>ALiBi positional encoding:</strong></p>
<p>I adopted AliBi positional encoding as it claimed to generalize better over long sequences than other methods.</p>
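<p>A minimal sketch of a symmetric (bidirectional) ALiBi-style bias (the geometric slope schedule follows the ALiBi paper for power-of-two head counts; using the absolute distance |i-j| is an adaptation for an encoder and may differ from the exact implementation):</p>
<pre><code>import torch

def alibi_bias(seq_len, num_heads):
    # head h gets slope 2^(-8*(h+1)/num_heads); distant positions receive a larger negative bias
    slopes = torch.tensor([2.0 ** (-8.0 * (h + 1) / num_heads) for h in range(num_heads)])
    pos = torch.arange(seq_len)
    dist = (pos[None, :] - pos[:, None]).abs().float()   # (L, L) pairwise distances
    return -slopes[:, None, None] * dist                 # (heads, L, L), added to the attention logits
</code></pre>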
<p><strong>BPP as Attention Bias:</strong></p>
<p>The bpp matrix (using only the provided one) was added as a bias to the attention matrix after being multiplied by per-head predefined scales, which significantly improved performance (around -0.0025).</p>
<p><strong>BPP 2DConvNet:</strong></p>
<p>Using Bpp directly as an attention bias was a good start, but I felt it needed more flexibility(I felt it was too sparse). Among various options, adding a 2D CNN on top of the BPP matrix proved very helpful (-0.002). However, multiple 2D CNNs applied to the BPP matrix (usually 206 x 206) were inefficient in terms of training/inference time. Thus, I just used a simple shallow 2-layer 2DCNN, with the output matrix shared across all Transformer block layers.</p>
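<p>A minimal PyTorch sketch of this shared BPP bias (the actual model is Squeezeformer-based, so this only illustrates the mechanism; the channel width, kernel size, and activation are assumptions):</p>
<pre><code>import torch
import torch.nn as nn

class BPPAttentionBias(nn.Module):
    """Shallow 2-layer 2D CNN over the (L x L) BPP matrix, producing one bias map per attention head."""
    def __init__(self, num_heads, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, hidden, kernel_size=3, padding=1),
            nn.GELU(),
            nn.Conv2d(hidden, num_heads, kernel_size=3, padding=1),
        )

    def forward(self, bpp):                   # bpp: (B, L, L)
        # computed once per batch and added to the pre-softmax attention scores of every block
        return self.net(bpp.unsqueeze(1))     # (B, num_heads, L, L)
</code></pre>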
<h1>Training</h1>
<ul>
<li>Epochs: 200.</li>
<li>Batch size: 256.</li>
<li>Learning rate: 2e-3, with Cosine Decay and warmup.</li>
<li>Optimizer: AdamW, weight decay = 0.01.</li>
<li>Loss: Weighted MAE (weight = log1p(signal_to_noise).clip(0,10)).</li>
</ul>
<p>The single model CV (K-fold, k=5) scored 0.119, with a public LB of 0.140 and a private LB of 0.142. After ensembling over different seeds I got a public LB of 0.135 and a private LB of 0.140.<br>
Although there were some questionable correlations between CV/LB in certain submissions, generally, they aligned well with the CV.<br>
With the above setup, training a single model took around 30 hours on a single RTX 4090. </p>
<h2>Discarded ideas & Thoughts:</h2>
<ul>
<li><strong>Self-Supervised Learning (SSL)</strong>: At first I was motivated to participate in this competition because it would be really nice if an SSL method could be successfully applied without using any features other than the sequence. Initial trials with Data2Vec and BERT-like SSL methods showed inconsistent improvements. Due to the additional training time required, I did not consider SSL further. However, I believe there is still huge potential in this idea.</li>
<li><strong>Large Models</strong>: Attempts to train larger models (dim > 512) with proper regularizations were unsuccessful. I think this, together with the SSL failure, suggests that the primary challenge lies in the inherent noise within the training dataset.</li>
<li><strong>Augmentations</strong>: Most augmentation methods I tried had no effect.</li>
<li><strong>Pseudo Labels</strong>: While pseudo labeling might help on the LB, it didn't improve CV in my case, so I didn't use it, for safety and training-time reasons. However, after seeing the correlation between the public and private LB, I think it might have been slightly beneficial on both.</li>
</ul>
<p>For me, this competition was a series of choices regarding whether to experiment with or adopt some promising ideas, especially when there were only 2-3 weeks left. Some of the ideas I thought might be helpful were abandoned without further consideration because they required more time for implementation and training. I think that this strategy may have made my solution somewhat suboptimal or redundant, but overall I see it worked quite well as my solution appeared to capture most of the crucial aspects of other teams' solutions.</p>
<h2>References</h2>
<p>[1]Sehoon Kim, Amir Gholami, Albert Shaw, Nicholas Lee, Karttikeya Mangalam, Jitendra Malik, Michael W. Mahoney, and Kurt Keutzer. 2022. Squeezeformer: An Efficient Transformer for Automatic Speech Recognition. arXiv:2206.00888 [eess.AS]. <a href="url" target="_blank">https://doi.org/10.48550/arXiv.2206.00888</a></p>
<p>[2]Ofir Press, Noah A. Smith, Mike Lewis. 2022. Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation. arXiv:2108.12409 [cs.CL]. <a href="url" target="_blank">https://doi.org/10.48550/arXiv.2108.12409</a></p>
<p>Hi, there are BPPs for the test sequences in a folder Ribonanza_bpp_files.</p>
<p>Ablation study results may vary depending on the final model, but if I remember correctly, GRU gave approximately -0.0003. And the features you pointed out were the most important. I believe both CapR and eternafold mfe are -0.0002.</p>
<p>Congratulations on third place too! I'm very excited for your next move.👍</p> |
Stanford Ribonanza RNA Folding | 36th solution : Cheap Transformers that can run on gaming laptops and does not require a BPP matrix | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stanford Ribonanza RNA Folding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for your comment. As far as I know, the following point is important to use this architecture.</p>
<h2>Using Eternafold to get structure</h2>
<p>It improves the CV score by approximately 0.01. I tried other libraries such as U-Fold, R-fold, Vienna RNA, etc., but none of them worked well.</p>
<h2>Keeping sequence length 64-128</h2>
<p>Sequences that are too long or too short make the LB score worse.</p>
<p>First of all, I would like to thank all the participants in the competition. My solution is a cheap transformer, named "Cheap Transformers", that does not require the bpp matrix and can train and predict with limited computational resources.</p>
<h1>Generalization to long sequences</h1>
<p>My starting point is <a href="https://www.kaggle.com/code/iafoss/rna-starter-0-186-lb" target="_blank">Iafoss's amazing work</a> where the reactivities are predicted by simple transformers.</p>
<p>It works very well in cross validation with random fold splitting, but gives poor performance when the train and validation data are split by sequence length, like the following. </p>
<pre><code># Reconstructed from a garbled snippet; the column name and LENGTH_THRESHOLD are assumptions.
LENGTH_THRESHOLD = ...  # the length cut-off separating short from long training sequences
short_indices = df[df['sequence'].str.len() < LENGTH_THRESHOLD].index.tolist()
long_indices = df[df['sequence'].str.len() > LENGTH_THRESHOLD].index.tolist()
</code></pre>
<p>Therefore, I need another architecture to generalize to long sequences.</p>
<h1>Cheap Transformer</h1>
<p>This is the architecture of the Cheap Transformer.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1233385%2Fef5a1e8baadc56507bee658eb624c633%2FScreenshot%20from%202023-12-08%2023-14-44.png?generation=1702046810306490&alt=media" alt=""> It just cuts a certain-length subsequence from the original sequence.</p>
<p>At inference time, it cuts the sequence in multiple ways and ensembles the prediction results for each character. The ensemble is important: it improves the CV score by about 0.005.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1233385%2Fbbaf5eee4d3b85a8c3ce62c7bc6cf34f%2FScreenshot%20from%202023-12-08%2023-14-52.png?generation=1702046847620018&alt=media" alt=""></p>
<p>This architecture uses short sequences, reduces training time, and requires less GPU memory during execution. Therefore, it runs fast enough even with the GPU of a gaming laptop.</p>
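<p>A minimal sketch of the crop-and-ensemble inference described above (predict_fn, the window size, and the stride are placeholders; the actual notebook may choose crops differently):</p>
<pre><code>import numpy as np

def windowed_predict(predict_fn, seq, win=96, stride=32):
    # cut overlapping fixed-length windows, predict each, and average the per-nucleotide results
    L = len(seq)
    preds, counts = np.zeros(L), np.zeros(L)
    starts = list(range(0, max(L - win, 0) + 1, stride))
    if starts[-1] != max(L - win, 0):
        starts.append(max(L - win, 0))        # make sure the tail of the sequence is covered
    for s in starts:
        e = min(s + win, L)
        preds[s:e] += np.asarray(predict_fn(seq[s:e]))[: e - s]
        counts[s:e] += 1
    return preds / np.maximum(counts, 1)
</code></pre>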
<h1>Final submission</h1>
<p>I prepared multiple models with cut sequence lengths of 64, 96, and 128 and with various transformer parameters, and created an ensemble. Pseudo labeling was also performed, but it had little effect on the LB. As you can see, the ensemble contributed to the LB, but its effect is not so big; therefore, using a single model would be the most practical.</p>
<h1>Scores</h1>
<table>
<thead>
<tr>
<th>Method</th>
<th>Public</th>
<th>Private</th>
</tr>
</thead>
<tbody>
<tr>
<td>Iafoss's architecture</td>
<td>0.15669</td>
<td>0.20204</td>
</tr>
<tr>
<td>Cheap transformers (len=64)</td>
<td>0.15635</td>
<td>0.15661</td>
</tr>
<tr>
<td>Cheap transformers (len=96)</td>
<td>0.15458</td>
<td>0.15445</td>
</tr>
<tr>
<td>Cheap transformers (len=128)</td>
<td>0.15518</td>
<td>0.15625</td>
</tr>
<tr>
<td>Public best (ensemble with Iafoss's arch)</td>
<td>0.14716</td>
<td>0.18222</td>
</tr>
<tr>
<td>ensemble</td>
<td>0.15271</td>
<td>0.15346</td>
</tr>
</tbody>
</table> |
Stanford Ribonanza RNA Folding | 403rd Place Solution for the Stanford Ribonanza RNA Folding Competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stanford Ribonanza RNA Folding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you to the organizers and Kaggle for hosting such a great competition.<br>
I value everyone's sharing at Kaggle. <br>
The public notebook provided by SWORDSMAN is amazing and I am thankful for it <a href="https://www.kaggle.com/code/liuyanfeng/stanford-rrf-tensorflow-tpu?scriptVersionId=150578739" target="_blank">https://www.kaggle.com/code/liuyanfeng/stanford-rrf-tensorflow-tpu?scriptVersionId=150578739</a>. Thank you for creating the notebook.</p>
<h1>Context</h1>
<p>Business context: <a href="https://www.kaggle.com/competitions/stanford-ribonanza-rna-folding/overview" target="_blank">https://www.kaggle.com/competitions/stanford-ribonanza-rna-folding/overview</a><br>
Data context: <a href="https://www.kaggle.com/competitions/stanford-ribonanza-rna-folding/data" target="_blank">https://www.kaggle.com/competitions/stanford-ribonanza-rna-folding/data</a></p>
<h1>Overview of the approach</h1>
<p>The SN_filter is set to 1 when signal_to_noise > 1.0 and reads > 100.<br>
Two training files are prepared, one for each experiment type (DMS_MaP and 2A3_MaP).<br>
The dataset is repeated 5 times.</p>
<h1>Described the models or algorithms used</h1>
<p>Tensorflow keras RNN with Attention.</p>
<table>
<thead>
<tr>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.15803</td>
<td>0.24428</td>
</tr>
</tbody>
</table>
<p>Attention 8 heads.<br>
6 layers.</p>
<p>GaussianNoise 0.01.<br>
Dense 400 relu.<br>
GaussianNoise 0.01.<br>
Dense 40 relu.<br>
Dense 2.</p>
<h4>RNN Model:</h4>
<table>
<thead>
<tr>
<th>Layer type</th>
<th>Param #</th>
</tr>
</thead>
<tbody>
<tr>
<td>PositionalEmbedding</td>
<td>990</td>
</tr>
<tr>
<td>EncoderLayer</td>
<td>1458968</td>
</tr>
<tr>
<td>EncoderLayer</td>
<td>1458968</td>
</tr>
<tr>
<td>EncoderLayer</td>
<td>1458968</td>
</tr>
<tr>
<td>EncoderLayer</td>
<td>1458968</td>
</tr>
<tr>
<td>EncoderLayer</td>
<td>1458968</td>
</tr>
<tr>
<td>EncoderLayer</td>
<td>1458968</td>
</tr>
<tr>
<td>Dropout</td>
<td>0</td>
</tr>
<tr>
<td>Dense</td>
<td>79600</td>
</tr>
<tr>
<td>Dense</td>
<td>16040</td>
</tr>
<tr>
<td>Dense</td>
<td>82</td>
</tr>
<tr>
<td>GaussianNoise</td>
<td>0</td>
</tr>
<tr>
<td>GaussianNoise</td>
<td>0</td>
</tr>
</tbody>
</table>
<p>Params: 8,850,520</p>
<h1>Described the data preprocessing, feature engineering, and/or feature selection strategy</h1>
<p>Encoding sin and cos.<br>
Embedding 2048.</p>
<h1>Details of the submission</h1>
<p>What worked:<br>
A custom loss function that takes two arguments, the target value and the predicted value (Sum 0 / Sum 1,0).<br>
Epoch 0 uses Lr 0.0005; for epochs 1-60 the Lr is calculated by multiplying the previous Lr by 0.904.</p>
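<p>A minimal sketch of that learning-rate schedule (wrapping it in tf.keras.callbacks.LearningRateScheduler is an assumption about how the notebook applies it):</p>
<pre><code>def lr_schedule(epoch, lr=None, base_lr=0.0005, decay=0.904):
    # epoch 0 uses the base LR; every later epoch multiplies the LR by 0.904
    return base_lr * (decay ** epoch)

# e.g. model.fit(..., callbacks=[tf.keras.callbacks.LearningRateScheduler(lr_schedule)])
</code></pre>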
<h1>Preventing overfitting and Validation Strategy</h1>
<p>KFold.</p>
<p>Values outside the interval [0, 1] are clipped.</p>
<h1>Code samples</h1>
<p>Source public notebook <a href="https://www.kaggle.com/code/liuyanfeng/stanford-rrf-tensorflow-tpu?scriptVersionId=150578739" target="_blank">https://www.kaggle.com/code/liuyanfeng/stanford-rrf-tensorflow-tpu?scriptVersionId=150578739</a></p>
<h4>Code samples prepare dataset</h4>
<pre><code># Reconstructed from a garbled snippet; the chunk count, the filter column, the dropped columns,
# and the output file name prefixes ('DMS_', '2A3_') are assumptions, not the notebook's exact values.
for n in range(num_chunks):
    df = df_c.get_chunk()
    df = df[df['signal_to_noise'] > 1.0]              # keep sequences passing the SN threshold (assumed column)
    df_DMS_MaP = df[df['experiment_type'] == 'DMS_MaP']
    df_2A3_MaP = df[df['experiment_type'] == '2A3_MaP']
    delete_list = []
    for k in df.columns:
        if 'reactivity_error' in k:                   # assumed: drop the reactivity error columns
            delete_list.append(k)
    df_DMS_MaP = df_DMS_MaP.drop(delete_list, axis=1)
    df_2A3_MaP = df_2A3_MaP.drop(delete_list, axis=1)
    if n == 0:
        df_DMS_MaP.to_csv('DMS_' + train_file, header=df_DMS_MaP.keys(), index=False)
        df_2A3_MaP.to_csv('2A3_' + train_file, header=df_2A3_MaP.keys(), index=False)
    else:
        df_DMS_MaP.to_csv('DMS_' + train_file, mode='a', header=False, index=False)
        df_2A3_MaP.to_csv('2A3_' + train_file, mode='a', header=False, index=False)
</code></pre>
<h4>Code samples demonstrating model training</h4>
<pre><code> GlobalSelfAttention(
num_heads=num_heads,
key_dim=d_model,
dropout=dropout_rate)
</code></pre>
<pre><code># GaussianNoise / Dense arguments restored from the model description above (0.01, 400 relu, 40 relu, 2)
x = self.pos_embedding(x)
x = tf.keras.layers.Dropout(dropout_rate)(x)
x = GlobalSelfAttention(num_heads=num_heads, key_dim=d_model, dropout=dropout_rate)(x)
x = FeedForward(d_model, dff)(x)
x = tf.keras.layers.GaussianNoise(stddev=0.01)(x)
x1 = tf.keras.layers.Dense(400, activation='relu')(x)
x1 = tf.keras.layers.GaussianNoise(stddev=0.01)(x1)
x1 = tf.keras.layers.Dense(40, activation='relu')(x1)
o1 = tf.keras.layers.Dense(2)(x1)
</code></pre>
<h4>Code samples model inference</h4>
<p>Predict two chemical modifiers DMS Y[:,1] and 2A3 Y[:,0].</p>
<pre><code># Reconstructed from a garbled snippet; the 'encoded_sequence' column name and the dtype are assumptions.
Y = model.predict(np.array(test_df.loc[span*g_i:span*(g_i+1)-1, 'encoded_sequence'].to_list()).astype(np.float32))
Y = np.clip(Y, 0, 1)   # clip predictions to the [0, 1] interval
</code></pre>
<h1>Sources</h1>
<p><a href="https://www.kaggle.com/code/liuyanfeng/stanford-rrf-tensorflow-tpu?scriptVersionId=150578739" target="_blank">https://www.kaggle.com/code/liuyanfeng/stanford-rrf-tensorflow-tpu?scriptVersionId=150578739</a><br>
<a href="https://www.kaggle.com/code/misakimatsutomo/stanford-rrf-tensorflow-tpu/edit/run/153311838" target="_blank">https://www.kaggle.com/code/misakimatsutomo/stanford-rrf-tensorflow-tpu/edit/run/153311838</a></p>
<p>Thank you for taking the time to read the writeup.</p> |
Stanford Ribonanza RNA Folding | 419th: Tensorflow and additionally ( Vienna RNA, Scikit Learn and XG Boost) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stanford Ribonanza RNA Folding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>**Summary</p>
<p>**The rna-modelling.ipynb notebook presents a comprehensive approach to RNA sequence modeling using TensorFlow and advanced data handling techniques. The notebook focuses on utilizing TensorFlow's TPU (Tensor Processing Unit) capabilities for efficient computation and handles various types of RNA experiment data, including DMS_MaP and 2A3_MaP. It integrates RNA data from different sources and prepares it for modeling.</p>
<p><strong>Introduction</strong></p>
<p>RNA modeling is crucial in understanding the complexities of biological processes. This notebook explores RNA sequence data, aiming to model and predict RNA structures and reactivities. The approach harnesses TensorFlow's distributed computing power and TensorFlow's TPU strategy, providing a robust platform for handling and analyzing large-scale RNA data. The notebook specifically addresses different experiment types in RNA data, indicating a versatile and comprehensive approach to RNA sequence modeling.</p>
<p><strong>Preprocessing</strong><br>
The preprocessing steps in the notebook include:</p>
<p>Data Loading: RNA sequence data is read and concatenated from CSV files, ensuring a comprehensive dataset.<br>
Data Integration: RNA data from the RNA Mapping Database (RMDB) and Kaggle's Stanford Ribonanza dataset are integrated.<br>
Experiment Type Handling: Different experiment types like DMS_MaP and 2A3_MaP are identified and handled separately.<br>
Sequence and Reactivity Extraction: Sequences and their respective reactivities are extracted and prepared for modeling.</p>
<p><strong>Model</strong><br>
TensorFlow and TPU Utilization: The notebook leverages TensorFlow's advanced features and TPU for efficient and scalable modeling.<br>
Data Preparation for Modeling: Sequences are prepared, and their corresponding reactivities are extracted to serve as the target variables for the models.<br>
Modeling Approach: While the specific details of the modeling approach are not explicitly stated in the provided excerpts, it's implied that advanced machine learning techniques, possibly involving neural networks or similar architectures, are used.</p>
<p><strong>Other Models</strong><br>
RNA Modeling with ViennaRNA (another-rna-modelling-with-viennarna.ipynb)<br>
Features and Models:</p>
<p>ViennaRNA and XGBoost: Integrates ViennaRNA for RNA folding and structure prediction with XGBoost for regression tasks.<br>
Feature Engineering: Extracts features like minimum free energy (MFE), base-pairing probabilities, GC content, and AU content from RNA sequences.<br>
Data Preprocessing: Utilizes pandas for data handling and scikit-learn's LabelEncoder for encoding categorical variables.<br>
Model Training and Validation: Splits data into training and validation sets, trains an XGBoost regressor, and computes Mean Absolute Error (MAE) as a performance metric.<br>
Submission Preparation: Generates predictions for submission based on the validation set and saves them in a CSV file.</p>
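<p>To make the described pipeline concrete, here is a minimal hedged sketch of feature extraction with the ViennaRNA Python bindings plus an XGBoost regressor; the exact features, hyperparameters, and target aggregation in the original notebook may differ:</p>
<pre><code>import RNA                      # ViennaRNA Python bindings
import xgboost as xgb

def rna_features(seq: str):
    structure, mfe = RNA.fold(seq)                        # dot-bracket structure and minimum free energy
    gc = (seq.count("G") + seq.count("C")) / len(seq)     # GC content
    au = (seq.count("A") + seq.count("U")) / len(seq)     # AU content
    paired = 2 * structure.count("(") / len(structure)    # fraction of paired bases
    return [mfe, gc, au, paired, len(seq)]

# X = [rna_features(s) for s in sequences]; y = per-sequence reactivity target (aggregation assumed)
model = xgb.XGBRegressor(n_estimators=500, max_depth=6, learning_rate=0.05)
# model.fit(X_train, y_train); mae = mean_absolute_error(y_valid, model.predict(X_valid))
</code></pre>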
<p><strong>Basic XGBoost for RNA Computing (basic-xg-boost-for-rna-computing (2).ipynb)</strong><br>
Features and Models:</p>
<p>XGBoost Focused: Primarily uses the XGBoost algorithm for regression tasks related to RNA reactivity.<br>
Simple Feature Engineering: Employs basic techniques like sequence length encoding and experiment type categorization.<br>
Model Training: Focuses on training an XGBoost regressor with prepared features and targets.<br>
Validation and Submission: Calculates the Mean Absolute Error (MAE) on the validation set, prepares mean and median predictions for submission, and saves the submission file.</p>
<p><strong>Evaluation</strong><br>
Performance Metrics: The notebook uses Mean Absolute Error (MAE) as the performance metric.<br>
Validation and Testing: The approach involves splitting the data into training, validation, and testing sets to ensure the model's effectiveness and generalizability.<br>
Practical Application: The model's predictions on RNA reactivities are crucial for understanding RNA behavior, which has practical implications in fields like bioinformatics and molecular biology.</p> |
Stanford Ribonanza RNA Folding | 44th Place Solution for the Stanford Ribonanza RNA Folding Competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stanford Ribonanza RNA Folding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>Context section</h2>
<ul>
<li>Business context: <a href="https://www.kaggle.com/competitions/stanford-ribonanza-rna-folding/overview" target="_blank">https://www.kaggle.com/competitions/stanford-ribonanza-rna-folding/overview</a></li>
<li>Data context: <a href="https://www.kaggle.com/competitions/stanford-ribonanza-rna-folding/data" target="_blank">https://www.kaggle.com/competitions/stanford-ribonanza-rna-folding/data</a></li>
</ul>
<h2>Overview of the Approach</h2>
<p>This is my first write-up (also the first time in the medal section 😁). Thanks to the organizers and participants! My approach was based on work we are doing in our lab for RNA secondary structure prediction [1] (this article includes source code). This is a ResNet-based model that takes the input sequences as one-hot representations and outputs the interaction matrix prediction (similar to the bpp provided). The model has a first 1D feature extraction stage, then it is converted to 2D with a simple matmul, and undergoes a second stage of 2D feature extraction to reach the final prediction. For this competition, the output was "flattened" using a sum by columns, thus arriving at a representation of the activation of each nucleotide in the sequence. The inner 2D representation allows us to add bpp information as an additional channel to learn features from.</p>
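<p>A minimal sketch of that 1D-to-2D idea (this is an illustration of the description above, not the sincFold code; layer sizes are arbitrary):</p>
<pre><code>import torch
import torch.nn as nn

class Seq1Dto2DSketch(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        # 1D feature extraction over the one-hot encoded sequence
        self.stage1d = nn.Sequential(
            nn.Conv1d(4, dim, 3, padding=1), nn.ReLU(),
            nn.Conv1d(dim, dim, 3, padding=1), nn.ReLU())
        # 2D feature extraction over the LxL interaction map
        self.stage2d = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1))

    def forward(self, x_onehot):                            # (B, 4, L)
        h = self.stage1d(x_onehot)                          # (B, dim, L)
        pair = torch.bmm(h.transpose(1, 2), h)              # (B, L, L) map via a simple matmul
        pair = self.stage2d(pair.unsqueeze(1)).squeeze(1)   # refined interaction matrix
        return pair.sum(dim=2)                              # column sum -> per-nucleotide activation (B, L)
</code></pre>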
<p>In terms of data it was a rather simple approach: the best model used the training data with SNR > 0.5. Train/test splits were performed on sequences clustered with cdhit-est (80% threshold) [2]. This is important to avoid overfitting to patterns in similar sequences. The provided BPPs were also used.</p>
<p><img src="https://github.com/sinc-lab/sincFold/blob/main/abstract.png?raw=true" alt=""></p>
<h2>Details of the submission</h2>
<p>I wanted to try different approaches but focused on improving the one detailed above. As training was time consuming, using only the medoids of each sequence cluster proved to reach competitive results (compared to my best submission), so that was the approach during development. It would have been nice to filter leaked test sequences, but it seems that my best solution on the public LB is also the best on the private one, so good news for the model. The final result averages 5 models, 3 of them using only medoids because I ran out of time. I also didn't use the test data for anything except looking at the public LB.</p>
<p>An interesting point is the use of BPP information. As it is based on classical RNA structure prediction methods that do not model pseudoknots, it could hurt model predictions in those cases. I trained models without BPP, reaching far worse average results (public LB ~0.16). I have not tried this on the private set yet, but analyzing a sample case as described in [3] it can be seen that the final ensemble model (middle image) misses the predictions pointed to by the arrow in the reference prediction (upper image), while the model without BPP (bottom image) shows some resemblance.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F208067%2F77df07e5804b090f0301093fe0dceb73%2Fplot_reffinalsinbpp.png?generation=1702052560298402&alt=media" alt=""></p>
<p>Things that didn't work: </p>
<ul>
<li>Tried white noise and flip augmentation</li>
<li>Using sequences with less than .5 snr</li>
<li>Tried to use errors during training but didn't reach a converging method. It could work, though.</li>
</ul>
<p>Hope we see more works on bio sequences!</p>
<h2>Sources</h2>
<p>[1] <a href="https://www.biorxiv.org/content/10.1101/2023.10.10.561771v1" target="_blank">https://www.biorxiv.org/content/10.1101/2023.10.10.561771v1</a><br>
[2] <a href="https://sites.google.com/view/cd-hit" target="_blank">https://sites.google.com/view/cd-hit</a><br>
[3] <a href="https://www.kaggle.com/competitions/stanford-ribonanza-rna-folding/discussion/444653" target="_blank">https://www.kaggle.com/competitions/stanford-ribonanza-rna-folding/discussion/444653</a></p> |
Stanford Ribonanza RNA Folding | 4th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stanford Ribonanza RNA Folding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>In my experiment, A4000 (16GB) x 4, bs=64 for each GPU (bs=32 for exp072)</p>
<p>First of all, a big thank you to Kaggle staff and host for providing a fun competition. <br>
<a href="https://www.kaggle.com/tattaka" target="_blank">tattaka</a> and <a href="https://www.kaggle.com/fuumin621" target="_blank">monnu</a> were able to complete their journey to Competitions Grandmaster with this result.</p>
<h2>Summary</h2>
<ul>
<li>Various models ensemble<ul>
<li>Modified RNAdegformer proposed by Shujun</li>
<li>1D Conv & Residual BPP Attention</li>
<li>Transformer with BPP Attention Bias</li></ul></li>
<li>bp_matrix generated by eternafold (partially contrafold)<ul>
<li>I didn't realize until halfway through that it was being provided.</li></ul></li>
<li>finetuning with higher s/n threshold</li>
<li>Pseudo Labeling using reactivity error prediction</li>
</ul>
<h2>Modified RNAdegformer</h2>
<p>code: <a href="https://github.com/tattaka/stanford-ribonanza-rna-folding-public" target="_blank">https://github.com/tattaka/stanford-ribonanza-rna-folding-public</a> </p>
<h3>Input</h3>
<ul>
<li>Sequence</li>
<li>BPP Matrix (by EternaFold and Contrafold)</li>
</ul>
<h3>Architecture</h3>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1745801%2Fd89808656d85bd143d94055a543a5a55%2F2023-12-09%2018.25.55.png?generation=1702114143461138&alt=media" alt=""><br>
Made some changes to the <a href="https://academic.oup.com/bib/article/24/1/bbac581/6986359" target="_blank">RNAdegformer</a>(<a href="https://github.com/Shujun-He/RNAdegformer" target="_blank">https://github.com/Shujun-He/RNAdegformer</a>) proposed by <a href="https://www.kaggle.com/competitions/stanford-covid-vaccine/discussion/189564" target="_blank">Shujun</a>, including the order of layers.</p>
<ul>
<li>kernel_size = 7 except for the last transformer, which is 1</li>
<li>postnorm</li>
<li>GLU family activation</li>
<li><a href="https://arxiv.org/abs/2108.12409" target="_blank">ALiBi</a> positional encoding is applied separately head from bp_matrix</li>
<li>Other minor changes across the ensembled variants (see the table below)</li>
</ul>
<table>
<thead>
<tr>
<th>model name</th>
<th>error prediction for pseudo labeling</th>
<th>use pseudo label</th>
<th>act_fn for feedforward</th>
<th>norm layer</th>
<th>add norm and act for conv1d</th>
<th>use contrafold(second bpps)</th>
<th>connect attn_weight to bpps bias</th>
<th>CV</th>
</tr>
</thead>
<tbody>
<tr>
<td>exp064</td>
<td>yes</td>
<td>no</td>
<td>SwiGLU</td>
<td>Layernorm</td>
<td>no</td>
<td>no</td>
<td>no</td>
<td>0.12087</td>
</tr>
<tr>
<td>exp070</td>
<td>no</td>
<td>exp064</td>
<td>SwiGLU</td>
<td>RMSnorm</td>
<td>yes</td>
<td>no</td>
<td>no</td>
<td>0.1199 / tiny: 0.12143</td>
</tr>
<tr>
<td>exp071</td>
<td>yes</td>
<td>no</td>
<td>GeGLU</td>
<td>RMSnorm</td>
<td>yes</td>
<td>yes</td>
<td>no</td>
<td>0.12146</td>
</tr>
<tr>
<td>exp072</td>
<td>no</td>
<td>exp064 + exp071</td>
<td>GeGLU</td>
<td>RMSnorm</td>
<td>yes</td>
<td>no</td>
<td>yes</td>
<td>0.11976</td>
</tr>
</tbody>
</table>
<h3>Training</h3>
<ul>
<li>simple kfold (k = 5)</li>
<li>1st stage: First train with sn > 0.5 (300epoch)</li>
<li>2nd stage: Then finetune with sn > 1.0 for a short number of epochs (15epoch)</li>
<li>When training with train dataset only, output errors for each nucleotide in addition to reactivity</li>
<li>For pseudo labels, use sn_pred>0.75 for the 1st stage, sn_pred>1.0 for the 2nd stage, and future=1 only<ul>
<li>Train from scratch with the pseudo labels added</li></ul></li>
<li>lr=1e-3, bs=256, AdamW(eps=1e-6), with warmup for 1st stage</li>
</ul>
<h3>Score for the single model</h3>
<p>best model: exp072 </p>
<ul>
<li>CV (k-fold): 0.11976</li>
<li>Public Score: 0.13681</li>
<li>Private Score: 0.14124</li>
</ul>
<h2>1D Conv & Residual BPP Attention</h2>
<p>code: <a href="https://github.com/fuumin621/stanford-ribonanza-rna-folding-4th" target="_blank">https://github.com/fuumin621/stanford-ribonanza-rna-folding-4th</a></p>
<p>Based on the 1D Conv + BPP Attention architecture proposed by <a href="https://www.kaggle.com/competitions/stanford-covid-vaccine/discussion/189241" target="_blank">nyanp</a>, we made improvements.</p>
<h3>Architecture</h3>
<ol>
<li>Sequence Embedding</li>
<li>Conv 1d</li>
<li>SE Residual & Residual BPP Attention x 12 layers<ul>
<li>Using BPP as attention weight</li>
<li>Reduced the kernel size of 1D convolution as the layer depth increased</li></ul></li>
<li>Bi-LSTM x 2 layers </li>
<li>Linear</li>
</ol>
<h3>Input</h3>
<ul>
<li>Sequence</li>
<li>BPP Matrix (by EternaFold)</li>
</ul>
<h3>Parameter</h3>
<ul>
<li>Drop out rate: 0.1</li>
<li>n_dim: 256</li>
<li>Kernel size: 9, 7, 5, 3 (decreases as the layer depth increases)</li>
<li>Learning rate: 4e-3 with cosine scheduler</li>
<li>Batch size: 64</li>
</ul>
<h3>Training</h3>
<p>The training strategy essentially adopted the same approach as the above Modified RNAdegformer</p>
<h3>Score for the single model</h3>
<ul>
<li>CV (k-fold): 0.12161</li>
<li>Public Score: 0.13889</li>
<li>Private Score: 0.1425</li>
</ul>
<h3>What Didn't Work</h3>
<ul>
<li>BPP packages other than EternaFold (contrafold_2)</li>
<li>Distance Matrix</li>
<li>Structure</li>
<li>BPP Feature Engineering (max, sum, nb_count)</li>
<li>Sample Weight by SN</li>
<li>etc…</li>
</ul>
<h2>Transformer with BPP Attention Bias (@ren4yu's Part)</h2>
<p>code: <a href="https://github.com/yu4u/kaggle-stanford-ribonanza-rna-folding-4th-place-solution" target="_blank">https://github.com/yu4u/kaggle-stanford-ribonanza-rna-folding-4th-place-solution</a><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1745801%2F7eea07453fa8011a1435c012533a084d%2Ftransformer.png?generation=1702099940846948&alt=media"></p>
<p>This model borrows the bpp attention bias idea from the RNAdegformer, but is closer to the original Transformer architecture.<br>
Instead of using positional embedding, Conv1D was used in FFN to give relative positional information.</p>
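<p>A hedged sketch of what such a Conv1D feed-forward block could look like (my interpretation of the description; the actual implementation is in the linked repository):</p>
<pre><code>import torch
import torch.nn as nn

class ConvFFN(nn.Module):
    def __init__(self, dim=192, hidden=768, kernel_size=7, dropout=0.1):
        super().__init__()
        # Conv1d instead of a pointwise MLP, so neighbouring tokens are mixed and
        # relative position is injected without an explicit positional embedding.
        self.conv1 = nn.Conv1d(dim, hidden, kernel_size, padding=kernel_size // 2)
        self.conv2 = nn.Conv1d(hidden, dim, kernel_size, padding=kernel_size // 2)
        self.act = nn.GELU()
        self.drop = nn.Dropout(dropout)

    def forward(self, x):                 # x: (B, L, dim)
        h = x.transpose(1, 2)             # (B, dim, L) for Conv1d
        h = self.conv2(self.drop(self.act(self.conv1(h))))
        return x + h.transpose(1, 2)      # residual back to (B, L, dim)
</code></pre>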
<h3>Training Procedure</h3>
<ul>
<li>Train model with AdamW LR=2e-3 to 2e-4, BS=128, use s/n filter threshold=0.5</li>
<li>Finetune model with AdamW LR=2e-4 to 0, BS=256, use s/n filter threshold=1.0</li>
</ul>
<h3>Score for the single model</h3>
<ul>
<li>CV (k-fold): 0.12188</li>
<li>Public Score: 0.13948</li>
<li>Private Score: 0.14267</li>
</ul>
<h3>What didn't work for me</h3>
<ul>
<li>Increasing dimension or number of layers</li>
<li>UNet-like hierarchical architecture</li>
</ul> |
Stanford Ribonanza RNA Folding | 58th solution: simple Roformer model | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stanford Ribonanza RNA Folding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://www.kaggle.com/alejopaullier" target="_blank">@alejopaullier</a> yes, you need to create a new model from config, and in the config you need to adjust the vocab size. You can either create/use a corresponding tokenizer, or handle tokenization yourself. Here's an example: </p>
<pre><code>from transformers import DebertaConfig, DebertaModel

config = DebertaConfig()
config.vocab_size = 5          # adjust to the RNA token vocabulary (the exact value was lost from this post)
model = DebertaModel(config)
</code></pre>
<p>Thanks to the host and Kaggle for this amazing competition! </p>
<p>My goal was to try some simple experiments so that I can better appreciate reading the top solutions, so all-in-all I am pretty happy about the result. </p>
<p>Thank you for really great public notebooks that helped me get started quickly: </p>
<ul>
<li><a href="https://www.kaggle.com/code/iafoss/rna-starter-0-186-lb" target="_blank">RNA Starter</a> by <a href="https://www.kaggle.com/iafoss" target="_blank">@iafoss</a></li>
<li><a href="https://www.kaggle.com/code/barteksadlej123/rna-transformer-with-rotary-embedding" target="_blank">RNA Transformer with Rotary Embedding</a> by <a href="https://www.kaggle.com/barteksadlej123" target="_blank">@barteksadlej123</a> </li>
</ul>
<p>Also thanks a lot for super insightful discussions by the hosts, especially the generalization test shared by <a href="https://www.kaggle.com/shujun717" target="_blank">@shujun717</a>. </p>
<p>I used the fastai pipeline from the first notebook and the dataset/encoding (sequence+structure) from the second one. Models tested:</p>
<ul>
<li>Debertav2 - worked pretty well on cv/lb but didn't generalize well on the test visualization. I hoped the relative position embeddings would work well, but it didn't work out over longer distances. If anyone was able to overcome that, I'd really love to learn about it!</li>
<li>Longformer - sliding window attention worked much better on the test visualization, but didn't get good cv/lb scores. </li>
<li>Roformer - this model finally started looking well on the test visualization, but underperformed Deberta on cv/lb. I decided to go with it anyways because it seemed like it should generalize well to private lb. </li>
</ul>
<p>Training scheme: </p>
<ul>
<li>80 epochs with high lr (5e-4), one cycle schedule, and high regularization (dropout 0.3, eps 1e-7) on noisier data (signal_to_noise > 0.6, reads > 100)</li>
<li>10 epochs with lower lr (1e-5), one cycle schedule, removing dropout (0.0) and lower eps (1e-12) on clean data (SN_filter == 1)</li>
</ul>
<p>Final submission is a single Roformer model, blend of 4 folds. </p> |
Stanford Ribonanza RNA Folding | 5th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stanford Ribonanza RNA Folding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I would like to congratulate my teammates <a href="https://www.kaggle.com/ayaanjang" target="_blank">@ayaanjang</a> and <a href="https://www.kaggle.com/junseonglee11" target="_blank">@junseonglee11</a> on this hard earned (master triplet) win! We each had our own takes on the competition and they were combined for the final ensemble. Below we present the most important performance contributors; each of our transformer based models in the ensemble used a different combination of the following methods.</p>
<h2>Code</h2>
<ul>
<li>Roger: <a href="https://github.com/s-rog/StanfordRibonanza2023" target="_blank">https://github.com/s-rog/StanfordRibonanza2023</a></li>
<li>Ayaan: <a href="https://github.com/ehdgnsdl/2023-Stanford-Ribonanza-RNA-Folding" target="_blank">https://github.com/ehdgnsdl/2023-Stanford-Ribonanza-RNA-Folding</a></li>
<li>Junseong: <a href="https://github.com/JunSeongLee1102/2023-Kaggle-Stanford-Ribonanza-RNA-Folding" target="_blank">https://github.com/JunSeongLee1102/2023-Kaggle-Stanford-Ribonanza-RNA-Folding</a></li>
<li>Docker <a href="https://hub.docker.com/layers/junseonglee1102/ngc-custom/xformer-tmux/images/sha256-c98a8b37c2134b268056f5a77797b8f7e847938530d1fa603a4e1422153e1ada?context=repo" target="_blank">Image</a></li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3586013%2F6ececb933c804b2a3bcd0f5c1bba7845%2FRibonanza.png?generation=1702124640894983&alt=media"> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3586013%2F69cd9cdfc5a3f955c5d9c903a170f112%2Fimage.png?generation=1702223532569356&alt=media"><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3586013%2Fea9f3b82fec44453abb87485457db0fd%2Fjunseong_model_figure.png?generation=1702267969051338&alt=media"></p>
<h2>TLDR</h2>
<p>We started with <a href="https://www.kaggle.com/iafoss" target="_blank">@iafoss</a>'s baseline model and made the following changes:</p>
<ul>
<li>Utilize lower SN samples</li>
<li>Utilize Eterna BPP</li>
<li>Replace layer norm with RMS norm</li>
<li>Replace absolute positional embedding with relative positional bias</li>
<li>Replace encoder layer pre-norm layout with ResiDual norm layout</li>
<li>Remove bias from both QKV and FFN layers</li>
<li>Increase head size from 32 to 48</li>
<li>Add a single layer bidirectional GRU at the end of each encoder layer</li>
<li>Multi-stage pseudo label training</li>
<li>Misc: inverse square root LR schedule, 0.1 weight decay</li>
</ul>
<h2>Data Sampling</h2>
<p>I set a fixed number of sequences per epoch and used <code>WeightedRandomSampler</code> to draw samples without replacement weighted by SN. A bias term is added to the SN as a hyperparameter and is set on a schedule, increasing the amount of data used and regularization throughout training. The schedule hyperparameter can look something like:<br>
<code>{(i * 10): v / 10 for i, v in enumerate(range(-12, -7))}</code></p>
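<p>A minimal sketch of this sampling scheme (the epoch size and the clamp are my assumptions, not values from the solution):</p>
<pre><code>import torch
from torch.utils.data import WeightedRandomSampler

def make_sampler(sn: torch.Tensor, bias: float, n_seq_per_epoch: int = 20_000):
    # Weight each sequence by its signal-to-noise plus the scheduled bias term.
    weights = (sn + bias).clamp(min=1e-6)
    return WeightedRandomSampler(weights, num_samples=n_seq_per_epoch, replacement=False)

# loader = DataLoader(train_ds, batch_size=64, sampler=make_sampler(sn, bias=-1.2))
</code></pre>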
<h2>Data Augmentation</h2>
<p>We were able to utilize flip to an extent. While training with flip usually resulted in longer training and marginally worse CV, there is a boost to be had through flip TTA.<br>
Augmenting BPP with gaussian noise also improved CV and long range knot visibility.</p>
<h2>Norm</h2>
<p>I experimented with pre-norm, post-norm with <a href="https://arxiv.org/abs/2004.08249" target="_blank">Admin</a> and <a href="https://arxiv.org/abs/2304.14802" target="_blank">ResiDual</a>-norm. ResiDual gave the best CV as well as the best training stability. Changing from layer to RMS norm gave another slight boost.</p>
<h2>Attention Bias: Positional</h2>
<p>Relative positional bias is used in all 12 layers of the encoder during attention (all 12 layers use the same positional bias). Alibi and dynamic positional biases were also tested, but relative performed the best with our architecture. Rotary embedding was also worse than relative. Credits to <a href="https://www.kaggle.com/lucidrains" target="_blank">@lucidrains</a> for the <a href="https://github.com/lucidrains/x-transformers/blob/main/x_transformers/x_transformers.py#L252" target="_blank">implementation</a>.</p>
<h2>GRU</h2>
<p><a href="https://www.kaggle.com/junseonglee11" target="_blank">@junseonglee11</a> added a GRU at the end of each encoder layer which improved perf when we could not scale the attention layers further (the residual output from ResiDual does not interact with the GRU). I included this in my model and experimented with many layouts but it seems like simple is best. I also tried replacing the GRU with a 1D FusedMBConv block, which improved CV but worsened LB.</p>
<h2>Multi-head RNN and LSTM</h2>
<p>Similar to the multi-head attention structure, multiple RNN layers can replace a single RNN layer by halving their hidden dimension (increasing the number of layers and decreasing the hidden dimension offset each other). <a href="https://www.kaggle.com/junseonglee11" target="_blank">@junseonglee11</a> tested 2, 3, 4, and 8 heads instead of a single RNN layer. Among them, the 2-head RNN structure combining a GRU layer and an LSTM layer gave the best performance.</p>
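<p>A sketch of the 2-head variant described above (dimensions are illustrative; each head runs bidirectionally on half of the hidden width and the outputs are concatenated):</p>
<pre><code>import torch
import torch.nn as nn

class TwoHeadRNN(nn.Module):
    def __init__(self, dim=192):
        super().__init__()
        half = dim // 2
        self.gru = nn.GRU(dim, half // 2, batch_first=True, bidirectional=True)
        self.lstm = nn.LSTM(dim, half // 2, batch_first=True, bidirectional=True)

    def forward(self, x):                      # x: (B, L, dim)
        g, _ = self.gru(x)                     # (B, L, dim // 2)
        l, _ = self.lstm(x)                    # (B, L, dim // 2)
        return torch.cat([g, l], dim=-1)       # (B, L, dim)
</code></pre>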
<h2>BPP BMM Convolutional Block</h2>
<p><a href="https://www.kaggle.com/ayaanjang" target="_blank">@ayaanjang</a> used <a href="https://www.kaggle.com/code/nyanpn/6th-place-cnn-gcn" target="_blank">"1D conv + ResidualBPPAttention (using bmm)"</a> to integrate BPP, see the following code:</p>
<pre><code># Class and argument names below are reconstructed (they were lost from this post); the structure
# follows the linked "1D conv + ResidualBPPAttention" notebook: token mixing via bmm with the BPP
# matrix, wrapped in a residual connection.
class ResidualBPPAttention(nn.Module):
    def __init__(self, dim, kernel_size, dropout):
        super().__init__()
        self.conv1 = ConvBlock(dim, dim, kernel_size, dropout)
        self.conv2 = ConvBlock(dim, dim, kernel_size, dropout)
        self.relu = nn.ReLU()

    def forward(self, src, attn):
        # src: (B, dim, L) features, attn: (B, L, L) base-pairing probabilities
        h = self.conv2(self.conv1(torch.bmm(src, attn)))
        return self.relu(src + h)


class ConvBlock(nn.Module):
    def __init__(self, in_dim, out_dim, kernel_size, dropout):
        super().__init__()
        self.conv = nn.Conv1d(in_dim, out_dim, kernel_size, padding=kernel_size // 2)
        self.bn = nn.BatchNorm1d(out_dim)
        self.relu = nn.ReLU()
        self.dropout = nn.Dropout(dropout)

    def forward(self, src):
        return self.dropout(self.relu(self.bn(self.conv(src))))
</code></pre>
<p>This BPP attention block was added between the FFN and the GRU, also not affecting the residual output from ResiDual. The inclusion of BPP greatly reduced CV.</p>
<h2>Attention Bias: BPP</h2>
<p>I used a different method to add BPP. In the first 6 layers of the encoder, BPP is added to the existing positional/mask bias after a convolution (every layer uses the same BPP derived bias). With 6 heads, the single channel BPP is passed to a 1x1 convolution layer that outputs 6 channels. Reducing the number of layers BPP is used in improved CV as well as long range knot visibility (12 -> 6 layers). Reducing the number of BPP heads, adding Contra BPP, using 3x3 conv all yielded no improvements.</p>
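<p>Schematically, the single-channel BPP-to-bias mapping can be written as below (a sketch of the idea, not the exact module):</p>
<pre><code>import torch
import torch.nn as nn

n_heads = 6
bpp_to_bias = nn.Conv2d(1, n_heads, kernel_size=1)     # 1x1 conv: one BPP channel -> one bias per head

bpp = torch.rand(2, 1, 177, 177)                       # (B, 1, L, L) base-pairing probabilities
bias = bpp_to_bias(bpp)                                # (B, n_heads, L, L)
# attn_logits = q @ k.transpose(-2, -1) / d ** 0.5 + rel_pos_bias + bias   # added before softmax
</code></pre>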
<h2>Pseudo Labels: 2 Stage Pipeline</h2>
<p>To generate pseudo labels, a 5-fold ensemble was trained with flip (and inferenced with TTA), then filtered by the 0.75 stdev quantile. Filtered values have their reactivities set to NaN; credits to <a href="https://www.kaggle.com/ayaanjang" target="_blank">@ayaanjang</a> for this pipeline. This parquet is then used as the train set for a single pretrained model, which is then used to train 5 folds again on the original competition train set. This method from <a href="https://www.kaggle.com/junseonglee11" target="_blank">@junseonglee11</a> is quite effective and time efficient. The final stage took under 2 hours for each fold on a single 4090.</p>
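<p>A hedged sketch of the stdev-quantile filtering step (the exact aggregation over folds and TTA is not spelled out above, so this is only an illustration):</p>
<pre><code>import numpy as np

def filter_pseudo_labels(preds: np.ndarray, q: float = 0.75):
    # preds: (n_models, n_rows, L) predictions from the 5-fold ensemble with flip TTA.
    mean = preds.mean(axis=0)                      # pseudo label = ensemble mean
    std = preds.std(axis=0)                        # disagreement between ensemble members
    mean[std > np.quantile(std, q)] = np.nan       # filtered values get NaN reactivities
    return mean
</code></pre>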
<h2>Pseudo Labels: 3 Stage Pipeline</h2>
<p><a href="https://www.kaggle.com/ayaanjang" target="_blank">@ayaanjang</a> used a different pipeline as follows: train dataset only -> train + pseudo dataset -> train dataset only. The LRs for the stages are: 2e-3 -> 2e-4 -> 2e-5. The Pseudo labels were also filtered with the same method above, without filtering <a href="https://www.kaggle.com/ayaanjang" target="_blank">@ayaanjang</a> found there was no increase in performance. The 3rd stage finetuning serves to reduce overfitting.</p>
<h2>GRU Mystery</h2>
<p>Now a mystery… Here is my GRU code:</p>
<pre><code></code></pre>
<p>At first I added the transpose and flatten operations thinking that they would delegate each direction in the GRU to half of the (3) heads or, the other way around, mix them between heads. Upon further inspection I think it does neither… but it noticeably improves performance. If you have a satisfactory explanation please comment!</p>
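<p>The code block above did not survive in this post. For readers who want to experiment, one possible shape of the transpose-and-flatten pattern being described (a bidirectional GRU whose direction and feature axes get reordered before flattening) is sketched below; this is my reconstruction, not the author's code, which lives in the linked repository:</p>
<pre><code>import torch
import torch.nn as nn

dim = 192
gru = nn.GRU(dim, dim // 2, batch_first=True, bidirectional=True)

x = torch.randn(2, 177, dim)                  # (B, L, dim)
h, _ = gru(x)                                 # (B, L, dim): forward/backward halves concatenated
h = h.view(*h.shape[:2], 2, dim // 2)         # expose the direction axis
h = h.transpose(2, 3).flatten(2)              # reorder direction/feature axes, flatten back to (B, L, dim)
</code></pre>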
<h2>Thanks</h2>
<ul>
<li><a href="https://www.kaggle.com/ayaanjang" target="_blank">@ayaanjang</a> and <a href="https://www.kaggle.com/junseonglee11" target="_blank">@junseonglee11</a> again for their hard work</li>
<li><a href="https://www.kaggle.com/iafoss" target="_blank">@iafoss</a> for his excellent baseline, taught me a lot as this is my first transformer model</li>
<li><a href="https://www.kaggle.com/lucidrains" target="_blank">@lucidrains</a> for his various transformer implementations that I referred to excessively</li>
<li>The devs behind <a href="https://github.com/facebookresearch/xformers" target="_blank">xformers</a> and <a href="https://github.com/NVIDIA/apex" target="_blank">apex</a> for greatly speeding up training</li>
<li>Authors of the numerous papers that helped guide our methods</li>
<li>All kagglers who participated in discussions with us</li>
<li>Competition hosts for everything!</li>
</ul>
<p>I used cutlass in xformer's memory efficient attention. Not much training was needed in the final stage, around 20 epochs (each epoch being 20k seqs) as the model overfits quickly starting from the pretrained checkpoint.</p> |
Stanford Ribonanza RNA Folding | 60th Place Solution: transformer with a decoder layer, which accepts two types of information [with code] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stanford Ribonanza RNA Folding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>Context</h2>
<ul>
<li><p>Competition overview: <a href="https://www.kaggle.com/competitions/stanford-ribonanza-rna-folding/overview" target="_blank">https://www.kaggle.com/competitions/stanford-ribonanza-rna-folding/overview</a></p></li>
<li><p>Data: <a href="https://www.kaggle.com/competitions/stanford-ribonanza-rna-folding/data" target="_blank">https://www.kaggle.com/competitions/stanford-ribonanza-rna-folding/data</a></p></li>
</ul>
<h2>Summary</h2>
<p>The <a href="https://www.kaggle.com/code/iafoss/rna-starter-0-186-lb" target="_blank">notebook published by Iafoss</a> [1] was my starting point. I used transformer architecture in this competition. Here are the details of the generated submissions that might have contributed to the result:</p>
<ul>
<li>In addition to sequence itself, I used minimum free energy (MFE) RNA secondary structure in dot-parentheses notation. For some experiments, I also calculated base pair probability matrices (BPPs), however, the best scoring submission did not use them. Both MFE and BPPs were obtained via EternaFold package [2].</li>
<li>Position information is encoded via rotary embeddings [3] in the best-scoring submission. Some other experiments use both rotary and sinusoidal embeddings.</li>
<li>Sequences and MFE indices always have end-of-sequence (EOS) token appended at the beginning and at the end.</li>
<li>Sequences and MFE indices are padded from both sides so that the middles of all RNA molecules enter the model at roughly the same position.</li>
<li>One model makes two types of predictions: DMS_MaP and 2A3_MaP.</li>
</ul>
<h2>Scores</h2>
<p>The best model achieved 0.15867 MAE on private and 0.15565 MAE on public leaderboard. I will refer to this submission as "submission-27" for historical reasons.</p>
<p>"Submission-23" was the second submission I chose for evaluation, with 0.17211 private and 0.15363 public MAE score.</p>
<p>The submission with the best public score was not chosen for evaluation because the model does not generalize well, as per <a href="https://www.kaggle.com/competitions/stanford-ribonanza-rna-folding/discussion/444653" target="_blank">"How to check if your model generalizes to long sequences"</a> discussion.</p>
<h2>Models</h2>
<h3>"Submission-27"</h3>
<p>The model for "submission-27" consists of one decoder layer as the first layer and eight BERT-like encoder layers on top. </p>
<p>The decoder layer accepts sequence embeddings as "target" and MFE-structure embeddings as "memory" (information that would be coming from an encoder in a traditional encoder-decoder transformer). A mask that prevents the decoder from attending to padding tokens is applied. Other than masking the padding tokens, no other mask is applied, and the decoder is able to attend to all information it receives. A schematic presentation of how the information flows through the decoder is given in Figure 1.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F17319644%2Fd2ead7e811cb2333a432d2f4f9303b92%2Ffigure_submission_27_medium.png?generation=1702595040699637&alt=media" alt="Architecture of submission-27"></p>
<p><em>Figure 1. The flow of information through the decoder layer for "submission-27".</em></p>
<p>Rotary embeddings are used in every attention module inside the model.</p>
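<p>As a rough, standard-PyTorch approximation of that decoder step (the real model additionally uses rotary embeddings inside attention, so this is only a sketch of the data flow):</p>
<pre><code>import torch
import torch.nn as nn

d_model = 192
decoder = nn.TransformerDecoderLayer(d_model, nhead=6, batch_first=True)

seq_emb = torch.randn(2, 210, d_model)                  # embedded nucleotides (EOS tokens + padding)
mfe_emb = torch.randn(2, 210, d_model)                  # embedded dot-parentheses MFE structure
pad_mask = torch.zeros(2, 210, dtype=torch.bool)        # True where a position is padding

out = decoder(tgt=seq_emb, memory=mfe_emb,
              tgt_key_padding_mask=pad_mask,
              memory_key_padding_mask=pad_mask)         # (B, L, d_model), passed on to the encoder stack
</code></pre>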
<h3>"Submission-23"</h3>
<p>The model for "submission-23" consists of two decoder layers accepting sequence embeddings together with additional information and eight BERT-like encoder layers on top. </p>
<p>The first decoder layer receives embedded sequences (sinusoidal embeddings are used from the starter notebook) and performs self-attention in the standard manner. After the addition and normalization, another attention-related matrix multiplication is performed where BPPs are used as attention scores, followed by the position-wise feedforward layer.</p>
<p>The second decoder layer is analogous to the decoder depicted in Figure 1. However, instead of sequence embeddings it receives the output from the first decoder layer. </p>
<p>Thus, the first decoder layer uses BPPs and the second decoder layer uses MFE structure as additional information. More details on the architecture can be found in the code.</p>
<h2>Training</h2>
<p>The training procedure was adapted from <a href="https://www.kaggle.com/code/iafoss/rna-starter-0-186-lb" target="_blank">the starter notebook</a> [1]. </p>
<p>The training was performed for 45 epochs on "Quick_Start" data (after some processing) for both "submission-27" and "submission-23". Models were saved after each epoch. The last model was picked for <br>
"submission-23" and the model after 28 epochs for "submission-27". </p>
<p>I used sequences that were not in "Quick_Start" data as validation data. These sequences were filtered out based on error values. </p>
<h2>Notes on the code</h2>
<p>Here is <a href="https://github.com/ilyanaanderson/stanford-ribonanza-rna-folding-submission" target="_blank">the link to the repository</a>.</p>
<p>This repository is set up to reproduce two of my solutions: "submission-27" and "submission-23". It is possible to run these scripts on a system with one GPU (10 GB of memory), 20 CPUs, and 128 GB RAM. <br>
Without BPPs (reproducing "submission-27" only), the RAM requirement is 64 GB.<br>
More specific details are in "note_on_requirements.txt".</p>
<p>This repository is not the original code I used for my experiments. The original code includes all earlier experiments and is more difficult to follow. The code in the current repository is organized in a step<br>
by step fashion. Some steps (step 3 and 4) need to be re-run twice: for training and for validation data separately. </p>
<p><em>Step 1:</em> The script processes training data and saves it as a parquet file. However, after this step the data is not ready to use. This is only an "intermediate" step in data processing. In this step, I create one dataframe where each row carries information about both types of reactivity. Sequences are not processed in this step.</p>
<p><em>Step 2:</em> Analogous step for validation data.</p>
<p><em>Step 3:</em> Sequences are processed for training data and for validation data. MODE variable ("train" or "validation") and BPP (boolean True or False) need to be set at the beginning of the script. It is possible to extract BPP information during this step. However, extracting BPP information will require a lot of memory in <br>
the subsequent steps (it can run only on machines with 128 GB RAM). Also, if BPP is used in training (for "submission-23"), then the inference is set up to calculate BBPs on the fly, which makes the inference script run 10 times longer for "submission-23" as compared to "submission-27". As the result of this step, small (partial) <br>
dataframes are written to "partial" files. There should be 17 files for "Quick_Start" training data and 4 files for validation.</p>
<p><em>Step 4:</em> The "partial" files from the previous step are collected into one file. I set this up in this way due to historical reasons: I was struggling to save BPP information for later use in training, so I solved the problem by processing smaller chunks of the dataframe at a time (in previous step). I wanted to write these files into storage just in case (in case my script crashes). Since they are written as "partial" files, I collect them into one<br>
file for convenience. </p>
<p><em>Step 5:</em> Processing test data will require a lot of time if the dataframe is processed at once. An analogous script <br>
took about 3 hours or more on my system. It is possible to re-write the code to process it in chunks, similar to training and validation data. However, I did not re-write it. Also, BPPs are not saved for test data because I was running out of time during the competition: instead, if BPPs are used in training, they will have to be calculated during inference. Saving BPP information into storage for test sequences could be beneficial if a lot of experiments use BPPs. </p>
<p><em>Step 6:</em> Training. SUBMISSION_NUMBER (23 or 27) needs to be specified together with BPP_YES (whether the training will require BPPs). BPPs can be used for "submission-23" if BPP information was saved to storage in previous steps.</p>
<p><em>Step 7:</em> Inference. Historically, I was writing smaller dataframes into storage. This script runs less than an hour for "submission-27" on my system and 8-10 hours for "submission-23" (because of calculating BPPs).</p>
<p><em>Step 8:</em> The smaller dataframes are collected into one CSV ready for submission.</p>
<p><em>Step 9:</em> This script generates reactivity vs. sequence plot for mutate and map sequences, according to <a href="https://www.kaggle.com/competitions/stanford-ribonanza-rna-folding/discussion/444653" target="_blank">"How to check if your model generalizes to long sequences" discussion</a>.</p>
<p><em>Step 10:</em> This script prepares data for the second plot (see next step).</p>
<p><em>Step 11:</em> This script generates the secondary structure plot described in <a href="https://www.kaggle.com/competitions/stanford-ribonanza-rna-folding/discussion/444653" target="_blank">"How to check if your model generalizes to long sequences" discussion</a>.</p>
<p>The naming of classes in this repository relies on numbers (e.g., DatasetEight is used for "submission-27") because I tried many variations of a class created for a specific purpose. There are additional datasets in datasets.py that could be used for experiments outside what was done for "submission-27" and "submission-23". DatasetTwelve is similar to DatasetEight, but it randomly perturbs reactivities according to their error. Note that columns "error_a" and "error_d" have to be preserved in the dataframe that is passed into DatasetTwelve (in the training script for "submission-27" these columns are dropped). DatasetEleven does analogous perturbation and can be used in place of DatasetTen (DatasetTen is used in "submission-23"). </p>
<h2>Other approaches that were tried</h2>
<p>I tried training on more data: "Quick_Start" data together with the data outside of that set, filtered by its error: the rows where all reactivities had error greater than a certain number were dropped.</p>
<p>I also tried reactivity perturbation, following the idea described in the <a href="https://www.kaggle.com/competitions/stanford-covid-vaccine/discussion/189620" target="_blank">winning solution</a> to the Kaggle OpenVaccine challenge [4]. </p>
<p>The additional data and the reactivity perturbation might have improved the performance for my submissions, but these two modifications to training need to be tested separately and on more experiments to draw any conclusions.</p>
<p>There were also experiments with a more traditional encoder-decoder architecture, where the encoder was pretrained to predict reactivities from the sequence-combined-with-mfe-structure information, then these <br>
weights were inherited by the encoder-decoder. In these experiments, the same information was fed to the encoder and to the decoder. I observed an improved score on the public leaderboard; however, these models did not generalize well to longer sequences. I was not using rotary embeddings in these series of experiments, which might have been the reason why they did not perform well.</p>
<h2>Acknowledgements</h2>
<p>The <a href="https://www.kaggle.com/code/iafoss/rna-starter-0-186-lb" target="_blank">starter notebook</a> published by <a href="https://www.kaggle.com/Iafoss" target="_blank">@Iafoss</a> [1] was crucial for me. Thank you so much for creating it.</p>
<p>Additionally, when including rotary embeddings, I relied on the code published by Arjun Sarkar in an article <a href="https://towardsdatascience.com/build-your-own-transformer-from-scratch-using-pytorch-84c850470dcb" target="_blank">"Build your own Transformer from scratch using Pytorch"</a> [5] as a starting point for further customization.</p>
<h2>References</h2>
<ol>
<li>Iafoss, “Rna starter [0.186 lb],” 2023. [Online]. Available: <a href="https://www.kaggle.com/code/iafoss/rna-starter-0-186-lb" target="_blank">https://www.kaggle.com/code/iafoss/rna-starter-0-186-lb</a></li>
<li>H. K. Wayment-Steele, W. Kladwang, A. I. Strom, J. Lee, A. Treuille, A. Becka, R. Das, and E. Participants, “Rna<br>
secondary structure packages evaluated and improved by high-throughput experiments,” <em>Nature Methods</em>, vol. 19, no. 10,<br>
pp. 1234–1242, Oct 2022. [Online]. Available: <a href="https://doi.org/10.1038/s41592-022-01605-0" target="_blank">https://doi.org/10.1038/s41592-022-01605-0</a></li>
<li>J. Su, Y. Lu, S. Pan, B. Wen, and Y. Liu, “Roformer: Enhanced transformer with rotary position embedding,” 2021.</li>
<li>J. Gao, “1st place solution,” 2020. [Online]. Available: <a href="https://www.kaggle.com/competitions/stanford-covid-vaccine/discussion/189620" target="_blank">https://www.kaggle.com/competitions/stanford-covid-vaccine/discussion/189620</a></li>
<li>A. Sarkar, “Build your own transformer from scratch using pytorch,” 2023. [Online].<br>
Available: <a href="https://towardsdatascience.com/build-your-own-transformer-from-scratch-using-pytorch-84c850470dcb" target="_blank">https://towardsdatascience.com/build-your-own-transformer-from-scratch-using-pytorch-84c850470dcb</a></li>
</ol> |
Stanford Ribonanza RNA Folding | 7th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stanford Ribonanza RNA Folding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I didn't do a careful analysis of that. I just tried to make make sure that I do not have any unexpected things in inference when batches may be composed of different length sequences. </p>
<p>Thanks. During the competition we tried some ideas from papers, like 2d Ushape networks, etc., but they did not work well. Our matrix mixer approach is motivated by <a href="https://www.kaggle.com/competitions/stanford-covid-vaccine/discussion/189564" target="_blank">one of OpenVacine solutions</a> as I understand, but other models were developed based on our work on this challenge.</p>
<p>No, but it is pretty simple. Before each convolution in the network you must set all padded tokens to 0, so the output of the network is exactly the same regardless the padding you add. Otherwise you may start accumulating some states in the padding tokens, which is not the thing you really want to have. You may try to skip this masking by just having all sequences of the same length and zero padding, but at inference whenever you add padding you get an undefined behaviour.</p>
<h2>Summary</h2>
<ul>
<li>Transformer based solution</li>
<li>Masked Conv1D instead of MLP</li>
<li>bpp injection into attention matrix, bpp-based token mixing, matrix mixing, and dual stream setup (attention boosting) for incorporating bpp</li>
</ul>
<h2>Introduction</h2>
<p>Our team would like to thank the organizers and Kaggle for making this competition possible. Also, I want to express my gratitude to my outstanding teammates <a href="https://www.kaggle.com/drhabib" target="_blank">@drhabib</a> and <a href="https://www.kaggle.com/martynoveduard" target="_blank">@martynoveduard</a> for their incredible contribution toward our final result.</p>
<h2>Details</h2>
<h3>Data</h3>
<p><strong>BPP</strong>: We generated additional bpp using vienna_2 (we also used its SS output), contrafold_2, rnaformerv1, and rnafm. However, we saw a nearly negligible improvement in comparison to using only the bpp provided by the organizers.<br>
<a href="https://www.kaggle.com/competitions/stanford-ribonanza-rna-folding/discussion/454397" target="_blank"><strong>EX data</strong></a>: We tried fine-tuning on this data since it is the only source that provides GT for sequence ends. We got a 5-10 bps CV improvement from this procedure (Iafoss/DrHB) and got the best visualization for long-range dependencies. However, at the sequence ends the predictions tend to get values close to zero for unknown reasons. On the private LB, EX data did not give any boost.<br>
<strong>CV split</strong>: slime used a random 4-fold split. DrHB and Iafoss used a split based on sequence similarity to avoid any possible leaks, and we also excluded from the train set any train data overlapping with the test set. The size of the val set is ~20k samples that passed the SN criterion.</p>
<h3>Model</h3>
<p>At the beginning of the competition we quickly realized that bpp is very helpful for model performance. Initially, we tried to use it as an auxiliary output computed as the mean attention matrix, but convergence was quite slow (the typical run was 200-250 epochs). Then we build setups that also take bpp as an input. It drastically accelerated convergence, so that it took only several tens of epochs, while giving similar results.<br>
We considered several setups:<br>
(1) <strong>bpp injection</strong> (Iafoss): add bpp (in the logit form) directly to the attention matrix (in case of multiple bpp, each bpp is added to the particular head). We gradually attenuate the injection towards zero in the last layers to give the model the freedom to develop interactions missed in bpp. The critical part of the model was replacing MLP with a masked 1d x5 convolution that performs the mixing of neighboring tokens. The transformer has 384 width, a head size of 64, 24 depth, a droppath of 0.3, dropout of 0.1. We use rotary enc to define the relative position of tokens. Single bpp and 6 bpp setups were considered. The best single model got 0.14375 private and 0.14013 public LB, and 0.14292 and 0.13973 when using pseudo labels.<br>
(2) <strong>bpp mixing</strong> (DrHB): instead of injection of bpp into the attention matrix, we tried to add matrix multiplication modules performing mixing tokens based on bpp. The total structure of the model: 3 x [bpp eterna -> conv ->transformer -> bpp rest mean ->conv ->transformer ->ss- as adj -> graph transformer ].<br>
(3) <strong>matrix mixing</strong> (slime): trying to follow the previous competition solutions we added a learnable attention bias produced by a convolutional stream applied to bpp and ALIBI-like positional encoding. We used 2 bpp sources. The matrix mixer is represented by several conv layers with SE modules. The transformer consists of 12 matrix mixing + 12 regular transformer blocks with a width of 384. The best single model is 0.14509 private and 0.14066 public LB. This model used a different training pipeline and cannot be directly compared to others.<br>
(4) <strong>dual stream</strong> (Iafoss). The drawback of matrix mixing is that the attention bias is updated independently from the transformer based on the input bpp. How about attention boosting? We perform a simple projection of the attention state followed by scaled tanh nonlinearity to limit accumulated values and stabilize training. This value is added to the Attention matrix and then the result is fed to the next transformer layer. This modification has improved the model performance giving 0.14296 at private and 0.13697 public LB as a single model. Unfortunately, we discovered this setup only a few days before the end of the competition, and didn't have a chance to run multiple trainings and PL setup, which we expect to give a further ~10 bps improvement of LB.</p>
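<p>As an illustration of the bpp-injection idea in (1): the attenuation scheme below is a guess at one simple way to fade the injected logits towards zero over depth; the team's exact schedule is not given here.</p>
<pre><code>import torch

def inject_bpp(attn_logits, bpp, layer_idx, n_layers, eps=1e-6):
    # attn_logits: (B, heads, L, L); bpp: (B, L, L) base-pairing probabilities.
    bpp_logit = torch.log(bpp.clamp(min=eps) / (1 - bpp).clamp(min=eps))   # bpp in logit form
    scale = max(0.0, 1.0 - layer_idx / (n_layers - 1))                     # attenuate in the last layers
    return attn_logits + scale * bpp_logit.unsqueeze(1)                    # broadcast over heads
</code></pre>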
<p>The models are schematically depicted in the image below:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1212661%2Fd537664beced5e0923d38a5cd8a612e3%2Fplots.png?generation=1702136589769473&alt=media" alt=""></p>
<h3>Training</h3>
<p><strong>(Iafoss/DrHB)</strong> The loss is weighted based on the error, while we do not downselect data based on the SN criterion: <code>w = 1/sqrt(1/6 + err.clip(100))</code>. We use AdamW, cosine annealing with warmup, lr=5e-4, wd=0.05, bs=16 (a small bs was working better for a reason we could not identify). We used flip augmentation for both train and TTA, but the key thing here is using bpp computed for the correct order of nucleotides. It improves CV quite a bit, giving a 10-15 bps boost. We also used a bin-based auxiliary loss which gave a slight improvement.</p>
<p>We split the training portion of the data into 4 folds, train 4 models (40-48 epochs), fine-tune on external data + train data (5-6 epochs with x10 EX data oversampling), and fine-tune on noise-free samples for 12 epochs. Then we corrected the provided train data, weighting GT and PL based on the inverse error (assuming 0.15 error for PL). The provided train data has a large level of noise, slowing down the convergence. Test data and the sequence ends were labeled solely based on PL. Then we train a PL model on the whole train set (excluding val samples) + test data, fine-tune on EX data, and fine-tune on noise-free samples. This procedure improved CV by 10 bps, and the weighted average of all models generated in the procedure gives a further improvement.</p>
<p><strong>(slime)</strong> <em>Pre-training</em>: First we performed MLM pre-training on a whole dataset for 5 epochs (40% of input tokens are replaced with mask tokens). We used cosine decay to zero with one epoch warmup and AdamW optimizer with base_lr= 5e-4 and wd=0.05. The model was trained to predict missing nucleotides with cross-entropy loss</p>
<p><em>Fine-tuning</em>: During the fine-tuning stage we initialized our model with weights from the MLM pre-training, which gave a noticeable improvement to the final result [10 bps CV]. We fine-tuned our models for 50 epochs with batch_size=16 on samples where either SN(DMS) == 1 or SN(2A3) == 1, and we masked the loss depending on the SN of the given experiment [10 bps boost compared to filtering the train dataset with SN(DMS) == 1 and SN(2A3) == 1]. In addition, we weighted the samples based on their reactivity error provided by the ground-truth data as <code>loss *= torch.log(1.1 + snr) / 2</code>. Similar to pre-training, we used the AdamW optimizer [lr=5e-4, wd=0.05] and a cosine scheduler with lr decay to zero. Since masking was not considered in the convolutions of the matrix mixing model, it was trained with length-matching batch sampling (samples of exactly the same length).</p>
<h3>Best singe model end ensemble</h3>
<p>Best single model (dual stream model): 0.14292 at private and 0.13711 public LB, which would make the top 10 by itself. This result would be further improved by 10 bps to ~0.1419 if we had had time to run our full PL pipeline.<br>
Our final submission is a combination of ~20 models that got 0.14189 at private and 0.13604 at public LB.</p>
<h3>Things didn't work</h3>
<ul>
<li>EX data was helpful at CV and public LB (5-10 bps boost), but not helpful at private LB</li>
<li>Additional bpps gave only a negligible improvement in comparison to the use of single bpp provided by organizers</li>
<li>MLM on 30M external RNA sequence dataset</li>
<li>EMA, AWP, Floyd-warshall distance matrices </li>
<li>2D Ushape models</li>
</ul>
<p>I added these details.</p>
<p>It is an abbreviation for base pairing probability, i.e. the probability that nucleotides interact in the secondary structure as I understand. It is provided by organizers for eternafold package, and we computed it with several more packages.</p>
<p>Thanks. I just used PowerPoint.</p>
<p>BCE </p>
<p>Yes, as far as I remember. Use bpp was the key thing in the competition, especially for generalization at the private LB. It appeared, that my very early models got 0.18->0.15 private LB improvement by using bpp auxiliary loss. As the next step we realized that adding bpp as an input, like injection into the attention matrix, for example, increases the convergence speed by ~x10 giving comparable performance to aux setup, and we just focused on that. Not sure, though, if the performance of our latest setups could be reproduced with bpp aux.</p> |
Stanford Ribonanza RNA Folding | 81st Place Solution for the Stanford Ribonanza RNA Folding Competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stanford Ribonanza RNA Folding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to the organizers and Kaggle staff for holding the competition, and congratulations to the winners!</p>
<h1>Context</h1>
<ul>
<li>Business context: <a href="https://www.kaggle.com/competitions/stanford-ribonanza-rna-folding/overview" target="_blank">https://www.kaggle.com/competitions/stanford-ribonanza-rna-folding/overview</a></li>
<li>Data context: <a href="https://www.kaggle.com/competitions/stanford-ribonanza-rna-folding/data" target="_blank">https://www.kaggle.com/competitions/stanford-ribonanza-rna-folding/data</a></li>
</ul>
<h1>Overview of the Approach</h1>
<p>I referenced the model used in the <a href="https://www.kaggle.com/competitions/asl-fingerspelling/discussion/434588" target="_blank">ASL Fingerspelling Competition 2nd place solution</a>.<br>
I employed a model consisting of Conv1D (x3) + Transformer.</p>
<h1>Details of the Submission</h1>
<h2>Model</h2>
<p>I initially used a model with 12 Transformer layers, based on the <a href="https://www.kaggle.com/code/iafoss/rna-starter-0-186-lb" target="_blank">excellent baseline provided by Iafoss</a>.<br>
However, due to stagnation in LB performance, I investigated other models. <br>
Influenced by the model used by the runner-up in a ASL Fingerspelling Competition, which incorporated Conv1D,<br>
I saw improved performance and thus switched to the following structure:</p>
<p>pseudo model code:</p>
<pre><code># Pseudo-code of the block structure; the keyword-argument names were lost from this post,
# so the names below (dim, kernel_size, etc.) are reconstructions of the intent.
for _ in range(5):
    for _ in range(3):
        x = Conv1DBlock(dim=192, kernel_size=17, expand_ratio=4, drop_rate=0.2)(x)
    x = TransformerBlock(dim=192, num_heads=2, expand_ratio=4, attn_dropout=0.2, drop_rate=0.2)(x)
</code></pre>
<h2>Dataset</h2>
<p>Modifications applied to <code>train_data.csv</code> included:</p>
<ul>
<li>Filtering for reads >= 100.</li>
<li>Adjusting for large errors in reactivity and reactivity_error by setting them to NaN, excluding them from loss calculations.
This involved computing signal-to-noise for each reactivity value and replacing reactivity with NaN under the following conditions:<ul>
<li>reactivity_error > 1.5</li>
<li>reactivity / reactivity_error > 1.5</li></ul></li>
</ul>
<p>This approach was inspired by the <a href="https://www.kaggle.com/competitions/stanford-covid-vaccine/discussion/189620" target="_blank">Stanford COVID Vaccine Competition winner's solution</a></p>
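<p>A minimal sketch of that masking rule, assuming the standard competition column names (<code>reactivity_0001…</code> and <code>reactivity_error_0001…</code>):</p>
<pre><code>import numpy as np
import pandas as pd

def mask_noisy_reactivity(df: pd.DataFrame) -> pd.DataFrame:
    react_cols = [c for c in df.columns if c.startswith("reactivity_0")]
    err_cols = [c.replace("reactivity_", "reactivity_error_") for c in react_cols]
    react = df[react_cols].to_numpy(dtype=float)
    err = df[err_cols].to_numpy(dtype=float)
    # Apply the two conditions listed above; masked positions drop out of the loss.
    bad = (err > 1.5) | (react / np.where(err == 0, np.nan, err) > 1.5)
    react[bad] = np.nan
    df[react_cols] = react
    return df
</code></pre>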
<h2>My Mistake</h2>
<p>For the final submission, I used an ensemble approach different from my main score.<br>
While it improved my Public LB score to 0.14381, it detrimentally lowered my Private LB score to 0.18093.</p>
<p>The mistake was including a model with an extremely poor Private LB score of 0.32353 in the ensemble.<br>
(comparable to the performance of sample_submission.csv at Private 0.33290)</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F8577629%2F2ba2f5941a271b75c9111a6675669677%2F.PNG?generation=1702044697974444&alt=media" alt=""></p>
<p>This flawed model was created by merely changing the seed during training, and it had almost no predictive power for the lengths that appear only in the private dataset (207, 357, 457). <br>
The reason is unclear, but I suspect overfitting to a length of 177. Attempting to address potential leakage in the public LB by setting the MaP of leaked sequences to 0 made no difference.</p>
<p>I regret not validating with test data lengths like 206, which could have potentially averted this issue.</p>
<h2>Source</h2>
<ul>
<li><a href="https://www.kaggle.com/competitions/asl-fingerspelling/discussion/434588" target="_blank">https://www.kaggle.com/competitions/asl-fingerspelling/discussion/434588</a></li>
<li><a href="https://www.kaggle.com/competitions/stanford-covid-vaccine/discussion/189620" target="_blank">https://www.kaggle.com/competitions/stanford-covid-vaccine/discussion/189620</a></li>
<li><a href="https://www.kaggle.com/code/iafoss/rna-starter-0-186-lb" target="_blank">https://www.kaggle.com/code/iafoss/rna-starter-0-186-lb</a></li>
</ul> |
Stanford Ribonanza RNA Folding | 8th place solution (KF Part) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stanford Ribonanza RNA Folding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank the competition hosts for their exceptional organization and the challenging yet enriching environment they created. Their dedication to fostering a space for learning and innovation is deeply appreciated. I also express my profound gratitude to my teammates, <a href="https://www.kaggle.com/onodera" target="_blank">@onodera</a> and <a href="https://www.kaggle.com/christofhenkel" target="_blank">@christofhenkel</a>. Their collaboration, expertise, and unwavering commitment throughout this project were invaluable.</p>
<p>Here I will try to summarize some of the main points of our solution.</p>
<h1>Solutions from our teammates</h1>
<ul>
<li><a href="https://www.kaggle.com/competitions/stanford-ribonanza-rna-folding/discussion/460956" target="_blank">onodera part</a></li>
</ul>
<h1>CV Strategy: GroupKFold</h1>
<p>In the field of RNA secondary structure estimation, various studies including <a href="https://academic.oup.com/bib/article/24/4/bbad186/7179751?login=false" target="_blank">Sato et al., 2023,</a> have highlighted the inappropriateness of using random splits for evaluation. This stems from the concept of 'Families' in RNA experimental data - groups of RNA molecules sharing specific functions or structures, resulting in high structural similarity within the same family. For instance, <a href="https://academic.oup.com/nar/article/50/3/e14/6430845" target="_blank">Fu et al., 2022</a> noted that the <a href="https://academic.oup.com/view-large/figure/333767038/gkab1074fig3.jpg" target="_blank">E2EFold model was overestimated due to this issue</a>.</p>
<p>To address this, we adopted clustering based on RNA sequence edit distances, using cluster IDs for GroupKFold. Additionally, we aggregated all the limited samples with seqlen=206 into fold=0. This strategy allowed for continuous evaluation of the model's performance on longer sequences.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2311404%2Fb39fa96937be0505cbe1b8480910784e%2Fsplit.png?generation=1702029388253122&alt=media" alt=""></p>
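<p>A rough sketch of how such a split could be set up with scikit-learn, assuming a <code>cluster_id</code> column obtained from the edit-distance clustering and a <code>seq_len</code> column (both column names are illustrative):</p>
<pre><code>from sklearn.model_selection import GroupKFold

def make_folds(df, n_splits=5):
    df = df.copy()
    df["fold"] = -1
    long_mask = df["seq_len"] == 206

    # GroupKFold on the edit-distance cluster IDs for the shorter sequences.
    gkf = GroupKFold(n_splits=n_splits)
    short = df[~long_mask]
    for fold, (_, val_idx) in enumerate(gkf.split(short, groups=short["cluster_id"])):
        df.loc[short.index[val_idx], "fold"] = fold

    # Force all (rare) length-206 samples into fold 0 so every run
    # evaluates the model on long sequences.
    df.loc[long_mask, "fold"] = 0
    return df
</code></pre>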
<h1>Datasets</h1>
<p>As part of my models, I experimented with alternating training using both the RMDB dataset and the competition dataset. This approach was adopted with the anticipation that it would enhance the model's ability to handle longer RNA sequences, a critical aspect in RNA secondary structure prediction.</p>
<h1>Features</h1>
<p>We processed input sequences using the 'arnie' package, mainly utilizing eternafold, rnasoft, and rnastructure. The model inputs included embedding layers for structure and loop_type, and a Graph Neural Network (GNN) adjacency matrix for bpp (base pair probability).</p>
<p>However, bpp alone, representing the probability of forming a pair, requires many layers when combined with CNNs to view the entire graph. As an alternative to using Transformers for a global view, we utilized “structure” (renamed to 'chunk' due to naming conflicts) and “segment”, as defined in <a href="https://academic.oup.com/nar/article/46/11/5381/4994207" target="_blank">bpRNA</a> [Danaee et al., 2018].</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2311404%2Fd2a22732d565f34536b3ad5c9af99220%2Fgraph_features.png?generation=1702027001721359&alt=media" alt=""></p>
<h1>Models</h1>
<p>My architecture is inspired by <a href="https://academic.oup.com/bioinformatics/article/39/8/btad457/7230784?login=false" target="_blank">LegNet</a> from <a href="https://www.kaggle.com/dmitrypenzar1996" target="_blank">@dmitrypenzar1996</a> and <a href="https://www.kaggle.com/competitions/stanford-covid-vaccine/discussion/189241" target="_blank">OpenVaccine's 6th place solution</a> [nyanp]. It consists solely of CNN and GNN layers. The adjacency matrices for bpp, structure, chunk, and segment significantly differ, so I prepared independent CNN+GNN blocks for each type. Group Convolution and einsum enabled this without for loops.</p>
<p>Following <a href="https://academic.oup.com/bioinformatics/article/39/8/btad457/7230784?login=false" target="_blank">LegNet</a>, prediction head outputs a 100d vector instead of a 1d scalar, with weighted summation over bin values for the final output. This approach stabilized learning by controlling the output range in regression tasks.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2311404%2Feed37f263a4a1321dbd91ccaf915d66c%2Fmodel.png?generation=1702027231710653&alt=media" alt=""></p>
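<p>A minimal sketch of the LegNet-style prediction head described above: the model outputs a distribution over 100 bins and the final reactivity is its expectation (the bin range [0, 1] is an assumption):</p>
<pre><code>import torch
import torch.nn as nn

class BinnedRegressionHead(nn.Module):
    """Predicts a distribution over n_bins reactivity bins and returns its
    expectation, which keeps the regression output in a bounded range."""

    def __init__(self, hidden_dim, n_bins=100, lo=0.0, hi=1.0):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, n_bins)
        # Bin centers are an assumption; the write-up does not give the range.
        self.register_buffer("bin_values", torch.linspace(lo, hi, n_bins))

    def forward(self, x):                # x: (batch, seq_len, hidden_dim)
        probs = self.proj(x).softmax(dim=-1)
        return (probs * self.bin_values).sum(dim=-1)   # (batch, seq_len)
</code></pre>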
<h1>Loss</h1>
<p>We opted for MAE + MSE weighted by signal-to-noise ratio as our optimization function. MSE provides gradients similar to MAE in the early stages of training but approaches zero later, aiding in convergence.</p>
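<p>A sketch of such a loss; the exact per-sequence weighting and the MAE/MSE balance are not specified in the write-up, so the clamp and the equal weighting below are only illustrative:</p>
<pre><code>import torch

def snr_weighted_mae_mse(pred, target, snr):
    """pred/target: (batch, seq_len); snr: (batch,) signal-to-noise ratios."""
    mask = ~torch.isnan(target)
    w = snr.unsqueeze(-1).expand_as(target)[mask].clamp(0.1, 10.0)  # illustrative clamp
    diff = pred[mask] - target[mask]
    mae = (w * diff.abs()).mean()
    mse = (w * diff ** 2).mean()
    # MSE contributes MAE-like gradients early in training and vanishes near convergence.
    return mae + mse
</code></pre>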
<h1>Pseudo labels</h1>
<p>We conducted pseudo label training using predictions on the test set created collaboratively with teammates <a href="https://www.kaggle.com/onodera" target="_blank">@onodera</a> and <a href="https://www.kaggle.com/christofhenkel" target="_blank">@christofhenkel</a>.</p>
<h1>Result</h1>
<table>
<thead>
<tr>
<th>Name</th>
<th>CV</th>
<th>CV (seqlen=206)</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>single best (scratch training)</td>
<td>0.1336</td>
<td>0.1133</td>
<td>0.14012</td>
<td>0.14299</td>
</tr>
<tr>
<td>single best (+pseudo label 1st)</td>
<td>0.1313</td>
<td>0.1152</td>
<td>0.13828</td>
<td>0.14222</td>
</tr>
<tr>
<td>single best (+pseudo label 2nd)</td>
<td>0.1306</td>
<td>0.1221</td>
<td>0.13739</td>
<td>0.14186</td>
</tr>
<tr>
<td>blending w/ all models</td>
<td>0.127645</td>
<td>0.109341</td>
<td>0.13626</td>
<td>0.14263</td>
</tr>
</tbody>
</table>
<h1>References</h1>
<ul>
<li>Fu, Laiyi, et al. "<a href="https://academic.oup.com/nar/article/50/3/e14/6430845" target="_blank">UFold: fast and accurate RNA secondary structure prediction with deep learning.</a>" Nucleic acids research 50.3 (2022): e14-e14.</li>
<li>Sato, Kengo, and Michiaki Hamada. "<a href="https://academic.oup.com/bib/article/24/4/bbad186/7179751?login=false" target="_blank">Recent trends in RNA informatics: a review of machine learning and deep learning for RNA secondary structure prediction and RNA drug discovery.</a>" Briefings in Bioinformatics (2023): bbad186.</li>
<li>Danaee, Padideh, et al. "<a href="https://academic.oup.com/nar/article/46/11/5381/4994207" target="_blank">bpRNA: large-scale automated annotation and analysis of RNA secondary structure.</a>" <em>Nucleic acids research</em> 46.11 (2018): 5381-5394.</li>
<li>Penzar, Dmitry, et al. "<a href="https://academic.oup.com/bioinformatics/article/39/8/btad457/7230784?login=false" target="_blank">LegNet: a best-in-class deep learning model for short DNA regulatory regions.</a>" <em>Bioinformatics</em> 39.8 (2023): btad457</li>
</ul>
<p>Thanks for your comment.<br>
My code is dirty so I don't have a plan to publish yet, but the main differences between my approach and LegNet are the presence of GNN layers and the objective function.<br>
GNN layers based on secondary structures mentioned above enable the model to attend to long-range contacts.</p> |
Stanford Ribonanza RNA Folding | Approximately 22nd Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stanford Ribonanza RNA Folding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Sorry for this <br>
yes, I didn't use BPP at all</p>
<p><strong>"Approximately" in the Post Title: Lessons from a Near-Miss Blend and the Power of a Top Model</strong></p>
<p>In this post, I dive into an intriguing aspect of our team's journey (only my part) in the recent Kaggle competition. Our final submission was a blend that, unfortunately, didn't perform as expected. If we had solely relied on our best model, we would have secured the 22nd place. This post focuses on that model - what it got right, where it faltered, and how it ultimately impacted our team's standing.</p>
<p>Despite the setbacks, I am grateful to the competition organizers and my teammate for an enriching experience. This competition was not just about the rankings but also about the invaluable learning and the chance to delve deeper into the subject in a competitive format. Here, I'll share insights and takeaways from our journey, hoping they can be useful for fellow Kagglers.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F586539%2F8b21536b8858508549d6e8db4642dd13%2FScreenshot%202023-12-11%20at%2016.48.46.png?generation=1702309752894013&alt=media" alt=""></p>
<p><strong>Exploring Model Backbone: From Scratch to Pre-trained Efficiency</strong></p>
<p>In the beginning, I contemplated building a model from scratch. The idea was to create a lighter model by significantly reducing the vocabulary size, thus downsizing the model itself. I started with a model based on BERT and DeBERTa, but training was painfully slow.</p>
<p>I then experimented with pre-training on the task of subsequence masking. Although this approach slightly improved the model's initial performance, the gains were minimal, and the training remained sluggish. It's worth noting that my setup included only a 1080TI GPU, which restricted me to smaller batch sizes, contributing to the slower training speed.</p>
<p>Eventually, I shifted my strategy to leveraging a pre-trained, smaller language model. That's how I chose 'deberta-v3-xsmall' as my base model. This choice was also motivated by the potential benefits of relative positional embeddings, especially for sequences of varying lengths. This post details this journey, highlighting the challenges and learnings from each step.</p>
<p><strong>Implementing Auxiliary Losses: A Strategic Approach to Model Improvement</strong></p>
<p>In my solution, I considered incorporating auxiliary losses, given the plethora of potential candidates. My approach involved adding a dedicated 'head' for each candidate and training the model for five epochs. In the end, I decided to keep only those heads where the validation loss was decreasing, adjusting their weights accordingly. However, I faced time constraints due to personal commitments like work and family:)</p>
<p>The final model included heads for the targets, reactivity errors, structure, dataset name, and signal-to-noise ratios. Unfortunately, I couldn't explore the 'bpp' aspect as much as I would have liked. I selected the weights for the losses based on intuition.</p>
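<p>A hedged sketch of such a multi-head setup (head names, output sizes, and the use of <code>AutoModel</code> are assumptions; sequence-level targets such as dataset name or signal-to-noise would additionally need pooling over tokens):</p>
<pre><code>import torch.nn as nn
from transformers import AutoModel

class MultiHeadRNAModel(nn.Module):
    """Backbone + one head per auxiliary target; per-head loss weights
    would be combined into a single training objective."""

    def __init__(self, name="microsoft/deberta-v3-xsmall"):
        super().__init__()
        self.backbone = AutoModel.from_pretrained(name)
        h = self.backbone.config.hidden_size
        self.heads = nn.ModuleDict({
            "reactivity": nn.Linear(h, 2),        # 2A3 / DMS targets
            "reactivity_error": nn.Linear(h, 2),  # auxiliary target
            "structure": nn.Linear(h, 3),         # auxiliary target, e.g. bracket classes
        })

    def forward(self, input_ids, attention_mask):
        hidden = self.backbone(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        return {k: head(hidden) for k, head in self.heads.items()}
</code></pre>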
<p><strong>Utilizing the Full Dataset</strong></p>
<p>In my approach, I made a strategic decision to assign weights to each sample in the loss calculation. These weights were based on the signal-to-noise ratio and the number of reads. Initially, due to my limited computational resources, I incrementally added parts of the dataset to the training process. However, I soon realized the necessity of utilizing the entire dataset.</p>
<p>This approach significantly extended the training duration. As a result, I couldn't complete the computation for all folds - for instance, in the image above, only 2 out of the 5 folds were processed. This post will explore the challenges and insights gained from this method, particularly how integrating the entire dataset and weighted samples impacted the model's training and performance.</p>
<p><strong>Blending Checkpoints</strong></p>
<p>In the concluding phase of my solution for the Kaggle competition, I focused on blending models at different checkpoints. Initially, I was conserving the top 3 checkpoints. As an experiment, I expanded this to include the best 5, and eventually, I blended the top 10 checkpoints. Intriguingly, each time I increased the number of checkpoints in the blend, I observed a corresponding improvement in my leaderboard score.</p> |
Stanford Ribonanza RNA Folding | GraphAttention solution approach | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stanford Ribonanza RNA Folding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi there, thanks to Stanford University for this competition!<br>
Maybe someone will find my work interesting.<br>
Not enough time was my biggest problem, but after all this is my first medal, and even though it's silver, I am really happy.</p>
<ol>
<li>Preprocessing RNA sequence to graph:<br>
1.1. Used Eternafold pkg for extracting secondary structure.<br>
1.2. One-hot encoded nucleotides -> node features<br>
1.3. Used as edge features -> [phosphodiester_bond, base_pairing (canonical or wobble), BPPs]</li>
<li>My final solution was:<br>
2.1. Random walk as positional encoding.<br>
2.2. Architecture: <br>
Combination of:<br>
local attention -> GraphTransformer.<br>
global attention -> Attention encoder.<br>
20-layer network (192 hidden dim); for me, expanding the hidden dimension didn't help.</li>
</ol>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F10350612%2F29ca6292363042edcf0923d231234396%2Fd.drawio.png?generation=1702059627165899&alt=media" alt=""></p>
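<p>For reference, random-walk positional encoding can be computed directly from the graph's adjacency matrix; a small NumPy sketch (the number of steps <code>k</code> is an assumption):</p>
<pre><code>import numpy as np

def random_walk_pe(adj, k=16):
    """Random-walk positional encoding: for each node, the return
    probabilities diag(P^1), ..., diag(P^k) with P = D^-1 A.
    `adj` is the (n, n) adjacency matrix of the RNA graph."""
    deg = adj.sum(axis=1, keepdims=True)
    P = adj / np.clip(deg, 1e-9, None)
    pe = np.zeros((adj.shape[0], k))
    Pk = np.eye(adj.shape[0])
    for i in range(k):
        Pk = Pk @ P
        pe[:, i] = np.diag(Pk)
    return pe  # concatenated to the node features
</code></pre>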
<p>These works helped me a lot:<br>
<a href="https://arxiv.org/abs/2009.03509" target="_blank">https://arxiv.org/abs/2009.03509</a><br>
<a href="https://arxiv.org/abs/2205.12454" target="_blank">https://arxiv.org/abs/2205.12454</a></p>
<p>3.<br>
Default split of the data into train and validation (10% of sequences with SN_filter = 1).<br>
During training I used RNA sequences with signal-to-noise >= 0.8</p>
<p>For the final submission I used only a single model's predictions, with no stacking; stacking might have increased prediction quality but, as I said before, lack of time was the problem.</p>
<p>github:<a href="https://github.com/cerenov94/ribonanzaRNA" target="_blank">https://github.com/cerenov94/ribonanzaRNA</a><br>
Instruments:<br>
Pytorch Geometric,Graphein</p> |
Stanford Ribonanza RNA Folding | [18th place solution] Transformer + bpp conv + relative position bias | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stanford Ribonanza RNA Folding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Kaggle and the hosts for organizing this competition. This competition was a good experience for me.</p>
<p><strong>Features</strong></p>
<p>I only used RNA sequence and bpp matrix provided by host. I tried other features, but it didn't have much effect.</p>
<p><strong>Model</strong></p>
<pre><code>x1 = rna_sequence
x2 = bpp_matrix
x = tf.keras.layers.Embedding(num_vocab, hidden_dim, mask_zero=True)(x1)
x2 = tf.expand_dims(x2, axis=1)
x2 = tf.keras.layers.Conv2D(16, 16, activation='gelu', padding='same', data_format='channels_last')(x2)
pos = RelativePositionBias(16, 32, 128)(x, x)
for i in range(12):
    x = rel_transformer_block(hidden_dim, hidden_dim*4, head=16, drop_rate=0.2, dtype=dtype)([x, x2, pos])
x = tf.keras.layers.Dense(2)(x)
</code></pre>
<p><strong>Training</strong></p>
<ul>
<li>Epoch : 80</li>
<li>Batch size : 128</li>
<li>learning rate : 1e-3, with Cosine Decay</li>
<li>Optimizer : Ranger</li>
<li>loss : mae loss</li>
<li>only single model</li>
</ul>
<p><strong>Mystery</strong></p>
<p>When I added the conv2d layer to my model, the learning speed slowed down noticeably. Does anyone know why?</p> |
Stanford Ribonanza RNA Folding | [1st place solution] Transformer model with Dynamic positional encoding + CNN for BPPM features | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stanford Ribonanza RNA Folding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We did not impose any explicit relative distance constraint; however, this layer may perform differently at relative distances greater than those in the training data set.</p>
<p>The Ribonanza RNA Folding competition has been an amazing opportunity and we are really glad to have participated in it. Kudos to the hosts for their substantial effort in keeping the high organizational level and giving the contestants plenty of valuable insight and support. We would also like to thank the community for the fruitful public discussions and coding initiatives. Congratulations to everyone involved! 🎉</p>
<h1>Code</h1>
<p>Open source code is available on <a href="https://github.com/autosome-ru/vigg_ribonanza/" target="_blank">github</a>.</p>
<h1>TLDR</h1>
<p>Our solution is based on a transformer architecture. We predicted a base pair probability matrix (BPPM) for each RNA sequence using EternaFold and added convolutional blocks to our architecture to process BPPM features. For some models we added a Squeeze-and-Excitation layer in the convolutional blocks. Also, we add these features to the attention values before the softmax operation in the Self-Attention block. To allow better generalization to longer inputs we implemented Dynamic Positional Bias. Then we ensembled models with slight differences in architecture and training process.</p>
<h1>Data Preprocessing</h1>
<p>At first, for each sequence in train and test datasets we calculated a base pair probability matrix (BPPM) using EternaFold. During training and inference phases we passed to our model the RNA sequences encoded with a learnable embedding layer. Each nucleotide was considered a token and special <strong>< start ></strong> and <strong>< end ></strong> tokens were placed at both ends as well. To provide the model with the information about whether the sequence comes from a “clean” subset of training dataset (which SN_filter values show - 1 corresponds to “clean” sequences with high signal-to-noise ratio and 0 is otherwise) we encoded SN_filter values with a learnable embedding layer and added corresponding embeddings to sequence embeddings. The embedding dimension was chosen to be of size <strong>192</strong>. BPPMs were padded with zeros at their margins to account for adding <strong>< start ></strong> and <strong>< end ></strong> tokens. The figure below summarizes the data preprocessing part.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F16725112%2F482c2d20fce57fa513021909a515e98b%2Fpreprocessing.png?generation=1701994647204619&alt=media" alt=""></p>
<h1>Model</h1>
<p>The model idea is partially inspired by <a href="https://www.kaggle.com/shujun717" target="_blank">@shujun717</a> solution to the Open Vaccine challenge, <a href="https://www.kaggle.com/competitions/stanford-covid-vaccine/discussion/189564" target="_blank">link</a></p>
<p>The model takes a sequence of tokens and BPPM as input and outputs DMS_MaP and 2A3_MaP reactivities for each input nucleotide. Its architecture comprises 12 consecutive Transformer Encoder Layers and an output projection linear layer. Each Transformer block takes a sequence of tokens and BPPM features from the previous layer and outputs updated feature maps as shown below. The Transformer Encoder block adopts common transformer encoder architecture, except we modified the Self-Attention block to ensure interaction between BPPM features and sequence features.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F16725112%2F0c70879344b972d64ba13945e163a126%2Fmodel_general_view.png?generation=1701994737102906&alt=media" alt=""></p>
<h2>Self-Attention Block</h2>
<p>In the Self-Attention (SA) block we implemented an attention mechanism with the following modifications: after attention values for each head are calculated, we add BPPM features updated by the ‘Convolutional block’, which outputs BPPM features with the number of channels corresponding to the number of heads in the SA block. We set the number of heads in SA and corresponding channels of BPPM features to be 6. Thus, the hidden dimension size of Q, K, V matrices is 32. The overall structure of the SA block is shown below.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F16725112%2F5a7bf6e6f893938ced8bdd2e9a1fb08c%2Fselfattention.png?generation=1701994715778085&alt=media" alt=""></p>
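<p>A simplified PyTorch sketch of this Self-Attention block (the Convolutional block is reduced to a single Conv2d + BatchNorm + GELU here; padding/masking, the learnable gamma scaling, and the positional bias are omitted):</p>
<pre><code>import torch.nn as nn

class BPPBiasedSelfAttention(nn.Module):
    """Self-attention whose logits are biased by convolved BPPM features,
    one BPPM channel per head (192 hidden dim, 6 heads as in the write-up)."""

    def __init__(self, dim=192, n_heads=6):
        super().__init__()
        self.n_heads, self.head_dim = n_heads, dim // n_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.out = nn.Linear(dim, dim)
        self.bpp_conv = nn.Sequential(
            nn.Conv2d(1, n_heads, kernel_size=3, padding=1),
            nn.BatchNorm2d(n_heads),
            nn.GELU(),
        )

    def forward(self, x, bpp):                       # x: (B, L, D), bpp: (B, L, L)
        B, L, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(B, L, self.n_heads, self.head_dim).transpose(1, 2)
        k = k.view(B, L, self.n_heads, self.head_dim).transpose(1, 2)
        v = v.view(B, L, self.n_heads, self.head_dim).transpose(1, 2)
        attn = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5
        attn = attn + self.bpp_conv(bpp.unsqueeze(1))  # add one BPPM bias map per head
        attn = attn.softmax(dim=-1)
        return self.out((attn @ v).transpose(1, 2).reshape(B, L, -1))
</code></pre>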
<h2>Dynamic Positional Bias</h2>
<p>The sequence length is distributed differently in train and test datasets (test sequences are generally longer). To allow for the better generalization of longer inputs we implemented a positional encoding to be added to attention values. We found Dynamic Positional Bias to be of more use compared to other relative positional encoding methods we tried, such as xPos (rotary positional encoding) and ALiBi. Dynamic Positional Bias calculates for each head a relative positional bias map, which is learnable and depends on sequence length. Relative positional bias doesn’t allow to leverage distance from start and end of sequence, so tokens <strong>< start ></strong> and <strong>< end ></strong> were added to fix that.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F16725112%2F8d8468983d7a7d057dcf74d7511b3ac8%2Fdynpos%20(1).png?generation=1701994757603986&alt=media" alt=""></p>
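<p>A minimal sketch of Dynamic Positional Bias: a small MLP maps relative distances to per-head biases, so the bias table can be recomputed for any sequence length at inference time (the layer sizes are assumptions):</p>
<pre><code>import torch
import torch.nn as nn

class DynamicPositionalBias(nn.Module):
    """Learnable bias over relative distances, produced by an MLP and
    added to the attention logits of each head."""

    def __init__(self, n_heads=6, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(1, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, n_heads),
        )

    def forward(self, seq_len, device):
        # All relative distances -(L-1) ... (L-1), fed through the MLP.
        rel = torch.arange(-(seq_len - 1), seq_len, device=device, dtype=torch.float)
        bias = self.mlp(rel.unsqueeze(-1))                    # (2L-1, heads)
        idx = torch.arange(seq_len, device=device)
        rel_idx = idx[None, :] - idx[:, None] + seq_len - 1   # (L, L) indices into bias
        return bias[rel_idx].permute(2, 0, 1)                 # (heads, L, L)
</code></pre>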
<h2>Convolutional Block and SE block</h2>
<p>The models in the final ensemble come in two versions that are slightly different in the structure of the Convolutional block. The basic Convolutional block consists of 2D convolutional layer, batchnorm layer, activation and learnable gamma parameters that scale the output feature channels, whereas the modified version of this block (SE-Convolutional block) also contains Squeeze-and-Excitation layer, hence the name. The SE layer applies input-dependent rescaling of values along the channels as shown below. Thus, the only difference between models in the ensemble is the presence of the SE layer.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F16725112%2F4b8f15f5aa950ba64ecbd55df123cad4%2Fconv_se.png?generation=1701994778649467&alt=media" alt=""></p>
<h1>Training Process</h1>
<p>The training process has been adapted from the <a href="https://www.kaggle.com/code/iafoss/rna-starter-0-186-lb/notebook" target="_blank">notebook</a> by <a href="https://www.kaggle.com/IAFOSS" target="_blank">@IAFOSS</a><br>
We used a one-cycle learning rate schedule (pct_start=0.05, lr_max=2.5e-3,) coupled with AdamW optimizer (wd=0.05), and batch size set to 128.<br>
The number of epochs for model training was determined from the dataset size.<br>
For final models (trained almost on the whole dataset) we used 270 epochs. <br>
In each epoch, 1791 batches were processed (we kept this number due to historical reasons), elements of each batch were sampled from dataset with the weight = 0.5 * torch.clamp_min(torch.log(sn + 1.01),0.01)</p>
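<p>For instance, this sampling scheme can be implemented with a weighted sampler (here <code>sn</code> is the per-sample signal-to-noise tensor and <code>train_dataset</code> the training dataset; both names are placeholders):</p>
<pre><code>import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

# Per-sample weight, exactly as given above.
weights = 0.5 * torch.clamp_min(torch.log(sn + 1.01), 0.01)

# 1791 batches of 128 samples per epoch, drawn with replacement.
sampler = WeightedRandomSampler(weights, num_samples=1791 * 128, replacement=True)
loader = DataLoader(train_dataset, batch_size=128, sampler=sampler)
</code></pre>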
<p>We have also found that training model with a simple SGD optimizer for ~15 epochs of 500 batches each additionally improves model performance (the number of epochs varies so we used a small validation set to determine the exact number of epochs)</p>
<p>Additionally, one model was trained to predict 2A3 given DMS, RNA sequence and BPPM (dms-to-2a3 model).</p>
<h1>Inference</h1>
<p>At the inference phase for all inputs we set the SN filter value to be 1 as if they come from a “clean” dataset.</p>
<h1>Ensembling</h1>
<p>We ensembled 15 models with SE-Convolutional block, 10 models with plain Convolutional block, 2 with plain Convolutional block trained on split by sequence lengths (one of these two models accepts bracket features). We took the average of their predictions. Then we predicted 2a3 reactivities based on averaged dms reactivities using dms-to-2a3 model and added these predictions to averaged 2a3 reactivities in the following way: (27/28)*averaged_2a3 + (1/28)*predicted_2a3.</p>
<h1>Other splits</h1>
<p>The clear problem with a simple KFold split is the high sequence similarity across the training dataset and the fact that test sequences are very distinct from the training data. This might hamper the model development, because the increase in quality on the validation set could be due to overfitting rather than actual improvement.</p>
<p>We have calculated a hamming distance matrix for all the sequences present in the training set and performed a modified DBSCAN clustering procedure with distance threshold set at 0.2. In the following picture we show the clusters mapped to their respective cluster identifier (a cluster ID was assigned as a number of the smallest sequence within that cluster in the train dataset).</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F16725112%2F55dc21823386d878fb212d681cae593e%2Fcluster.jpg?generation=1701994798203967&alt=media" alt=""></p>
<p>We tested several splits based on sequence identity (the easiest one is just to split the data into folds without shuffling) and found that while the model's final validation performance degrades as we choose stricter and stricter distance thresholds, its relative value behaves the same way as for a simple KFold split. So, we decided to use a simple KFold split when training the final models.</p>
<p>Also, we have conducted a test of the model performance on length-based split. For that, we trained a model on the short sequences (length < 206) and validated it on sequences of size = 206. The behavior of the validation metric was slightly noisier but still highly correlated with a metric for a simple KFold split.</p>
<h1>Public data leakage</h1>
<p>Approximately 13% of public test sequences are identical to the ones present in the train dataset (by sequence). To avoid selecting a model that is memorizing more of these sequences rather than learning RNA-related stuff, we zeroed out the predictions for these sequences in most of our submissions, while sometimes sending non-zeroed out submissions in order to compare our performance to other participants.</p>
<h1>Features we also tried</h1>
<h2>capR</h2>
<p>We don’t have conclusive results for that feature. It seems like it is not beneficial for the model on average but sometimes it resulted in a better model and sometimes – in worse. We decided to not use this feature.</p>
<h2>Brackets</h2>
<p>We have tried to use brackets generated by EternaFold, ContraFold, ViennaRNA, etc., as well as programs for pseudoknot prediction (like IPknot). However, after adding EternaFold BPPMs to the model, adding other features yields no significant increase in model performance. For some models we used brackets just to augment the model.</p>
<h2>Different BPPMs</h2>
<p>All other BPPMs (ContraFold, ViennaRNA, RNAsoft, RNAstructure) result in a suboptimal model.<br>
Averaging BPPMs doesn’t result in better performance.<br>
BPPMs from RFold yield the same quality as ViennaRNA BPPMs.<br>
The RNA-FM model produces both per-nucleotide embeddings and a BPPM-like matrix. Still, those don't help the model at all; using them results in only a slight increase in model performance compared to a sequence-only model.</p>
<h2>SQUARNA matrix</h2>
<p>SQUARNA (<a href="https://github.com/febos/SQUARNA" target="_blank">github</a>) outputs a matrix different from BPPM, but can be used in the same manner. Unfortunately, this feature also didn’t yield any additional performance increase.</p>
<h1>What we also tried</h1>
<h2>Fully-Convolutional architecture</h2>
<p>The initial reason we decided to take part in the competition was to test our model LegNet <a href="https://academic.oup.com/bioinformatics/article/39/8/btad457/7230784" target="_blank">link</a>, which shows SOTA results on DNA sequences and worked well on some RNA-related tasks (not yet published).<br>
Unfortunately, any modifications of this architecture resulted in subpar performance when compared with properly tuned transformer models. This can be explained by the fact that predicting RNA secondary structure requires attending to long-range contacts. The transformer architecture is better suited to such cases.</p>
<h2>Subsetting data</h2>
<p>Subsetting data (filtering by different thresholds on SN ratio) resulted in a performance boost for all models. However, this technique was superseded by weight sampling, which proved itself to be more effective.</p>
<h2>Fine-tuning on public datasets</h2>
<p>We tried to fine-tune our model on a public dataset, gathered by the organizers (<a href="https://www.kaggle.com/datasets/rhijudas/rmdb-rna-mapping-database-2023-data/data" target="_blank">link</a>) by training model to predict the results of the public experiments (excluding ones with a small number of samples) and for Ribonanza data simultaneously. Unfortunately, this also gave no boost to the model performance.</p>
<h2>Using 3D data</h2>
<p>We tried to use data about predicted 3D structure of 100k sequences from the train dataset but gave up on that once we had visually analyzed them:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F16725112%2F2d054d1679176e0664130f6c46ccb0ae%2Ftrashfold.jpg?generation=1701994817511350&alt=media" alt=""></p>
<h2>Absolute positional embedding</h2>
<p>Using absolute positional embedding leads to unsolvable issues when generalizing upon longer sequences.</p>
<h2>Relative positional embedding</h2>
<p>The simplest approach is to augment absolute positional encoding during the training phase to shift randomly from 0 to (Lmax - seqlen) position. This indeed solves the issue with extrapolation, but works worse than other methods.<br>
Rotary positional embedding, unfortunately, doesn't help the model to generalize to longer lengths.<br>
ALiBi positional embedding solves the issue with extrapolation but even after keeping it only for a part of heads (as suggested in <a href="https://github.com/lucidrains/x-transformers" target="_blank">https://github.com/lucidrains/x-transformers</a>) still behaves worse than dynamic positional bias.</p>
<h2>Augmentation</h2>
<p>First, we tried to use reverse augmentation. This can be done in three ways:<br>
reversing sequence before any modifications,<br>
reversing the sequence before padding, but after adding the < start > and < end > tokens,<br>
reversing sequence after padding.</p>
<p>The first two ways yield no gain for all variants of models we tested. Yet, the third one (upon coupling with additional finetuning) gave us a good result for a model with xpos positional encoding. The resulting single-model performance was 0.13937 on the public leaderboard. Unfortunately, xpos shows rather poor performance on long sequences so we abstained from using this model in the final submission.</p>
<p>We have also tried shift augmentation and different sequence padding approaches. This didn’t improve our model performance as well.</p>
<h2>Sliding window</h2>
<p>One of the possible ways to generalize for larger sequences is to predict reactivities using the sliding window. However, the idea is somewhat wrong in a biological sense, and it results in performance degradation when testing on train dataset sequences.</p>
<h2>Pseudolabelling</h2>
<p>Once we obtained ensembles of best-performing models we tried to use them to pseudolabel test dataset and use predictions with the highest confidence to train new models. While this indeed results in a better single-performing model, adding such a model to ensemble doesn’t improve ensemble performance.</p>
<h2>Changing loss</h2>
<p>Instead of filtering sequences with low SN we tried to mask positions with high reactivity error as it was done by <a href="https://www.kaggle.com/nullrecurrent" target="_blank">@nullrecurrent</a> in Open Vaccine challenge (<a href="https://www.kaggle.com/competitions/stanford-covid-vaccine/discussion/189620" target="_blank">link</a>)<br>
This resulted in poor performance.<br>
We tried to weight loss for each sequence by its SN – it didn’t result in any improvement</p>
<h1>Tools</h1>
<p><a href="https://github.com/DasLab/arnie/tree/master" target="_blank">arnie</a><br>
<a href="https://github.com/eternagame/eternafold" target="_blank">EternaFold</a></p>
<h1>Links</h1>
<h2>Squeeze-and-excitation block</h2>
<p>Hu, J., Shen, L., & Sun, G. (2018). Squeeze-and-excitation networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 7132-7141).</p>
<h2>Dynamic positional bias</h2>
<p>Wang, W., Chen, W., Qiu, Q., Chen, L., Wu, B., Lin, B., … & Liu, W. (2023). Crossformer++: A versatile vision transformer hinging on cross-scale attention. arXiv preprint arXiv:2303.06908.</p>
<p>Hi! Thank you for your appreciation! I used Inkscape - it is a vector graphics editor allowing to flexibly implement any ideas.</p>
<p>Hi! I'm very glad to hear that! I used a vector graphics editor called Inkscape.</p> |
Stanford Ribonanza RNA Folding | [37th place solution🥈] Single Graph Transformer + Gated GCN Model | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stanford Ribonanza RNA Folding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>1. Leadboard score</h1>
<p>Public score: 0.15187; Private Score: 0.15349</p>
<h1>2. Data preprocess</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3160035%2Fd294a27c2af60aaccf72f496078dd7b1%2F1.png?generation=1703406681187161&alt=media" alt=""></p>
<h2>2.1 Graph node feature</h2>
<p>input node feature:</p>
<pre><code># the nucleotide keys and 0/1 values were stripped in the original post;
# the mapping below is a reconstruction of a standard one-hot encoding
RNA_ONE_HOT = {
    'A': [1, 0, 0, 0],
    'U': [0, 1, 0, 0],
    'G': [0, 0, 1, 0],
    'C': [0, 0, 0, 1]
}
</code></pre>
<p>output node feature: DMS_MaP and 2A3_MaP score, which were clipped to [0, 1] when training.</p>
<h2>2.2 Edge feature</h2>
<p>For neighbor edge in original RNA sequence, the edge feature = 1.0;<br>
For Ribonanza position pairs edge, the edge feature = 1.0 + 10 * Watson-Crick base pair probabilities, which are recorded in the supply Ribonanza_bpp_files.</p>
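<p>A PyTorch Geometric sketch of this graph construction (the function name and the <code>bpp_pairs</code> input format are assumptions; only the edge-weight formula follows the description above):</p>
<pre><code>import torch
from torch_geometric.data import Data

def build_graph(node_onehot, bpp_pairs):
    """node_onehot: (L, 4) one-hot matrix; bpp_pairs: list of (i, j, prob)
    taken from Ribonanza_bpp_files. Each edge is added in both directions."""
    L = len(node_onehot)
    src, dst, attr = [], [], []

    # Backbone (phosphodiester) edges between neighbouring nucleotides: weight 1.0.
    for i in range(L - 1):
        for a, b in ((i, i + 1), (i + 1, i)):
            src.append(a); dst.append(b); attr.append(1.0)

    # Base-pair edges weighted by 1.0 + 10 * pairing probability.
    for i, j, p in bpp_pairs:
        for a, b in ((i, j), (j, i)):
            src.append(a); dst.append(b); attr.append(1.0 + 10.0 * p)

    return Data(
        x=torch.tensor(node_onehot, dtype=torch.float),
        edge_index=torch.tensor([src, dst], dtype=torch.long),
        edge_attr=torch.tensor(attr, dtype=torch.float).unsqueeze(-1),
    )
</code></pre>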
<h1>3. Model</h1>
<p>The model I chose was GraphGPS: github:<a href="https://github.com/rampasek/GraphGPS" target="_blank">https://github.com/rampasek/GraphGPS</a></p>
<h2>3.1 Basic layer</h2>
<p>The basic GraphGPS layer is composed of Graph Transformer and Gated GCN.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3160035%2Ff441e8b5ec70ff3a31f1e3c657f66d0c%2FSnipaste_2023-12-24_15-31-40.png?generation=1703406706736766&alt=media" alt=""></p>
<h2>3.2 Positional encoding</h2>
<p>I tried <strong>Laplacian Positional Encoding</strong> at the beginning, but it didn't improve the performance of GraphGPS, which may mean that Laplacian Positional Encoding is not useful when facing topology diversity.<br>
Hence, I removed the positional encoding in GraphGPS.</p>
<h2>3.3 Model hyper-parameters</h2>
<pre><code>
</code></pre>
<h2>3.4 Data augmentation</h2>
<p>I tried two data augmentation methods: <strong>sub graph sampling</strong> and <strong>RNA sequence reversing</strong>.<br>
To be specific, sub-graph sampling chooses a random center node from the original graph and samples its k-step neighbors to generate a new graph; RNA sequence reversing means randomly reversing the input sequence during training.<br>
Experimentally, I found that <strong>RNA sequence reversing</strong> works better for this task.</p>
Stanford Ribonanza RNA Folding | [3rd Place Solution] AlphaFold Style Twin Tower Architecture + Squeezeformer | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stanford Ribonanza RNA Folding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hey <a href="https://www.kaggle.com/digitalembrace" target="_blank">@digitalembrace</a>, sure lets talk via email</p>
<p>A big thanks to the organizers for making this competition happen and putting their efforts in solving hard problems, such as RNA structure prediction. This sentence stuck with me throughout the whole competition: “… without being able to understand how RNA molecules fold, we are missing a deeper <strong>understanding of how nature works, how life began</strong>, and how we can design…“</p>
<p>Many will glance over this statement but it is a huge mystery that sometimes kept me awake at night. I mean it is SO strange… the evolution/creation of 4 nucleotides with specific physical/chemical properties that can code and take on different structures to fulfill specific actions in the body, and we still don’t understand the origin/purpose of these molecules… Hopefully we are in the right direction in finding out the truth and reason behind it or even “what” created them. </p>
<p>The solution is not based on dozens of ensembled models, but rather on 2 strong independent models. There are 2 models since I wasn't sure if I would be able to produce a viable AlphaFold-style solution in the allotted time, so I focused on a smaller “safe” version model first and a bigger “riskier” model later in the competition. But the ensemble of the 2 proved crucial in gaining the 3rd place.</p>
<h2>TLDR</h2>
<p>Blend of 2 Independent models. The smaller “safer” version model is based on augmented Squeezeformer architecture, which consists of RelativeMultiheadSelfAttention, Convolution and FeedFoward modules. Learnable BPP’s through 2d convolution are added to the attention scores, and augmented low Signal-to-Noise data is additionally used in training apart from clean training data. Bigger twin-tower model is based on augmented Alpha Fold style architecture, that consists of MSA stack representation and Pair stack representation, that communicate in a criss-cross fashion through Outer Product Mean and Pair Representation Bias.</p>
<h2>Data Preprocessing/Cross Validation</h2>
<p>Training data was split in 4 folds similar to this <a href="https://www.kaggle.com/code/iafoss/rna-starter-0-186-lb/notebook" target="_blank">notebook</a>. There was a stable correlation between local cv vs lb. BPP’s were processed and cached as .npz files. Most of the time, models were evaluated only on fold0. <br>
Twin-tower model was trained only on clean training data and it has the power to predict its confidence/error_estimate similar to pLDDT in AlphaFold. After getting a good generalization from the twin-tower model on the clean data, new training data was created by augmenting the noisy low SNR data with the model’s confidence/error_estimate. For a particular nucleotide position in the low SNR dataset the idea is to combine how confident the model's prediction is with the position's reactivity error from the experiment and thus “fix” the noisy data. This gave significant improvements in the smaller “safer” model which was trained on clean dataset + improved low snr dataset. Even with this useful new data being available for the bigger twin-tower model, it was never trained on this extra data because of time constraints and only the clean dataset was used for final submission. Twin-tower experiments are currently underway on the full dataset.</p>
<h2>Squeezeformer Model</h2>
<p>The smaller “safer” model is based on Squeezeformer architecture which was used in previous Google American Sign Language competition. The inputs to the model were tokenized RNA sequences and BPP matrices. The model consists of 14xSqueezeformer blocks and an output projection layer that predicts chemical reactivities at each position. One Squeezeformer block consists of three modules: Relative MultiHeadSelfAttention module, Convolution module, and FeedForward module.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F14754958%2F55cab60a14c35cb028ecc1de472c6566%2FSqueezeFormerJPG-01.jpg?generation=1702092589333558&alt=media" alt=""></p>
<p>Instead of absolute positional encodings, relative encodings are used for generalization on longer sequences. Attention scores apart from relative pos scores are further affected by BasePairProbability matrices, even though I was reluctant to use them at first since they are created with software that is not capable of detecting long range pseudo-knots and this bias is unfortunately induced in the model. The attention scores in the transformer are calculated in this manner: (content score + relative position score)/sqrt(head_dim) + bpp_bias_score. Bpp_bias_score is obtained by passing the BPP matrices through a 2D convolution block.</p>
<h2>AlphaFold style Twin-tower Model</h2>
<p>This model was inspired by Google's AlphaFold and its derivatives OpenComplex/RhoFold. The original AlphaFold architecture relies on two different input representations for its predictions. It jointly uses MSA (Multiple Sequence Alignment) and Pair Representation features. The MSA representation uses row-wise attention to find intra-sequence features, while column-wise attention is used to obtain inter-sequence evolutionary signals from the MSA stack. Since MSA was not helpful for extracting evolutionary information in this competition, because of the synthetic nature of the RNA sequences, just the tokenized input sequence was used. Because MSA was not used, the axial self-attention was replaced with relative multi-head attention and convolution.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F14754958%2F99a3d1c0bbb83caf86c5a82c2d2a7627%2FChemFoldJPG-01.jpg?generation=1702092825428051&alt=media" alt=""><br>
(At the time of submission, only the Single representation branch was used for predictions, the pair features were completely ignored due to time restrictions)</p>
<h3>The workflow of the data from the input all the way to the prediction is as follows:</h3>
<p>1) Input sequence gets tokenized, sequence and pair masks are generated</p>
<p>2) Tokenized sequence are embedded through embedding networks: MSANet and PairNet</p>
<ul>
<li>PairRepresentation features are embedded using Relative2D positional encodings to provide information about position of residues. Maximum position is clipped at 32, and each position afterwards is considered as “far” away. This inductive bias trains the model not to rely heavily on nucleotide positions and generalize better to any length, as stated in the alpha fold supplemental materials.</li>
<li>MSA representation is passed through a simple Embedding layer (since positional encodings are added later in the transformer layer).</li>
</ul>
<p>3) Embedded MSA and Pair features are passed through the main trunk of the network, which consists of 8 Chemformer blocks illustrated above. Some key points are:</p>
<ul>
<li>MSA representation is updated within the Squeezeformer attention <br>
instead of axial self-attention proposed in the original paper.</li>
<li>Pair representation is updated only with triangular multiplicative updates. Triangular self-attention wasn’t used in this model because of memory constraints but using both should give an increase in score.</li>
<li>The MSA representation updates the pair representation through an element-wise outer product that is summed over the MSA sequence dimension.</li>
<li>The Pair representation updates the MSA representation through a projection of additional logits from the pair stack to bias the MSA attention scores.</li>
<li>Both representations are passed through a 2-layer MLP that acts as transition before the communication.</li>
<li>This communication is repeated within each block, so 8 times in total.</li>
<li>Residual connections and Row-wise/Column-wise dropouts are used.</li>
</ul>
<p>4) The processed representations MSA Features and Pair Features are then passed through various heads to extract meaningful information such as confidence/error_estimate, chemical reactivity, base_pair probabilities etc.</p>
<h2>Training</h2>
<p>Training procedure is as follows:</p>
<p>First the Twin-tower model is trained on the clean dataset for 60 epochs with lr of 1e-3. Batch size of 8 per gpu was used with gradient accumulation of 4, leading to an effective batch size of 64 on 2 GPUs. The optimizer used was AdamW with cosine scheduling, weight decay parameter is 0.05 and warmup is 0.5. Model was trained on 2x4090 for 30 hours total. No extensive parameter tuning was done for this model.<br>
This leads to a 0.13746 public/0.14398 private single-model score. After this, a synthetic dataset was created by blending the model's predictions with a subset of noisy data that was between 0.35 and 1 SNR. Selecting a lower SNR threshold increased the number of training examples but lowered their quality. The current synthetic dataset is created with simple weighting; because of time constraints, even though the model was trained and outputs valid plddt/error_estimates, they were not fully utilized. I didn't have time to experiment with the formula for combining the plddt score with the experimental reactivity_error on a per-nucleotide basis, which is guaranteed to produce better synthetic data. <br>
The smaller “safer” model was trained for 200 epochs with lr of 7e-4 a batch size of 64. Optimizer and lr scheduling was the same as before, however no grad accumulation was used. This model was trained both on clean dataset and the synthetically created dataset which doubled the training examples. This single model achieves 0.13865 public/0.14256 private score.</p>
<p>A day before the competition ended I decided to just run longer epochs on the Twin-tower model, so I continued the same training procedure with starting weights from epoch 30 of the previous iteration. This gave improvements just by running the model longer which led to the final 0.13706 public/0.14366 private score of this single model. The final score is a blend of the twin-tower model which was trained only on the input sequence and tried to learn all of the interactions on its own and the squeeze model which contributed with the BPP’s and the synthetic data to the final prediction.</p>
<h2>Not utilized/Can be improved for Twin-tower model</h2>
<ul>
<li>Model was trained only with sequences as input. </li>
<li>BPP’s and other supplemental data can be used. </li>
<li>Synthetic dataset which improved the smaller model should be used.</li>
<li>In my tests, increasing the depth (8->12 blocks) gives an increase in score; the submission uses only 8 blocks. </li>
<li>Deeper models (16, 24 blocks) will be tested soon by utilizing checkpointing/rematerialization to counter the memory problem and interleaving the convolution blocks after several MHSA blocks to aid training for these deeper models. </li>
<li>Recycling from the original paper was not utilized.</li>
<li>Only triangular multiplicative updates are used in the Pair Representation, triangular self-attention was not utilized.</li>
<li>Since RNA can take on multiple conformations, dropout can be utilized at inference time and results averaged to get a better estimate of the RNA’s structure</li>
</ul>
<p>All of these adjustments are very likely to provide benefits to the model. Some of the tests are currently underway.</p>
<h2>Comments</h2>
<p>The twin-tower model was a large task, partially because I was competing solo. The model at submission time was trained only on the clean dataset, and only the MSA Feature pathway was used in the predictions; the other Pair Feature pathway was completely ignored, but it can be used to try to recreate the BPP's, which should increase the score (tests are underway as I'm writing). The best submission of the model without BPP's, loop types or any pre/post processing is 0.13709 public and 0.14366 private. However, the twin-tower model has a bigger gap in generalization compared to the smaller "safer" squeezeformer model, which scored 0.13865 public but 0.14265 private, so I am currently investigating the reason behind it.</p>
<p>I am new to machine learning. I started to learn the field in May of this year, so I am sure there will be a lot of mistakes in the code and in my approach and sorry if my explanation is all over the place, all this is new to me and I am still learning.</p>
<p>Open Sourced Code:<br>
<a href="https://github.com/GosUxD/OpenChemFold" target="_blank">https://github.com/GosUxD/OpenChemFold</a></p>
<h2>References:</h2>
<p>[1] Squeezeformer: An Efficient Transformer for Automatic Speech Recognition<br>
Sehoon Kim, Amir Gholami, Albert Shaw, Nicholas Lee, Karttikeya Mangalam, Jitendra Malik, Michael W. Mahoney, Kurt Keutzer arXiv:2206.00888 [eess.AS] <a href="https://doi.org/10.48550/arXiv.2206.00888" target="_blank">https://doi.org/10.48550/arXiv.2206.00888</a></p>
<p>[2] Winner of Google American Sign Language Fingerspelling Competition<br>
<a href="https://github.com/ChristofHenkel/kaggle-asl-fingerspelling-1st-place-solution" target="_blank">https://github.com/ChristofHenkel/kaggle-asl-fingerspelling-1st-place-solution</a></p>
<p>[3] Jumper, J., Evans, R., Pritzel, A. et al. Highly accurate protein structure prediction with AlphaFold. Nature 596, 583–589 (2021). <a href="https://doi.org/10.1038/s41586-021-03819-2" target="_blank">https://doi.org/10.1038/s41586-021-03819-2</a></p>
<p>[4] OpenComplex github code repository:<br>
<a href="https://github.com/baaihealth/OpenComplex" target="_blank">https://github.com/baaihealth/OpenComplex</a></p>
<p>[5] E2Efold-3D: End-to-End Deep Learning Method for accurate de novo RNA 3D Structure Prediction<br>
Tao Shen, Zhihang Hu, Zhangzhi Peng, Jiayang Chen, Peng Xiong, Liang Hong, Liangzhen Zheng, Yixuan Wang, Irwin King, Sheng Wang, Siqi Sun, Yu Li. arXiv:2207.01586 [q-bio.QM] <a href="https://doi.org/10.48550/arXiv.2207.01586" target="_blank">https://doi.org/10.48550/arXiv.2207.01586</a></p>
<p>i am not working at a company or at any university, just a very curious guy i suppose</p>
<p>The actual "squeeze" idea was not used, but the general architecture and blocks</p>
<p>Hey <a href="https://www.kaggle.com/greySnow" target="_blank">@greySnow</a>, this is the general idea. The final single representation obtained from the MSA Features is ran through the pCDT Head (predicted Chemical Difference Test) which is analogous to the pLDDT (predicted Local Distance Difference Test) in Alpha Fold. </p>
<p>The pCDT head, projects the final single representation into 50 bins, so (batch, seq_len, 2, 50) where each bin is covering a 0.02 chemical distance difference range between 0-1 [*Testing to lower this range between (0-0.5) because most of the errors the model makes are less than 0.5, so half of the bins are not used most of the time]. </p>
<p>Next the final chemical reactivity predictions are scored against the ground truth. The L1 difference between the prediction and ground truth is discretized into 50 bins (with values greater than 1 being placed in the final bin) and used as the target value for the cross-entropy loss. Simply stated the model is trying to guess in which of the 50 bins the current error should be placed. </p>
<p>For evaluation, a pcdt score/confidence percentage is obtained by calculating the expected value of the per-residue pCDT distribution and converted into percentages.</p>
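<p>A sketch of this binning scheme for a single reactivity channel (the write-up uses shape <code>(batch, seq_len, 2, 50)</code> for the two chemistries; the confidence-to-percentage mapping below is one possible interpretation of "converted into percentages"):</p>
<pre><code>import torch
import torch.nn.functional as F

def pcdt_loss_and_confidence(bin_logits, pred_react, true_react, n_bins=50):
    """bin_logits: (B, L, n_bins) from the pCDT head; pred_react / true_react:
    (B, L) chemical reactivities. Bins each cover an error range of 1/n_bins."""
    err = (pred_react - true_react).abs()
    target_bin = torch.clamp((err * n_bins).long(), max=n_bins - 1)  # errors > 1 fall in the last bin
    loss = F.cross_entropy(bin_logits.transpose(1, 2), target_bin)

    # Confidence: expected error under the predicted bin distribution,
    # mapped to a percentage (lower expected error -> higher confidence).
    probs = bin_logits.softmax(dim=-1)
    bin_centers = (torch.arange(n_bins, device=bin_logits.device) + 0.5) / n_bins
    expected_err = (probs * bin_centers).sum(dim=-1)
    confidence = 100.0 * (1.0 - expected_err)
    return loss, confidence
</code></pre>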
<p>Hmm that is interesting to know. How did you guys setup the recycling in your model?</p>
<p>Affiliation?</p>
<blockquote>
<p>Am I right that for larger model you used BPP as one of the targets to be predicted?</p>
</blockquote>
<p>The submission was with nothing other than the sequences. No BPPs were used as inputs, or targets. Pair Features branch was completely cut off and BPP pipeline wasn't even implemented in the final submission of the model. <br>
But currently testing the new version with BPP's as the models target. Will publish results once they are ready.</p>
<blockquote>
<p>Have you tried to use that loss in Squeezeformer Model?</p>
</blockquote>
<p>Didn't work as much on the smaller model, it was just a back up plan. But it is an interesting experiment how the results will vary!</p> |
State Farm Distracted Driver Detection | #3 BR POWER - Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: State Farm Distracted Driver Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>@Lucian Ionita. Yes, I just resized the images to about 80x60 and got the 20 nearest neighbors by Euclidean distance</p>
<p>First of all, congratulations to Jacobkie and Z_B_C for winning such an amazing competition. @Z_B_C we almost drew.
Also I would like to thank Kaggle and StateFarm for such a unique competition. We learned a lot, and it is my first DeepNets competition win ;-D</p>
<p>Our solution is most based in the "Time" feature. Just watch the movies in the link and try to figure out why: <a href="https://www.kaggle.com/titericz/state-farm-distracted-driver-detection/just-relax-and-watch-some-cool-movies">https://www.kaggle.com/titericz/state-farm-distracted-driver-detection/just-relax-and-watch-some-cool-movies</a></p>
<p>All pictures in the trainset are taken in sequence, so we explored that characteristic. The trainset is given in the correct sequence, but the testset is not. So, taking into account that subsequent images are very close to each other (driver position, light, shadows, objects outside the vehicle, etc...), we decided to try nearest neighbors on all images. And to our surprise it presented very good results, catching the nearest images in the correct trainset sequence. The first neighbor has a subject hit ratio of 100% and a class hit rate of 99.5% in the trainset. So we used a blend of the 20 nearest neighbors in our solution.</p>
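<p>A rough sketch of this neighbor search (image paths and the grayscale/RGB handling are illustrative; the write-up mentions resizing to roughly 80x60 and taking the 20 Euclidean nearest neighbors):</p>
<pre><code>import glob
import numpy as np
from PIL import Image
from sklearn.neighbors import NearestNeighbors

# Small thumbnails are enough to recover the original capture sequence.
paths = sorted(glob.glob("train/*/*.jpg"))
X = np.stack([
    np.asarray(Image.open(p).resize((80, 60)), dtype=np.float32).ravel()
    for p in paths
])

knn = NearestNeighbors(n_neighbors=21, metric="euclidean").fit(X)
dist, idx = knn.kneighbors(X)   # idx[:, 0] is the image itself
neighbors = idx[:, 1:]          # the 20 nearest neighbors of every image
</code></pre>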
<p>We trained 8 CNN models, but for our final submission we used only 4. All models trained over 5 folds CV:</p>
<p>1)- resnet-152 caffe, CV: 0.31 LB: 0.27. Original dataset, no augmentation</p>
<p>2)- resnet-152 caffe, CV: 0.36 LB: 0.31. Original dataset, some augmentation (under-tuned?)</p>
<p>3)- resnet-152 torch, CV: 0.223 LB: 0.181. Modified images 1, no augmentation</p>
<p>4)- VGG-16 Keras, CV: 0.30 LB: 0.28. Modified images 2, 2x augmentation</p>
<p>=>Modified Images 1: Took current image and 5 nearest neighbor images. For each one of the 3 RGB channels I replaced the image by:</p>
<p>Channel R: (current image + nearest1)/2</p>
<p>Channel G: (nearest2 + nearest3)/2</p>
<p>Channel B: (nearest4 + nearest5)/2</p>
<p>It built very redundant images. Also on these images we tried to center the steering wheel via regression.</p>
<p>=>Modified Images 2: Took current image and 5 nearest neighbor images. For each one of the 3 RGB channels I replaced the image by:</p>
<p>Channel R: current image - nearest1</p>
<p>Channel G: nearest2 - nearest3</p>
<p>Channel B: nearest4 - nearest5</p>
<p>It builds very redundant images and tries to capture driver movements. </p>
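<p>A short sketch of how both channel-mixing schemes could be assembled from an image and its five nearest neighbors, assuming single-channel arrays of identical shape (function names are illustrative):</p>
<pre><code>import numpy as np

def modified_image_1(cur, nb):
    # nb: list of the 5 nearest-neighbor images (same shape as cur)
    r = (cur + nb[0]) / 2.0
    g = (nb[1] + nb[2]) / 2.0
    b = (nb[3] + nb[4]) / 2.0
    return np.stack([r, g, b], axis=-1)

def modified_image_2(cur, nb):
    # Differences highlight the driver's movement between nearby frames
    r = cur - nb[0]
    g = nb[1] - nb[2]
    b = nb[3] - nb[4]
    return np.stack([r, g, b], axis=-1)
</code></pre>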
<p>So these 4 models presented high diversity between them, and their predictions are very stable at Level 1 training.</p>
<p>So we used the Level 1 predictions of all 4 models to ensemble at Level 2, but this time merging all 20 neighbors' predictions of each image. So each image has its own prediction + 20 neighbors' predictions. For Level 2 training we used scipy's minimize function and created a custom geometric average function to minimize the logloss of all models. That architecture improved the CV of each model:</p>
<p>1)- resnet-152 caffe, CV: 0.31 => 0.192</p>
<p>2)- resnet-152 caffe, CV: 0.36 => 0.276</p>
<p>3)- resnet-152 torch, CV: 0.22 => 0.180</p>
<p>4)- VGG-16 Keras, CV: 0.30 => 0.192</p>
<p>Our final solution is a weighted geometric average of these 4 models; its CV score is about 0.116 with a class hit ratio of 96.4%.</p>
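<p>A hedged sketch of that Level 2 blending idea: optimize per-model weights of a geometric mean against log loss with scipy's minimize. The out-of-fold arrays <code>preds</code> (a list of n_samples x 10 probability matrices) and the labels <code>y</code> are assumed to exist; the exact objective and constraints the team used are not shown here, so treat the details as illustrative:</p>
<pre><code>import numpy as np
from scipy.optimize import minimize
from sklearn.metrics import log_loss

def blend_logloss(w, preds, y, eps=1e-15):
    w = np.abs(w) / np.abs(w).sum()                   # positive, normalized weights
    logp = sum(wi * np.log(np.clip(p, eps, 1.0)) for wi, p in zip(w, preds))
    blend = np.exp(logp)                              # weighted geometric mean
    blend /= blend.sum(axis=1, keepdims=True)         # renormalize each row
    return log_loss(y, blend)

w0 = np.ones(len(preds)) / len(preds)
res = minimize(blend_logloss, w0, args=(preds, y), method="Nelder-Mead")
best_w = np.abs(res.x) / np.abs(res.x).sum()
</code></pre>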
<p>Also, we found via cross-validation that replacing all predictions < 0.00001 with zero improved our scores in CV, but we didn't use that in our last submission. If we had chosen it we would have finished #1 :-/</p>
<p>That's it and...</p>
<p>Congratulations again Jacobkie for the late huge jump to #1. And congrats to all top-10 teams that didn't overfit and to everyone who spent much time on this one. Lots of learnings for all.</p>
<p>Thanks again!
Giba</p>
<p>obs. don't forget to upvote the post and script if you like it ;-P </p>
<p>@Guanshuo Xu:
1)- We used the 3 channels of images because we used pretrained models. Also using multiple neighbors images per channel improves stability, adds redundancy and removes outliers from images.
all other questions: YES</p>
<p>We didn't overfit the test set. We didn't use semi-supervised learning in our solution. What I proposed is, in a real-life problem, to use the last 20 collected driver images to make the prediction and not only the last one. Once the system starts capturing images and calculating probabilities it needs only to predict the last image's probabilities in a moving-window scheme. The other 19 probabilities were already calculated in the past. So there is no delay or excessive computation in doing it.</p>
<p>@David If you built a better single model than any one of my models, congrats! Probably many teams did it. But single models usually don't win competitions.</p>
<p>Also, blending models is not a problem but a good and stable technique that generalizes very well and performs MUCH better than single models, but only if you really know how to do it... </p>
State Farm Distracted Driver Detection | #70: Based on Skin Segmentation | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: State Farm Distracted Driver Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hello everyone, my system is not within the top 10, but 70th is not very bad. Especially that I classify each image independently (No semi-supervised learning).
So, I wanted to share my system details maybe they are useful to you :-)
Any questions are welcomed.</p>
<p><img src="https://s14.postimg.io/ama3i7q5t/image.png" alt="enter image description here" title>
<strong>References:</strong>
1 - Rajen Bhatt, Abhinav Dhall, 'Skin Segmentation Dataset', UCI Machine Learning Repository.
2 - X. Zhu, D. Ramanan. "Face detection, pose estimation and landmark localization in the wild" Computer Vision and Pattern Recognition (CVPR) Providence, Rhode Island, June 2012.
3 - Simonyan, Karen, and Andrew Zisserman. "Very deep convolutional networks for large-scale image recognition." arXiv preprint arXiv:1409.1556 (2014).</p>
<p><strong>Notes:</strong></p>
<ul>
<li>Face detection for such problem is hard, the well-known Haar Cascades surely fail.</li>
<li>The example in the image above is easy, but normally driver face is seen from side.</li>
<li>I've tried lots of strategies for upper body Human Pose Estimation (Calvin) and scene/driver segmentation (this and this) but didn't achieve good results.</li>
<li>I used data augmentation. Because the training data size is not large.</li>
</ul>
<p>I trained a Bayesian classifier on the UCI Skin Segmentation Dataset: <a href="https://archive.ics.uci.edu/ml/datasets/Skin+Segmentation" target="_blank">https://archive.ics.uci.edu/ml/datasets/Skin+Segmentation</a> . I earlier wrote a related blog post about that (discussing some other topic but using the same dataset): <a href="https://heraqi.blogspot.com/2016/06/does-more-features-always-mean-higher.html" target="_blank">https://heraqi.blogspot.com/2016/06/does-more-features-always-mean-higher.html</a></p> |
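<p>The skin model can be reproduced in a few lines with a Gaussian Naive Bayes classifier on the UCI dataset's pixel values; this is a simplified sketch rather than the author's exact Bayesian formulation, and it assumes the usual Skin_NonSkin.txt layout (tab-separated B, G, R, label with 1 = skin, 2 = non-skin):</p>
<pre><code>import pandas as pd
from sklearn.naive_bayes import GaussianNB

# UCI "Skin Segmentation" dump: tab-separated B, G, R, label
df = pd.read_csv("Skin_NonSkin.txt", sep="\t", names=["B", "G", "R", "y"])
clf = GaussianNB().fit(df[["B", "G", "R"]].values, (df["y"] == 1).astype(int))

def skin_mask(img_bgr, threshold=0.5):
    # img_bgr: HxWx3 uint8 array; returns a boolean mask of likely skin pixels
    probs = clf.predict_proba(img_bgr.reshape(-1, 3))[:, 1]
    return (probs > threshold).reshape(img_bgr.shape[:2])
</code></pre>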
State Farm Distracted Driver Detection | 10th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: State Farm Distracted Driver Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi HeshamEraqi,</p>
<p>Is your question the same as Ed53's? I answered his question now. Look here.<br>
<a href="https://www.kaggle.com/c/state-farm-distracted-driver-detection/forums/t/22666/10th-place-solution/130027#post130027">https://www.kaggle.com/c/state-farm-distracted-driver-detection/forums/t/22666/10th-place-solution/130027#post130027</a></p>
<p>Hi Ed53,</p>
<p>In this competition, hand-labeling of train data is allowed.<br>
I annotated the driver's body in 520 train images. <br>
<a href="https://github.com/toshi-k/kaggle-distracted-driver-detection/tree/master/dataset/train_mask">https://github.com/toshi-k/kaggle-distracted-driver-detection/tree/master/dataset/train_mask</a></p>
<p>I trained detector model as image segmentation. This is also simple supervised learning.</p>
<p>Hi Everyone,</p>
<p>With the final validation over, I am sharing my solution.<br>
My approach is very simple: detect the driver's body and classify the cropped image.<br>
I don't use semi-supervised learning.</p>
<p>Code: <a href="https://github.com/toshi-k/kaggle-distracted-driver-detection">https://github.com/toshi-k/kaggle-distracted-driver-detection</a></p>
<p><img src="https://raw.githubusercontent.com/toshi-k/kaggle-distracted-driver-detection/master/img/solution.png" alt="conceptual diagram" title=""></p>
<p>[quote=Guanshuo Xu;130048]</p>
<p>Thanks. I feel I have figured out the reason behind. The exact segmentation could miss important part for example cell phones, cups, steer wheels...</p>
<p>[/quote]</p>
<p>Yes, I thought exact segmentation missed some important things.<br>
Heng's method may help us to understand this problem.<br>
<a href="https://www.kaggle.com/c/state-farm-distracted-driver-detection/forums/t/21994/heat-map-of-cnn-output">https://www.kaggle.com/c/state-farm-distracted-driver-detection/forums/t/21994/heat-map-of-cnn-output</a></p>
<p>Hi Wind Bear,</p>
<p>My hardware setting is here.<br></p>
<pre><code>CPU: Intel Core [email protected] GHz
Memory: 32GB
GPU: NVIDIA GTX 980 (device memory 4GB)
</code></pre>
<p>It takes about 1 week to train all models.</p>
<p>Hi Guanshuo Xu,</p>
<p>My detector model is a segmentation model. We can easily get a bounding box from the segmentation result.</p>
<p>I tried both approaches: <em>(1) cropping by human body</em> and <em>(2) cropping by bounding box</em>.<br>
I found that (2) is better experimentally.</p>
<p>Hi Heng CherKeng,</p>
<p>I didn't have such an idea. I don't know if your idea works.<br>
My approach is inspired by <a href="https://www.kaggle.com/c/noaa-right-whale-recognition/details/creating-a-face-detector-for-whales">Right Whale Recognition</a>.<br>
In this competition, everyone used 2 stage (or 3 stage) method.</p>
<p>Hi frankman,</p>
<p>Finally, I use 20 models for ensembling. Look at line 17 in this file.<br>
<a href="https://github.com/toshi-k/kaggle-distracted-driver-detection/blob/master/source/03_averaging.r">https://github.com/toshi-k/kaggle-distracted-driver-detection/blob/master/source/03_averaging.r</a></p> |
State Farm Distracted Driver Detection | A brief summary | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: State Farm Distracted Driver Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Firstly, I would like to thank Kaggle and State Farm to host such an interesting competition. I would also like to thank ZF Turbo, Dong Jiao for their starting code and many others who have kindly share their valuable ideas and experiences on the forum. I have learned a lot during this competition.</p>
<p>Also, sorry for the late summary, I was quite occupied in the past two weeks.</p>
<p>I understand that the data set in this competition displays two unique characteristics: </p>
<ol>
<li>The test set is much bigger than the train set, so it is easy to overfit.</li>
<li>The images are correlated as they are produced as frames from a video clip. Images taken from a single video segment are put into the same category, although if judged individually, a few of them might be labelled differently.</li>
</ol>
<p>My solution focuses mainly on these two characteristics.</p>
<ul>
<li><strong>Models</strong></li>
</ul>
<p>I used two models in this competition: pre-trained Vgg16, and Vgg16_3, a modified version of Vgg16 I devised to deal with overfit.</p>
<p>The best LB score I got with a single Vgg16 model is around 0.3, as a result of overfitting. It seemed to me that the model memorized some other features rather than the desired ones. To help the model concentrate on more essential features, I manually selected two regions of interest for each train image. One is the head region, which I believe is the most informative part of the whole image. The other is the bottom-right quarter, where the appearance of a driver’s right hand crossing over is highly correlated with c5.</p>
<p>The original train image, together with the two selected regions of interest, are the three inputs of Vgg16_3. </p>
<p>The original Vgg16 network, except the last output layer, is shared across the three inputs. The three outputs of the second-to-last layer (the second fully connected layer), which correspond to the three inputs, are connected to three separate final output layers. </p>
<p>I used Vgg16_3 to fine-tune saved model files from Vgg16. It improves the single model score by around 0.05. </p>
<ul>
<li><strong>K_Nearest_Neighbor Average</strong></li>
</ul>
<p>I used the output of the last max-pooling layer (pool5) of Vgg16 with pretrained weights to map each test image to a 512*7*7-dimensional coordinate, and used distances in this space (pool5-feature space) to define similarity among test images. A weighted average of the predictions of each image, together with its 10 nearest neighbors in pool5-feature space, would generally improve a single-model score by 0.10-0.12. </p>
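<p>A sketch of the K-nearest-neighbor averaging step, assuming the pool5 features have already been extracted and flattened into <code>feats</code> (one 512*7*7 vector per test image) and <code>preds</code> holds the per-image class probabilities as a numpy array; the 50/50 blend between an image's own prediction and its neighbors' mean is an assumption, not necessarily the author's exact weighting:</p>
<pre><code>import numpy as np
from sklearn.neighbors import NearestNeighbors

nn = NearestNeighbors(n_neighbors=11, metric="euclidean").fit(feats)
dist, idx = nn.kneighbors(feats)                 # first column is the image itself

neighbor_avg = preds[idx[:, 1:]].mean(axis=1)    # mean prediction of the 10 nearest images
smoothed = 0.5 * preds + 0.5 * neighbor_avg      # blend with the image's own prediction
smoothed /= smoothed.sum(axis=1, keepdims=True)  # keep rows as valid class probabilities
</code></pre>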
<ul>
<li><strong>Ensemble average</strong></li>
</ul>
<p>Ensemble average was applied for each category separately. <br>
From the confusion matrices of the CV set, I noticed that some trained models did a decent job at distinguishing a certain category from the rest, while being quite confused among the remaining categories. So for each category, models among the top ten percent of cross-entropy loss associated with that specific category were chosen and used for calculating the ensemble average. This category-specific average significantly outperforms a simple arithmetic/geometric average in my experience.</p>
<ul>
<li><strong>Segment average</strong></li>
</ul>
<p>The last step, segment average, like the K-nearest-neighbor average, focuses on the second characteristic of the competition mentioned above. I divide the test images into small groups according to their similarities in the 512*7*7-dimensional pool5-feature space. If images in one group display consistent and also confident predictions, I treat this as a strong positive signal and renormalize all the images in that group to share the same predictions. </p>
<p>As I read on the forum, some of you have achieved a single-model LB score below 0.15, which is admirable. I believe you can reduce that by around 40% by using K-nearest-neighbor average alone, single model. I hope all of you enjoyed this competition as much as I did, thanks again. </p>
<p>@Ellen Gao Jian, Thanks
@SecondPlan, Euclidean distance</p>
Statoil/C-CORE Iceberg Classifier Challenge | 1st Place Solution overview | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Statoil/C-CORE Iceberg Classifier Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We kept the original training on the 100+ models, but then we do group2 only because it's a very distinct set vs group1. Group1 is icebergs only and defined by unique inc_angle pattern. Group2 is much more diverse and contains all the ships. Similar techniques have been used in other competitions where subsets of the data with unique characteristics are trained as a separate groups to improve signal to noise.</p>
<p>We’d like to thank Kaggle and the competition sponsors for hosting this exciting competition. The manageable size of the dataset as well as the technical challenge of working with satellite imagery kept us motivated to work on this problem.
I’d like to thank my partner Weimin for being such a great collaborator on this problem. This was a great example of two people coming together with diverse solutions and combining for a gain we wouldn’t have achieved individually.
On to the solution...</p>
<h2><strong>Key findings/learnings:</strong></h2>
<ul>
<li><p>Following first principles of visualizing and generating an understanding of the data was key. Visualizing inc_angle was the key finding that led us down our solution path.</p></li>
<li><p>inc_angle pattern – inc_angles with identical values (out to 4 decimal places) had the same label 97% of the time.</p></li>
<li>There were two distinct groups that, once separated, can be treated separately with individual models. inc_angle clusters show a unique alternating pattern of pure icebergs 100% of the time in train. Let’s call this group1. The non-group1 inc_angles have a pattern of dense icebergs in the center and more ships just outside the center of the distributions. Call these group2.</li>
</ul>
<p>This plot below is just a slice of a simple distribution plot of the individual train data by inc_angle. The plot on the left is the raw training data, the plot on the right is also the training data but from one of our CV’s. Notice the alternating patterns of natural groups. These patterns propagate through the entire distribution of inc_angle and can be identified by basic clustering algorithm. Ward’s, KNN, and density based clustering were among the types we used.</p>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/273702/8369/inc_angle%20plot_small.jpg" alt="group1group2"></p>
<p>• Log_loss is a very unforgiving metric. When using clustering and predicting on different groups it’s important to understand the group probability in addition to the individual point probability and threshold the risk appropriately. We stayed away from extreme values. We also didn’t trust the leaderboard too much given the sensitivity of log_loss and the fact only 20% of a relatively small dataset being reported on the public LB. We relied on CV as our main form of feedback.</p>
<h2><strong>Solution overview:</strong></h2>
<ol>
<li><p>We started with a CNN pipeline of 100+ diverse models which included architectures such as customized CNNs, VGG, mini-GoogLeNet, several different Densenet variations, and others. Some of these included inc_angle after the fully connected layers and others didn’t.</p></li>
<li><p>We then ensembled and stacked the results and ran the predictions through a post-processing pipeline to transfer the signal from like inc_angles. Since not all like inc_angles had the same label, we applied a max threshold by which the prediction could move from the original prediction, and if this threshold was exceeded we kept the original prediction (a sketch of this step follows the list). From there we ran a clustering algorithm to identify the group1 and group2 inc_angle distributions. Since <em>100%</em> of group1 labels from train were icebergs, we applied a threshold of this to test. There were 897 of these points in the test set and it appears from private LB score variants we have that they were indeed all icebergs in test.</p></li>
<li><p>We then re-trained the 100+ CNN models on those <em>group2 training samples only</em>. This appears to be the biggest difference in our approach and other top solutions so far and showed a significant improvement in ship detection as it gave us better signal to noise. This group was then ensembled/stacked and concatenated with the original clustering (trained on all data). We then fed this result into a post processing pipeline which ran various clustering algorithms on the result to identify dense regions of like predictions. Each clustering algorithm used its own unique threshold/clipping value determined by CV. The group1 and group2 results were then combined for the final prediction.</p></li>
</ol>
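<p>A hedged sketch of the like-inc_angle post-processing in step 2: group train and test (assumed pandas DataFrames) by inc_angle rounded to 4 decimals, move a test prediction toward the train-label signal of its group, and keep the original CNN prediction whenever the required move exceeds a cap; the cap value and column names are illustrative, not the team's exact numbers:</p>
<pre><code>import numpy as np

# train: DataFrame [inc_angle, is_iceberg]; test: DataFrame [inc_angle, pred]
train["angle_key"] = train["inc_angle"].round(4)
test["angle_key"] = test["inc_angle"].round(4)

label_by_angle = train.groupby("angle_key")["is_iceberg"].mean()
target = test["angle_key"].map(label_by_angle)   # NaN when no train image shares the angle

max_move = 0.4                                   # illustrative threshold
shift = target - test["pred"]
too_far = shift.abs() > max_move                 # exceeded: trust the CNN prediction instead
test["pred_pp"] = np.where(target.notna() & ~too_far, target, test["pred"])
</code></pre>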
<p>So in summary, 200+ CNN models, a handful of clustering algorithms, a few different thresholds, a heavy reliance on inc_angle, don’t push log_loss thresholds to the extreme, trust your local CV, and you have a formula for a winning solution. Cheers!</p> |
Statoil/C-CORE Iceberg Classifier Challenge | 2nd Place Solution Overview | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Statoil/C-CORE Iceberg Classifier Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I used <a href="http://xgboost.readthedocs.io/en/latest/">XGB</a> with the following settings:</p>
<p><code>{'lambda': 1., 'nthread': 4, 'booster': 'gbtree', 'silent': 1, 'eval_metric': 'logloss', 'objective': 'binary:logistic'}</code></p>
<ul>
<li>min_child_weight in [3, 5, 8, 10, 20]</li>
<li>eta in [0.003, 0.005, 0.01, 0.05]</li>
<li>colsample_bytree in [0.3, 0.4, 0.5]</li>
<li>max_depth in [3, 4, 5]</li>
<li>subsample in [0.85, 0.9, 0.95]</li>
</ul>
<p>Xgboost gives you probability so no need for additional voting scheme. </p>
<p>Here is an example <a href="https://www.kaggle.com/gaborfodor/fix-cv-folds-based-on-inc-angle/">Kernel</a>.
The main idea is that same angles should belong to the same fold.</p>
<p>I generate the folds once at the beginning for each id and save them in a file. Whenever I use a new machine (e.g. an EC2 p2.xlarge) I just copy the original fold split. </p>
<p>I would like to thank to everyone who contributed in kernels or discussions during the competition.</p>
<p>I found the following kernels and threads particularly helpful.</p>
<ul>
<li><a href="https://www.kaggle.com/jirivrany/my-best-single-model-simple-cnn-lb-0-1541">https://www.kaggle.com/jirivrany/my-best-single-model-simple-cnn-lb-0-1541</a></li>
<li><a href="https://www.kaggle.com/cbryant/keras-cnn-statoil-iceberg-lb-0-1995-now-0-1516">https://www.kaggle.com/cbryant/keras-cnn-statoil-iceberg-lb-0-1995-now-0-1516</a></li>
<li><a href="https://www.kaggle.com/brassmonkey381/viewing-leak-and-machine-images">https://www.kaggle.com/brassmonkey381/viewing-leak-and-machine-images</a></li>
<li><a href="https://www.kaggle.com/brassmonkey381/exploring-leaky-incidence-angle">https://www.kaggle.com/brassmonkey381/exploring-leaky-incidence-angle</a></li>
<li><a href="https://www.kaggle.com/devm2024/keras-model-for-beginners-0-210-on-lb-eda-r-d">https://www.kaggle.com/devm2024/keras-model-for-beginners-0-210-on-lb-eda-r-d</a></li>
<li><p><a href="https://www.kaggle.com/mpekalski/fft-per-band-for-normal-normalized-signal">https://www.kaggle.com/mpekalski/fft-per-band-for-normal-normalized-signal</a></p></li>
<li><p><a href="https://www.kaggle.com/c/statoil-iceberg-classifier-challenge/discussion/47062">Sharing my experience (some tips)</a></p></li>
<li><a href="https://www.kaggle.com/c/statoil-iceberg-classifier-challenge/discussion/48102">How I get 0.0893</a></li>
<li><a href="https://www.kaggle.com/c/statoil-iceberg-classifier-challenge/discussion/45265">Best single model you have?</a></li>
</ul>
<h1>Summary</h1>
<p>Thanks to Statoil, C-CORE and Kaggle for organizing this interesting and challenging competition!
Congrats to the other gold medal winners, especially to the teams who got below 0.09 logloss. It was a close finish and without some luck I could not have finished in the top 3.</p>
<p><strong>TL;DR</strong> Hundreds of XGBs trained on hundreds of CNNs with tricky incident angle features could lead you to the top. The brute force approach was useful to get stable results in terms of local/public/private validation scores.</p>
<h1>Layer1 - CNN</h1>
<p>Hundreds of CNNs with different random parameters were trained with a fixed 5-fold cross validation. The folds were based on incident angle in order to avoid leakage in my validation results. Inc_angle was dropped; the networks were trained on the dual-polarization images. I kept the training images with missing incident angles.
I used three main model families:</p>
<ul>
<li>Most of the time I used <strong>custom</strong> CNNs trained from scratch on 2 channel input.</li>
<li>I tried a finetuned <strong>VGG16</strong> too, by adding a third channel as (band1 + band2)/2.</li>
<li>I was able to double my training set with <strong>pseudo</strong> labeling although it did not help much.</li>
</ul>
<p>Due to the relatively small train set I preferred to train an army of weaker but different models instead of trying to optimize my best single model. For augmentation I used flip, flop, zoom, rotation and shift. Random parameter search was tried with the following setup (a minimal sketch of the sampling loop follows the list).</p>
<ul>
<li>preprocessing in ['mean_std', 'minmax', 'raw']</li>
<li>cnn_type in ['simple', 'double']</li>
<li>conv_layers in [[32, 48, 64], [32, 64, 128], [64, 128, 256], [64, 128, 128], [32, 64, 128, 256], [64, 100, 128, 150]]</li>
<li>dense_layers in [[64, 32], [128, 64], [256, 128], [512, 256], [128, 64, 32]]</li>
<li>conv_dropout in [False, 0.1, 0.2]</li>
<li>dense_dropout in [False, 0.25, 0.5]</li>
<li>learning_rate in [0.0004, 0.0008, 0.001, 0.002, 0.004, 0.005]</li>
<li>optim_type in ['Adam', 'RMSProp', 'SGD']</li>
<li>flip0 in [0, 0.25, 0.5]</li>
<li>flip1 in [0, 0.25, 0.5]</li>
<li>rotation in [(0, 0, 0), (0.25, 0, 20), (0.5, 0, 20)]</li>
<li>zoom in [(0, 0, 0), (0.25, 0.8, 1.2), (0.5, 0.8, 1.2)]</li>
<li>shift in [(0, 0, 0), (0.25, 0, 0.1), (0.5, 0, 0.1)]</li>
</ul>
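<p>A minimal sketch of driving such a random search, sampling one configuration per run from the lists above (the dictionary only mirrors a few of those options, and <code>train_cnn</code> is an assumed helper, not shown):</p>
<pre><code>import random

search_space = {
    "preprocessing": ["mean_std", "minmax", "raw"],
    "cnn_type": ["simple", "double"],
    "conv_layers": [[32, 48, 64], [32, 64, 128], [64, 128, 256]],
    "dense_layers": [[64, 32], [128, 64], [256, 128]],
    "learning_rate": [0.0004, 0.0008, 0.001, 0.002, 0.004, 0.005],
    "optim_type": ["Adam", "RMSProp", "SGD"],
}

def sample_config(space):
    return {k: random.choice(v) for k, v in space.items()}

for run in range(200):
    cfg = sample_config(search_space)
    # train_cnn(cfg)  # assumed helper: builds and cross-validates one CNN
    print(run, cfg)
</code></pre>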
<p>TTA was planned but eventually not used.
The training had some stability issues (runs occasionally stopped with logloss higher than 0.6), so I saved each model's oof prediction for each fold separately.</p>
<h1>Layer2 - XGBoost</h1>
<p>Dropped observations with missing incident angle.
The best N = 5, 10, 15, 20 models for each fold were selected from the three model families (custom, VGG16, pseudo).
Each category was tested on the leaderboard; they had scores between 0.165 - 0.173 on both public and private LB.</p>
<p>Additional features were extracted for each</p>
<p>Observation</p>
<ul>
<li>L1 prediction stats (min, mean, geomean, median, max, std)</li>
<li>Raw inc_angle</li>
<li>Raw stats for both HH and HV (min, mean, median, max, std, quantiles)</li>
</ul>
<p>Distinct incident angle groups</p>
<ul>
<li>Count, train count, ship count, iceberg rate</li>
<li>L1 prediction stats (min, mean, geomean, median, max)</li>
</ul>
<p>Closest K incident angle groups</p>
<ul>
<li>Distance</li>
<li>Mean Prediction</li>
</ul>
<p>Incident angle bins</p>
<ul>
<li>Mean Prediction</li>
<li>Count, train count, iceberg count</li>
</ul>
<p>XGBs were trained with random parameter search on the same fixed CV splits as in the previous layer.</p>
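<p>A hedged sketch of a few of the distinct-incident-angle group features listed above, computed with pandas on train and test rows together (the frame <code>df</code> and its column names are illustrative assumptions):</p>
<pre><code>import pandas as pd

# df: one row per image (train + test) with columns
# [id, inc_angle, l1_pred, is_iceberg]  where is_iceberg is NaN for test rows
grp = df.groupby("inc_angle")

feats = pd.DataFrame({
    "angle_count": grp["id"].transform("count"),
    "angle_train_count": grp["is_iceberg"].transform("count"),   # non-NaN rows = train
    "angle_iceberg_rate": grp["is_iceberg"].transform("mean"),
    "angle_pred_mean": grp["l1_pred"].transform("mean"),
    "angle_pred_median": grp["l1_pred"].transform("median"),
    "angle_pred_max": grp["l1_pred"].transform("max"),
}, index=df.index)
</code></pre>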
<h1>Layer3 - Model average</h1>
<p>9 days before the end I jumped to first place with my first submitted L2 model. I spent a lot of time during the last week trying to improve the submission. I tried bigger CNNs, even trickier angle-based features, more XGBs... My training CV went down from 0.116 to below 0.1, but I could not improve my public score and others started to leapfrog me.</p>
<p>On the final day I retrained thousand XGBs with more conservative setting to avoid CV overfitting as much as possible.</p>
<ul>
<li>round all features to the 3rd decimal</li>
<li>min_child_weight >= 3</li>
<li>colsample <= 0.5</li>
<li>max_depth in [3, 4, 5]</li>
<li>switched off early stopping</li>
</ul>
<h3>Final submission</h3>
<ul>
<li>95% model average of the 100 best xgb models.</li>
<li>5% model average of the 100 best xgb model without using inc_angle</li>
</ul>
<p>Quite a few interesting postprocessing techniques were suggested in kernels and discussions. I experimented with clipping, geometric mean, and rounding for confident predictions. They did not give a stable improvement in my local validation, so I rejected any postprocessing. To be on the safe side I wanted to clip my final submission anyway, but model averaging already solved the problem: all my predictions were in (0.005, 0.995).
I selected my best public LB submission and my latest submission; the best public one would have been 8th on the private leaderboard, while the latest gave me 2nd place.</p>
<h1>Env</h1>
<p>I used keras with tensorflow for the cnns. I do not have my own GPU, which is actually quite painful in computer vision challenges. I used to rent spot p2.xlarge EC2 machines with a K80. Unfortunately they are not exactly cheap (0.37$/hour) and they are actually quite slow compared to the best cards. In general I prefer more data; however, here I enjoyed the quick training times and fast iteration a lot (a single model could be trained in minutes instead of hours or even days!). At the beginning I used my custom AMI with manually installed libs. It could be painful to keep everything up to date, so I was happy that free (at least free in terms of software) maintained <a href="https://aws.amazon.com/machine-learning/amis/">Deep Learning AMI</a>s are available now.</p>
<p>During the competition I used 300 hours of GPU time for experiments.
A single CNN takes 10 minutes on average to train. My actual ensemble does not use more than 100 CNNs, so I think the whole pipeline could be retrained in a day.</p>
Statoil/C-CORE Iceberg Classifier Challenge | 3rd-Place Solution Overview | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Statoil/C-CORE Iceberg Classifier Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Actually, it was one of nearest neighbors features in my spatial model. I calculated distance between current image and all other images with the same 'inc_angle'. I used a few principal components of PCA of entire image set. Also I added the label of nearest neighbor in term of PCA distance if any. To evaluate how good is this distance, I compared statistics on the shortest distance in groups of objects with the same 'inc_angle' and statistics on distance between objects that are close to each other, but not in the same group. The difference was about 1.15 sigma.</p>
<p>First of all, I want to thank the organizers and all those who contributed to the kernels and forum discussions. Congrats to the winners!</p>
<p>Here is a brief summary of my final solution:</p>
<ol>
<li>I trained 7 ANNs on images. To prevent overfitting and get smooth predictions, I trained all ANNs in a stacking way using 5 folds and 30 repeats. The difference between ANNs was in architecture, data normalization/preprocessing, and data augmentation.</li>
<li>I mixed ANNs predictions using xgboost. It was also in a stacking way using 7 folds and 1000 repeats.</li>
<li>I also used a spatial model. As described in <a href="https://www.kaggle.com/brassmonkey381/viewing-leak-and-machine-images">https://www.kaggle.com/brassmonkey381/viewing-leak-and-machine-images</a>, 'inc_angle' contains valuable information: objects with the same 'inc_angle' have the same label in most cases. I performed a deeper analysis of 'inc_angle' and found a lot of other interesting things. In many cases objects have the same label as their neighborhood in terms of 'inc_angle'. An object with no neighbors within some radius is a ship in ~90% of cases, maybe because icebergs move as groups of debris. I extracted nearest-neighbor features from 'inc_angle' and made predictions on these features using xgboost in a stacking way using 5 folds and 1000 repeats.</li>
<li>Thus there were two predictions: the ANNs mix based on image data and the spatial model based on the 'inc_angle' neighborhood; these predictions should be independent. Accordingly, I mixed the ANNs-mix predictions and the spatial-model predictions using the formula for independent probabilities: p = p1*p2 / ( p1*p2 + (1-p1)*(1-p2) ) (a short sketch of this formula follows the list). This mix has scores: 0.0921 public, 0.0871 private.</li>
<li>I retrained the 5 best ANNs using pseudo-labeling, because I had no time to retrain the last 2 ANNs(</li>
<li>I mixed ANNs pseudo-labeling predictions using xgboost. It was also in a stacking way using 7 folds and 1000 repeats.</li>
<li>I retrained spatial model using pseudo-labeling.</li>
<li>I mixed ANNs pseudo-labeling mix predictions and spatial model pseudo-labeling predictions using formula for independent probabilities, and got the final submit. There were some minor variants of final mixing (ANNs mixing, clipping, spatial model parameters), the best of my mixes got 0.0853 in private, but I didn't select it for final scoring(</li>
</ol>
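<p>A small sketch of the independent-probability mixing from step 4, with clipping added for numerical stability (the eps value is an assumption):</p>
<pre><code>import numpy as np

def combine_independent(p1, p2, eps=1e-6):
    # p = p1*p2 / (p1*p2 + (1-p1)*(1-p2)), valid when the two predictions are independent
    p1 = np.clip(p1, eps, 1 - eps)
    p2 = np.clip(p2, eps, 1 - eps)
    return (p1 * p2) / (p1 * p2 + (1 - p1) * (1 - p2))

print(combine_independent(0.7, 0.5))   # a neutral second opinion leaves the prediction at 0.7
print(combine_independent(0.7, 0.7))   # two agreeing opinions push it up to ~0.84
</code></pre>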
<p>A more detailed description will be later.</p>
<p>They were predictions of artificial neural networks (ANNs). The predictions were smooth because, in fact, each was an average of 5*30=150 model predictions. These models had the same network architectures and training schemes, but different random initializations and training-set subsamples, as happens in stacking. I used 7 different architectures on the first level, hence I had 7 smooth predictions.</p>
<p>I inferred this formula from probability theory. Maybe it has an official name, I don't know</p>
<p>Here is a good illustration of stacking from Faron:
<a href="https://www.kaggle.com/getting-started/18153">https://www.kaggle.com/getting-started/18153</a></p>
<p>"1000 repeats" means stacking repeated 1000 times with different fold splits</p>
<p>Of course, PCA is not the best choice, but I had no time for more complex models.
Scores for "617f88fc":
ANNs mix: 0.0037
spatial model: 0.84
Final: 0.098 (there was clipping with eps=0.02 for the ANNs mix and the spatial model)</p>
<p>Here is simple intuitive explanation.</p>
<p>You are going to visit a resort. You are interested in beach and food, and asking two friends for their opinions. Obviously, beach and food are independent.</p>
<p>The formula in action:</p>
<p>Friend1: food is good (p1=0.7)
Friend2: beach is good (p2=0.7)
This is a very good place for your vacation (p=0.84)</p>
<p>Friend1: food is good (p1=0.7)
Friend2: beach is ok (p2=0.5)
This is a good place for your vacation (p=0.7)</p>
<p>Friend1: food is good (p1=0.7)
Friend2: beach is bad (p2=0.3)
This is neither good nor bad place for your vacation (p=0.5)</p>
<p>Friend1: food is ok (p1=0.5)
Friend2: beach is ok (p2=0.5)
This is neither good nor bad place for your vacation (p=0.5)</p>
<p>by adjusting the random seed only</p> |
Statoil/C-CORE Iceberg Classifier Challenge | 4th Place Approach | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Statoil/C-CORE Iceberg Classifier Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>It's always better to be on the safe side when there are so few training samples. </p>
<p>Apart of that (0.001, 0.999) was giving a small score boost on local CV.</p>
<p>Step 4) can be interpreted as: "Your inner circle defines who you are. If everything around you is an iceberg, you are most likely an iceberg too", the same applies to the boats. Step 3) captures statistics only about objects with exactly the same inc_angle, step 4) complements that and looks at surroundings.</p>
<p>Local train CV scores (this are not public leaderboard scores) were like this:</p>
<ul>
<li>step 1) NN scores range: 0.1886 - 0.1962</li>
<li>step 2) mean prediction of the best 3 NN models: 0.1838</li>
<li>step 5) LightGBM score: 0.0920. Yes, data leak related to incidence angle is really important.</li>
</ul>
<p>Our train CV scores were always a bit higher than PLB score.</p>
<p>Thanks to everyone for the competition! Our 'best submission' gave a score of '0.0854' on the private leaderboard, but wasn't selected by us as its score on the public leaderboard was '0.0951'. From the final results one can see that there was a minor shake-up among the people who stayed at the top, so my guess is it was really hard to be sure which submission was the best, either due to the size of the training data or outliers in the test set.</p>
<p>Our final solution looks as follows:</p>
<ol>
<li>Train 5 NNs on image data (excluding incidence angle). Each model is using CV5 with the same split. NN tuning was done in the following areas: 1) NN architecture; 2) best augmentation parameters; 3) best CV5 seed that gives similar scores on all folds.</li>
<li>Take subset of NNs, with mean prediction that gives the best CV score on train data.</li>
<li>Group mean predictions from the previous step by incidence angle (train and test data together). For each group calculate the following features: mean prediction, median prediction, total number of samples in the group. During this calculation the original train labels can be used to boost precision even further, as long as they don't propagate to a sample's own prediction.</li>
<li>Run KNN regressor on predictions from step 2) and incidence angle.</li>
<li>Train LightGBM model on the features from the steps 2), 3) and 4) (only 5 features in total).</li>
<li>Fine tune LightGBM using CV on train data.</li>
<li>Clip final probabilities towards interval (0.001, 0.999), mainly as a precaution.</li>
</ol>
<p>Additional notes:</p>
<ul>
<li>we ended up using a simple NN architecture from this paper: <a href="https://arxiv.org/pdf/1612.00983.pdf">https://arxiv.org/pdf/1612.00983.pdf</a></li>
<li>It's important to skip incidence angle while training NNs in step 1), so that they can prioritize image features.</li>
<li>It's important to calculate group features in step 3) on both train + test data, to avoid overfitting towards train data distribution.</li>
<li>it's important to do a proper CV at every step any parameters are tuned in the algorithm.</li>
<li>the fact that images with similar incident angle have similar object class on them is exploited by LightGBM in step 6), based on features in step 3).</li>
</ul>
<p>Looking forward to see other solutions!</p>
<p>Code: <a href="https://github.com/asydorchuk/kaggle/blob/master/statoil/README.md">https://github.com/asydorchuk/kaggle/blob/master/statoil/README.md</a></p>
<p>Please check my reply to Kevin Lu comment.</p>
<p>Step 1) and 2) don't use incidence angle at all. Step 3) uses incidence angle to group together predictions and calculate statistics based on those predictions for objects in the same group. Incidence angle is not used as a feature by itself, just as a groupby key. Please let me know if I should explain it better.</p>
<p>Step 3 explanation. Let's say you have a group of 3 elements with the same incidence angle:</p>
<ol>
<li>Iceberg from the train. With NN prediction after CV (0.6) and true label (1.0)</li>
<li>Iceberg from the train. With NN prediction after CV (0.8) and true label (1.0)</li>
<li>Unknown object from the test. With NN prediction (0.4), true label unknown.</li>
</ol>
<p>Let's calculate mean of this group. If you use only NN predictions the mean will be the same for all objects: (0.6 + 0.8 + 0.4) / 3 = 0.6. However, if you use true labels of objects the mean will differ on case by case basis:</p>
<ol>
<li>Object 1 mean = (0.6 + 1.0 + 0.4) / 3 = 0.66. You can take true label of the second sample, but not true label of the first sample.</li>
<li>Object 2 mean = (1.0 + 0.8 + 0.4) / 3 = 0.73. You can take true label of the first sample, but not the second one.</li>
<li>Object 3 mean = (1.0 + 1.0 + 0.4) / 3 = 0.8. You can take true labels of the 1st and 2nd samples.</li>
</ol>
<p>So you can take the true label of an object in a group as long as you are not calculating statistics for that object itself; otherwise you introduce significant label leak.</p>
<p>And yes, 3 out of the 5 features used in LightGBM are exactly the mean, median and number of samples calculated in the groupby operation.</p>
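<p>The substitution rule above is easy to express as a small helper; the sketch below reproduces the three-object example (function and variable names are illustrative):</p>
<pre><code>import numpy as np

def group_mean_with_labels(preds, labels):
    """Per-object group mean: other objects contribute their true label when known,
    while the object itself always contributes its own prediction (no target leak)."""
    preds = np.asarray(preds, dtype=float)
    labels = np.asarray(labels, dtype=float)
    filled = np.where(np.isnan(labels), preds, labels)   # prediction where the label is unknown
    out = []
    for i in range(len(preds)):
        vals = filled.copy()
        vals[i] = preds[i]                               # never use this object's own label
        out.append(vals.mean())
    return np.array(out)

print(group_mean_with_labels([0.6, 0.8, 0.4], [1.0, 1.0, np.nan]))
# -> [0.667, 0.733, 0.8]
</code></pre>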
<p>Indeed NN is not that good at exploiting data leak properly. I spent a lot of time in the first half of the competition thinking about the best way to use it. Over the timeline of the competition our approach shifted from using it inside the neural network -> applying custom heuristics -> feature engineering + LightGBM.</p>
<p>Another advantage of exploiting incidence angle data leak after neural network is that you can build features that include test set predictions as well.</p>
<p>Will do, it just needs to be cleaned up a little.</p>
<p>As Kirill said, we tried both approaches. In the case of training a single model on all train data, we tried to use the test data and test predictions (as targets) to understand when to stop the training. It's a reasonable approach, since the test predictions are augmented by the incidence angle leak.</p>
<p>There were no bins used, just angle value itself.</p>
<p>Correct, the subset of 5 NNs was used. It happened that the 3 NNs with the best scores gave the best result when averaged on train CV.</p>
<p>Step 4) had only averaged predictions: one for each sample. The distance space was defined using inc_angle (1D). Same trick as in step 3) was used: apply true labels of the train data, when it doesn't introduce target label leak.</p>
<p>The code is shared, please check the link in the end of the article.</p>
<p>Please check the link in the end of the post.</p> |
Statoil/C-CORE Iceberg Classifier Challenge | 6th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Statoil/C-CORE Iceberg Classifier Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you everybody:)
Published my leaky features here:
<a href="https://www.kaggle.com/azakhtyamov/simple-logreg-with-only-3-leaky-features">https://www.kaggle.com/azakhtyamov/simple-logreg-with-only-3-leaky-features</a> </p>
<p>Hello, dear Kaggle community! Here I will share my approach to this competition. Without further ado let's begin.</p>
<h2>Leak or feature?</h2>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/273251/8367/1.png" alt="Image">
As it was stated during the discussions, some additional information was hidden in the angle value, which was the key in this competition. To understand this, plot the distribution of classes with respect to angles. Red color stands for the ships, while blue for the icebergs. On this plot, periodic blue lines are easy to distinguish, and some ship clusters are also noticeable. Based on that I generated 3 features: the number of samples for this particular angle (including the test set), the mean target value for this particular angle (excluding the current observation to prevent leakage), and the mean target value for some small neighborhood of the particular angle (R = 0.025 was the best). Logistic regression trained on these three values alone was able to score nearly 0.2 on the public leaderboard :)</p>
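<p>A hedged reconstruction of these three features with pandas (the kernel linked earlier in this write-up contains the author's actual code; the version below is a simplified sketch and assumes a combined frame of train and test rows):</p>
<pre><code>import numpy as np
import pandas as pd

# full: train + test rows with columns [inc_angle, is_iceberg], is_iceberg NaN for test
cnt = full.groupby("inc_angle")["inc_angle"].transform("count")

grp_sum = full.groupby("inc_angle")["is_iceberg"].transform("sum")
grp_n = full.groupby("inc_angle")["is_iceberg"].transform("count")
own = full["is_iceberg"].fillna(0)
own_n = full["is_iceberg"].notna().astype(int)
loo_mean = (grp_sum - own) / (grp_n - own_n).replace(0, np.nan)   # mean target, excluding the row itself

R = 0.025
neigh_mean = []
for i, a in enumerate(full["inc_angle"]):
    mask = (full["inc_angle"] - a).abs() <= R
    mask.iloc[i] = False                                          # exclude the current observation
    neigh_mean.append(full.loc[mask, "is_iceberg"].mean())
</code></pre>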
<h2>NN architecture</h2>
<p>During my experiments, I have found that the best way to utilize leaky features is the following: add 3 channels filled with leaky values accordingly (LOL). Models with two inputs were hard to train (you have to train them separately, then merge and train again) and worse on the leaderboard. </p>
<p>My NN architecture: </p>
<pre><code># imports added so the snippet runs as-is (Keras 2 functional API)
from keras.models import Model
from keras.layers import (Input, Conv2D, BatchNormalization, MaxPool2D,
                          AvgPool2D, Dropout, GlobalMaxPooling2D, Dense)

inputs = Input((75,75,5))
conv1 = Conv2D(64, (9, 9), padding='valid', activation='elu')(inputs)
conv1 = BatchNormalization(momentum = 0.99)(conv1)
pool1 = MaxPool2D(pool_size=(2, 2))(conv1)
drop1 = Dropout(0.1)(pool1)
conv2 = Conv2D(64, (5, 5), padding='valid', activation='elu')(drop1)
conv2 = BatchNormalization(momentum = 0.95)(conv2)
pool2 = AvgPool2D(pool_size=(2, 2))(conv2)
drop2 = Dropout(0.1)(pool2)
conv3 = Conv2D(64, (3, 3), padding='valid', activation='elu')(drop2)
conv3 = BatchNormalization(momentum = 0.95)(conv3)
pool3 = MaxPool2D(pool_size=(2, 2))(conv3)
drop3 = Dropout(0.1)(pool3)
conv4 = Conv2D(64, (3, 3), padding='valid', activation='elu')(drop3)
pool4 = AvgPool2D(pool_size=(2, 2))(conv4)
gp = GlobalMaxPooling2D()(pool4)
out = Dense(1, activation = 'sigmoid')(gp)
model = Model(inputs=inputs, outputs=out)   # wrap into a trainable model (~200K parameters)
</code></pre>
<p>Notice that it has only ~200K parameters. I started the competition with this model. Fine-tuning gave no results on the leaderboard, so I decided to spend my submissions checking ideas instead. I trained 200 models with different seeds and took the median of the top 100 models on validation. This blending scored 0.1065 on the public leaderboard and 0.0946 on private.</p>
<h2>Augmentation and pseudo labeling</h2>
<p>During the first model training, I couldn't improve the score using image augmentation. Also, some part of the data was excluded from the training set since the angle value was unknown for those observations. Since the first model was pretty confident for most of the test data, there was also room for pseudo labeling. Thus, I added all images from the test dataset (except generated ones) with predictions less than 0.01 or more than 0.99, and updated my pipeline with rotations (±5, ±10, ±15, ±30, ±45, ±60). The number of pseudo-labeled images was nearly 3000. In my new model with augmented data I was not using any angle information, therefore there was no need to discard the data with missing angles. Again, I trained 200 models and used the median of the top 100 models. Afterwards, I averaged the predictions of these two models where the first model was not confident enough (from 0.1 to 0.9). This model scored 0.0940 on public and 0.0910 on private. After some analysis, I found that the first model tended to make mistakes on the validation set if the mean was equal to 0.5. Then I decided to average the two models only if the mean was approximately 0.5. Sadly, on public it scored only 0.0984, so I decided to submit the previous blending. However, on private it scored 0.0873. </p>
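<p>A short sketch of the pseudo-label selection described above, assuming a frame of first-stage test predictions (the 0.01 / 0.99 thresholds are the ones from the text; frame and column names are illustrative):</p>
<pre><code>import pandas as pd

# test_pred: DataFrame [id, pred] from the first blended model
confident = test_pred[(test_pred["pred"] < 0.01) | (test_pred["pred"] > 0.99)].copy()
confident["is_iceberg"] = (confident["pred"] > 0.99).astype(int)

# append the ~3000 confident test images to the train set for the second round of models
pseudo_train = pd.concat([train[["id", "is_iceberg"]], confident[["id", "is_iceberg"]]],
                         ignore_index=True)
</code></pre>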
<p>Interestingly, all the models based on the leaky features achieved a better score on the private set. </p>
<p>FYI: all the models were trained with 1080 Ti. However, any GPU with 4gb ram should be enough, which is rare in modern CV competitions. </p>
<p>Thank you for your attention.
Also, thanks to Statoil and Kaggle for this competition :)</p>
<p>Thank you! However, I couldn't manage to use the periodicity. It looks like the bottleneck is samples which have no other objects at the same angle. </p>