
This repository contains the models trained as experimental support for the paper "Towards Understanding Why Label Smoothing Degrades Selective Classification and How to Fix It" published at ICLR 2025.

The code is based on TorchUncertainty and available on GitHub.

List of models

This repository contains:

  • for classification on ImageNet with ViTs: 4 ViT-S/16 models trained with label-smoothing coefficients in [0, 0.1, 0.2, 0.3]
  • for classification on ImageNet with ResNets: 4 ResNet-50 models trained with label-smoothing coefficients in [0, 0.1, 0.2, 0.3]
  • for classification on CIFAR-100: 4 DenseNet-BC models trained with label-smoothing coefficients in [0, 0.1, 0.2, 0.3]
  • for segmentation: 4 DeepLabv3+ ResNet-101 models trained with label-smoothing coefficients in [0, 0.1, 0.2, 0.3]
  • for NLP: one LSTM-MLP trained with cross-entropy and one trained with label smoothing (coefficient 0.6)

The remaining models used in the paper (notably those on tabular data) can be trained on CPU in the dedicated notebooks.
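For reference, the label-smoothing coefficient listed above follows the usual convention: the one-hot target is mixed with a uniform distribution over the K classes. The sketch below is a minimal pure-Python illustration of this loss (it mirrors the semantics of PyTorch's `CrossEntropyLoss(label_smoothing=...)`); it is not the training code of this repository.

```python
import math

def smoothed_cross_entropy(logits, target, eps=0.1):
    """Cross-entropy with label smoothing for a single example.

    The target distribution is (1 - eps) * one_hot(target) + eps / K,
    where K is the number of classes.
    """
    # Numerically stable log-softmax.
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    log_probs = [math.log(e / z) for e in exps]
    k = len(logits)
    return -sum(
        ((1.0 - eps) * (1.0 if i == target else 0.0) + eps / k) * lp
        for i, lp in enumerate(log_probs)
    )

# With eps = 0 this reduces to the standard cross-entropy; for a
# confident, correct prediction, increasing eps increases the loss,
# which is what discourages over-confident logits during training.
plain = smoothed_cross_entropy([5.0, 0.0, 0.0], target=0, eps=0.0)
smooth = smoothed_cross_entropy([5.0, 0.0, 0.0], target=0, eps=0.1)
```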

