
ImagenetTail Dataset Card


The ImagenetTail dataset, curated by SystemK AI Team, is a specialized subset of the gmongaras/Imagenet21K dataset. It is designed to serve as a standardized benchmark for evaluating the effectiveness and scaling laws of visual model pre-training, particularly in scenarios involving transfer learning to long-tailed data distributions.


Dataset Details

Dataset Description

ImagenetTail addresses the challenge of evaluating pre-trained models without information leakage by providing a unique partitioning of ImageNet-21K classes. It simulates a realistic workflow where models are pre-trained on a large public dataset and then fine-tuned on a smaller, domain-specific dataset with a different distribution.

The dataset is divided into four distinct subsets:

  • head: Contains the top 5,000 ImageNet-21K classes with the most samples, totaling approximately 7.17 million images. This subset is intended solely for pre-training.

  • tail: Comprises all remaining classes from ImageNet-21K (excluding head classes), with approximately 5.98 million images. This represents a broad, long-tailed data environment.

  • tail-balanced: Derived from the tail subset (over 8,500 classes), where each class is downsampled to 196 samples. This results in approximately 1.7 million images, simulating a scenario with limited labeled samples per class.

  • tail3k (Default): Created by randomly sampling 3,000 classes from the tail subset (after excluding classes with fewer than 100 samples). It preserves the long-tail characteristic while maintaining a computational cost comparable to ImageNet-1K training, making it ideal for efficient experiments.

This orthogonal setup ensures that pre-training on head and fine-tuning/evaluation on tail, tail-balanced, or tail3k avoids data overlap, allowing for a more accurate assessment of a model's generalization capabilities.

  • Curated by: SystemK AI Team

  • License: ImageNet License

Uses

Direct Use

The primary use case for ImagenetTail is to study transfer learning. The recommended workflow is:

  1. Pre-training: Train a model on the head subset.

  2. Fine-tuning: Fine-tune the pre-trained model on the tail, tail-balanced, or tail3k subset.

  3. Evaluation: Evaluate the model's performance using the validation and test splits of tail-balanced and tail3k.

This ensures a fair evaluation of a model's performance on new domains and data distributions, as the pre-training and fine-tuning/evaluation classes are disjoint.

Out-of-Scope Use

  • Commercial Use: This dataset is strictly for non-commercial research and educational purposes.

  • Generative AI Training: The dataset gmongaras/Imagenet21K has had approximately 2,000 categories removed without explanation. This undocumented removal likely compromises data diversity, rendering the dataset unsuitable for training generative AI models.


Dataset Structure

All ImagenetTail subsets share a consistent data structure with three columns:

  • image: A PIL.Image.Image object containing the image data.

  • label: A datasets.ClassLabel object (integer index). Each subset has an independent label mapping (ID to class name), accessible via the features attribute.

  • uid: A string corresponding to the original filename (without extension) from the ImageNet dataset.

Data Splits and Sizes:

Subset         Train      Validation  Test    Total
head           7,170,771  -           -       7,170,771
tail           5,982,709  -           -       5,982,709
tail-balanced  1,550,010  100,000     50,000  1,700,010
tail3k         1,582,473  100,000     50,000  1,732,473

Important Notes:

  • Label Independence: Labels across different subsets are not interchangeable; each subset's labels start from 0. If merging subsets, consult their respective features for label mappings to convert IDs to unified class names or remap them.

  • Pre-shuffled Data: All subsets are pre-shuffled and not sorted by class. This allows researchers to create smaller, representative subsets for quick experiments by reading the first N chunks.
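To make the label-independence note concrete, here is a minimal sketch of remapping two subsets' integer labels into one unified space via class names. The class-name lists below are hypothetical placeholders; in practice they come from each subset's `features["label"].names` (a `datasets.ClassLabel`).

```python
# Hypothetical int -> class-name mappings; in practice these come from
# each subset's features["label"].names (datasets.ClassLabel).
tail_names = ["n02084071", "n02121808", "n01503061"]
tail3k_names = ["n01503061", "n02084071"]

# Build one unified label space keyed by class name.
unified_names = sorted(set(tail_names) | set(tail3k_names))
unified_index = {name: i for i, name in enumerate(unified_names)}

def remap(label: int, subset_names: list) -> int:
    """Convert a subset-local integer label to the unified label space."""
    return unified_index[subset_names[label]]

# Label 0 names a different class in each subset until remapped by name.
assert remap(0, tail_names) != remap(0, tail3k_names)
```

The same idea works in reverse: to compare predictions across subsets, always round-trip through class names, never through raw integer IDs.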


Dataset Creation

Curation Rationale

The primary motivation for ImagenetTail was to address data contamination in evaluating pre-trained models. Traditional evaluations often involve pre-training on ImageNet-21K and fine-tuning on subsets like ImageNet-1K, leading to data overlap. ImagenetTail provides a cleaner, non-overlapping benchmark by strictly partitioning classes into head and tail, enabling a more accurate measurement of knowledge transfer.

Source Data

Data Collection and Processing

All data is derived from the gmongaras/Imagenet21K dataset on the Hugging Face Hub, chosen for its completeness and original resolution.

The processing pipeline involved:

  1. Loading gmongaras/Imagenet21K and counting samples per class.

  2. head subset: Selecting the top 5,000 classes by sample count and including all their images.

  3. tail subset: Including all classes not present in the head subset and their images.

  4. tail-balanced subset: Sampling 196 images per class from approximately the 8,500 most frequent tail classes.

  5. tail3k subset: Excluding tail classes with fewer than 100 samples, then randomly sampling 3,000 classes from the remainder.

All subsets were globally shuffled before finalization.
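The steps above can be sketched in pure Python on toy per-class counts. The thresholds here are shrunk to fit the toy data (2 head classes instead of 5,000; 100 samples instead of 196; 2 sampled classes instead of 3,000); this is an illustration of the partitioning logic, not the actual pipeline, which operates on the full gmongaras/Imagenet21K index.

```python
import random

# Toy per-class sample counts standing in for the real ImageNet-21K index.
counts = {"c0": 900, "c1": 800, "c2": 400, "c3": 300, "c4": 150, "c5": 40}

HEAD_K = 2          # stands in for the top 5,000 head classes
BALANCED_N = 100    # stands in for 196 samples per class
MIN_TAIL3K = 100    # minimum samples for tail3k eligibility
TAIL3K_K = 2        # stands in for the 3,000 sampled classes

# Steps 1-3: head = top-K classes by sample count; tail = everything else.
by_count = sorted(counts, key=counts.get, reverse=True)
head = set(by_count[:HEAD_K])
tail = set(counts) - head

# Step 4: tail-balanced = the most frequent tail classes, each downsampled
# to a fixed number of images per class.
tail_balanced = {c: BALANCED_N for c in tail if counts[c] >= BALANCED_N}

# Step 5: tail3k = drop tail classes below the minimum sample count,
# then randomly sample K classes from the remainder.
eligible = [c for c in tail if counts[c] >= MIN_TAIL3K]
tail3k = set(random.sample(eligible, TAIL3K_K))

# The head/tail class partition is disjoint by construction.
assert head.isdisjoint(tail)
```

The disjointness assertion at the end is the property the card calls "orthogonal": no class used for pre-training ever appears in a fine-tuning or evaluation subset.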

Who are the source data producers?

The original ImageNet dataset was created by research teams from Princeton University and Stanford University. The curation and release of ImageNet-21K involved contributions from the Alibaba-MIIL team.

Annotations

This dataset utilizes the original annotations from ImageNet-21K without any modifications or new annotations.

Personal and Sensitive Information

The ImageNet dataset, from which ImagenetTail is derived, contains images scraped from the internet. It may inadvertently include personal or sensitive information. The original dataset creators did not perform anonymization. Users are responsible for understanding and mitigating any risks associated with the use of this data.


Bias, Risks, and Limitations

  • Inherited Bias: As a subset of ImageNet, this dataset inherits known biases, including uneven distributions across geography, culture, race, and gender. For example, studies have shown that ImageNet contains a disproportionate number of images from Western cultures and certain demographic groups are underrepresented. Models trained on this dataset may learn and amplify these biases.

  • Incomplete Source Data: The gmongaras/Imagenet21K dataset, the source for ImagenetTail, states it contains approximately 19,000 classes, not the full 21,000. While this is not believed to impact the research goals of ImagenetTail, users should be aware of this discrepancy.

  • Non-Commercial Use: The strict ImageNet license limits this dataset to non-commercial research and educational purposes only.

Recommendations

Users should be fully aware of the inherent biases within the ImageNet dataset and consider their potential impact when analyzing results and deploying models. It is highly recommended to consult relevant literature on ImageNet biases before using this dataset.



Licensing Information

By using the ImageNet database (the "Database") from Princeton University and Stanford University, the Researcher agrees to the following terms:

  1. Non-Commercial Use: The Database is to be used solely for non-commercial research and educational purposes.

  2. No Warranties: Princeton University and Stanford University provide no representations or warranties regarding the Database, including non-infringement or fitness for a particular purpose.

  3. Researcher Responsibility: The Researcher assumes full responsibility for their use of the Database and agrees to defend and indemnify the ImageNet team, Princeton University, and Stanford University (including their employees, Trustees, officers, and agents) against any claims arising from the Researcher's use of the Database, including the Researcher's use of any copies of copyrighted images.

  4. Access for Associates: Research associates and colleagues may access the Database only after agreeing to these terms and conditions.

  5. Termination of Access: Princeton University and Stanford University reserve the right to terminate Researcher's access at any time.

  6. Commercial Entity Employment: If the Researcher is employed by a for-profit, commercial entity, the employer is also bound by these terms, and the Researcher represents full authorization to enter this agreement on behalf of such employer.

  7. Governing Law: All disputes under this agreement are governed by the law of the State of New Jersey.
