# ImagenetTail Dataset Card
The ImagenetTail dataset, curated by the SystemK AI Team, is a specialized subset of the `gmongaras/Imagenet21K` dataset. It is designed to serve as a standardized benchmark for evaluating the effectiveness and scaling laws of visual model pre-training, particularly in scenarios involving transfer learning to long-tailed data distributions.
## Dataset Details

### Dataset Description
ImagenetTail addresses the challenge of evaluating pre-trained models without information leakage by providing a unique partitioning of ImageNet-21K classes. It simulates a realistic workflow where models are pre-trained on a large public dataset and then fine-tuned on a smaller, domain-specific dataset with a different distribution.
The dataset is divided into four distinct subsets:

- `head`: Contains the 5,000 ImageNet-21K classes with the most samples, totaling approximately 7.17 million images. This subset is intended solely for pre-training.
- `tail`: Comprises all remaining ImageNet-21K classes (excluding `head` classes), with approximately 5.98 million images. This represents a broad, long-tailed data environment.
- `tail-balanced`: Derived from the `tail` subset (over 8,500 classes), with each class downsampled to 196 samples. This yields approximately 1.7 million images, simulating a scenario with limited labeled samples per class.
- `tail3k` (default): Created by randomly sampling 3,000 classes from the `tail` subset (after excluding classes with fewer than 100 samples). It preserves the long-tail characteristic while keeping the computational cost comparable to ImageNet-1K training, making it well suited to efficient experiments.
This orthogonal setup ensures that pre-training on `head` and fine-tuning/evaluation on `tail`, `tail-balanced`, or `tail3k` avoids data overlap, allowing for a more accurate assessment of a model's generalization capabilities.
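The disjointness guarantee above can be sketched with plain Python sets. The WordNet synset IDs below are hypothetical placeholders, not the actual class lists shipped with ImagenetTail:

```python
# Minimal sketch of the head/tail partition guarantee.
# These synset IDs are illustrative placeholders only.
head_classes = {"n01440764", "n01443537", "n01484850"}
tail_classes = {"n02084071", "n02121808", "n02691156"}

# Pre-training classes must never appear in the fine-tuning pool.
assert head_classes.isdisjoint(tail_classes)

# Any fine-tuning subset drawn from tail inherits the guarantee.
tail3k_sample = set(sorted(tail_classes)[:2])
assert tail3k_sample <= tail_classes
assert head_classes.isdisjoint(tail3k_sample)
print("no class overlap between head and tail subsets")
```

Because every evaluation subset is drawn strictly from `tail`, checking `head`/`tail` disjointness once is sufficient to rule out leakage into all of them.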
- **Curated by:** SystemK AI Team
- **License:** ImageNet License

### Dataset Sources

- **Repository:** Alibaba-MIIL/ImageNet21K
## Uses

### Direct Use
The primary use case for ImagenetTail is the study of transfer learning. The recommended workflow is:

1. **Pre-training:** Train a model on the `head` subset.
2. **Fine-tuning:** Fine-tune the pre-trained model on the `tail`, `tail-balanced`, or `tail3k` subset.
3. **Evaluation:** Evaluate the model's performance using the validation and test splits of `tail-balanced` and `tail3k`.
This ensures a fair evaluation of a model's performance on new domains and data distributions, as the pre-training and fine-tuning/evaluation classes are disjoint.
### Out-of-Scope Use

- **Commercial Use:** This dataset is strictly for non-commercial research and educational purposes.
- **Generative AI Training:** The source dataset `gmongaras/Imagenet21K` has had approximately 2,000 categories removed without explanation. This undocumented removal likely compromises data diversity, making the dataset unsuitable for training generative AI models.
## Dataset Structure

All ImagenetTail subsets share a consistent data structure with three columns:

- `image`: A `PIL.Image.Image` object containing the image data.
- `label`: A `datasets.ClassLabel` value (integer index). Each subset has an independent label mapping (ID to class name), accessible via the `features` attribute.
- `uid`: A string corresponding to the original filename (without extension) from the ImageNet dataset.
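A stdlib-only sketch of how the per-subset label mapping behaves. With the real dataset you would call `ds.features["label"].int2str(idx)` on a loaded split; the class names below are hypothetical placeholders:

```python
# Simulated per-subset label mapping (id -> class name).
# Class names are hypothetical; real names come from the features attribute.
tail3k_names = ["acorn_squash", "bilberry", "cairn_terrier"]

def int2str(idx: int) -> str:
    """Map an integer label to its class name, as ClassLabel.int2str does."""
    return tail3k_names[idx]

def str2int(name: str) -> int:
    """Inverse mapping, as ClassLabel.str2int does."""
    return tail3k_names.index(name)

assert int2str(1) == "bilberry"
assert str2int("cairn_terrier") == 2
```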
**Data Splits and Sizes:**

| Subset | Train | Validation | Test | Total |
|---|---|---|---|---|
| `head` | 7,170,771 | - | - | 7,170,771 |
| `tail` | 5,982,709 | - | - | 5,982,709 |
| `tail-balanced` | 1,550,010 | 100,000 | 50,000 | 1,700,010 |
| `tail3k` | 1,582,473 | 100,000 | 50,000 | 1,732,473 |
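The split sizes above are internally consistent; a quick arithmetic check:

```python
# Verify that each subset's split counts sum to its stated total.
splits = {
    "head":          (7_170_771, 0, 0, 7_170_771),
    "tail":          (5_982_709, 0, 0, 5_982_709),
    "tail-balanced": (1_550_010, 100_000, 50_000, 1_700_010),
    "tail3k":        (1_582_473, 100_000, 50_000, 1_732_473),
}
for name, (train, val, test, total) in splits.items():
    assert train + val + test == total, name
print("all split totals consistent")
```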
**Important Notes:**

- **Label Independence:** Labels across different subsets are not interchangeable; each subset's labels start from 0. If merging subsets, consult their respective `features` for label mappings to convert IDs to unified class names or remap them.
- **Pre-shuffled Data:** All subsets are pre-shuffled rather than sorted by class. This allows researchers to create smaller, representative subsets for quick experiments by reading only the first N chunks.
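The label-remapping note can be sketched in plain Python: build a unified name-to-ID mapping from the union of two subsets' class names, then translate each subset's local IDs through it. Class names here are hypothetical:

```python
# Remap two subsets' independent labels onto one shared mapping before
# merging. Class names are hypothetical placeholders.
balanced_names = ["fern", "moss", "lichen"]   # tail-balanced: id -> name
tail3k_names = ["moss", "fern", "horsetail"]  # tail3k: id -> name

unified = sorted(set(balanced_names) | set(tail3k_names))
to_unified = {name: i for i, name in enumerate(unified)}

def remap(label: int, names: list[str]) -> int:
    """Convert a subset-local label ID to the unified label ID."""
    return to_unified[names[label]]

# "moss" is label 1 in tail-balanced but label 0 in tail3k; both
# resolve to the same unified ID after remapping.
assert remap(1, balanced_names) == remap(0, tail3k_names)
```

Remapping through class names rather than raw IDs is what makes the merge safe, since each subset's integer labels independently start from 0.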
## Dataset Creation

### Curation Rationale

The primary motivation for ImagenetTail was to address data contamination in evaluating pre-trained models. Traditional evaluations often involve pre-training on ImageNet-21K and fine-tuning on subsets such as ImageNet-1K, leading to data overlap. ImagenetTail provides a cleaner, non-overlapping benchmark by strictly partitioning classes into `head` and `tail`, enabling a more accurate measurement of knowledge transfer.
### Source Data

#### Data Collection and Processing

All data is derived from the `gmongaras/Imagenet21K` dataset on the Hugging Face Hub, chosen for its completeness and original resolution.

The processing pipeline involved:

1. Loading `gmongaras/Imagenet21K` and counting samples per class.
2. `head` subset: selecting the top 5,000 classes by sample count and including all their images.
3. `tail` subset: including all classes not present in the `head` subset, along with their images.
4. `tail-balanced` subset: sampling 196 images per class from approximately the 8,500 most frequent `tail` classes.
5. `tail3k` subset: excluding `tail` classes with fewer than 100 samples, then randomly sampling 3,000 classes from the remainder.

All subsets were globally shuffled before finalization.
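The pipeline steps above can be simulated with toy data and scaled-down thresholds (2 head classes instead of 5,000, 3 samples per class instead of 196, and so on); all class names and counts below are synthetic:

```python
import random
from collections import Counter

random.seed(0)
# Step 1: synthetic sample stream, one class name per image.
samples = (["dog"] * 50 + ["cat"] * 40 + ["fern"] * 12
           + ["moss"] * 8 + ["lichen"] * 2)
counts = Counter(samples)

# Step 2: head = top-k classes by sample count.
HEAD_K = 2
head_classes = {c for c, _ in counts.most_common(HEAD_K)}

# Step 3: tail = everything not in head.
tail_classes = set(counts) - head_classes

# Step 4: tail-balanced = downsample each tail class to a fixed size.
PER_CLASS = 3
tail_balanced = {
    c: random.sample([s for s in samples if s == c],
                     k=min(counts[c], PER_CLASS))
    for c in tail_classes
}

# Step 5: tail3k = drop rare tail classes, sample k of the rest.
MIN_SAMPLES, N_CLASSES = 5, 2
eligible = [c for c in tail_classes if counts[c] >= MIN_SAMPLES]
tail3k_classes = set(random.sample(eligible, k=N_CLASSES))

assert head_classes == {"dog", "cat"}
assert head_classes.isdisjoint(tail3k_classes)
```

A final global shuffle of each emitted subset (step not shown) would complete the pipeline; the key invariant checked here is that `head` and every tail-derived subset stay class-disjoint.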
#### Who are the source data producers?
The original ImageNet dataset was created by research teams from Princeton University and Stanford University. The curation and release of ImageNet-21K involved contributions from the Alibaba-MIIL team.
### Annotations
This dataset utilizes the original annotations from ImageNet-21K without any modifications or new annotations.
### Personal and Sensitive Information
The ImageNet dataset, from which ImagenetTail is derived, contains images scraped from the internet. It may inadvertently include personal or sensitive information. The original dataset creators did not perform anonymization. Users are responsible for understanding and mitigating any risks associated with the use of this data.
## Bias, Risks, and Limitations
- **Inherited Bias:** As a subset of ImageNet, this dataset inherits known biases, including uneven distributions across geography, culture, race, and gender. For example, studies have shown that ImageNet contains a disproportionate number of images from Western cultures and that certain demographic groups are underrepresented. Models trained on this dataset may learn and amplify these biases.
- **Incomplete Source Data:** The `gmongaras/Imagenet21K` dataset, the source for ImagenetTail, states that it contains approximately 19,000 classes rather than the full 21,000. While this is not believed to affect the research goals of ImagenetTail, users should be aware of the discrepancy.
- **Non-Commercial Use:** The strict ImageNet license limits this dataset to non-commercial research and educational purposes only.
### Recommendations
Users should be fully aware of the inherent biases within the ImageNet dataset and consider their potential impact when analyzing results and deploying models. It is highly recommended to consult relevant literature on ImageNet biases before using this dataset.
## Citation

**BibTeX:**
## Licensing Information
By using the ImageNet database (the "Database") from Princeton University and Stanford University, the Researcher agrees to the following terms:
1. **Non-Commercial Use:** The Database is to be used solely for non-commercial research and educational purposes.
2. **No Warranties:** Princeton University and Stanford University provide no representations or warranties regarding the Database, including non-infringement or fitness for a particular purpose.
3. **Researcher Responsibility:** The Researcher assumes full responsibility for their use of the Database and agrees to defend and indemnify the ImageNet team, Princeton University, and Stanford University (including their employees, Trustees, officers, and agents) against any claims arising from the Researcher's use of the Database, including copyrighted image copies.
4. **Access for Associates:** Research associates and colleagues may access the Database only after agreeing to these terms and conditions.
5. **Termination of Access:** Princeton University and Stanford University reserve the right to terminate the Researcher's access at any time.
6. **Commercial Entity Employment:** If the Researcher is employed by a for-profit, commercial entity, the employer is also bound by these terms, and the Researcher represents full authorization to enter this agreement on behalf of such employer.
7. **Governing Law:** All disputes under this agreement are governed by the law of the State of New Jersey.