arXiv:2504.21478

CAE-DFKD: Bridging the Transferability Gap in Data-Free Knowledge Distillation

Published on Apr 30, 2025

Abstract

AI-generated summary: Category-Aware Embedding Data-Free Knowledge Distillation (CAE-DFKD) enhances the transferability and generalization of the representations learned through data-free knowledge distillation.

Data-Free Knowledge Distillation (DFKD) enables knowledge transfer from a given pre-trained teacher network to a target student model without access to the real training data. Existing DFKD methods focus primarily on improving image recognition performance on the associated datasets, often neglecting the crucial aspect of the transferability of the learned representations. In this paper, we propose Category-Aware Embedding Data-Free Knowledge Distillation (CAE-DFKD), which addresses, at the embedding level, the limitations of previous methods that rely on image-level techniques to improve model generalization but fail when applied directly to DFKD. The superiority and flexibility of CAE-DFKD are extensively evaluated, including: (i) significant efficiency advantages resulting from altering the generator training paradigm; (ii) competitive performance with existing state-of-the-art DFKD methods on image recognition tasks; (iii) remarkable transferability of the data-free learned representations, demonstrated on downstream tasks.
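As background, DFKD methods of this kind typically alternate between training a generator to synthesize pseudo-images and training the student to match the teacher's predictions on them. The PyTorch sketch below illustrates only that generic alternating loop; the model interfaces, losses, and hyperparameters are illustrative assumptions and do not implement the CAE-DFKD generator training paradigm or its category-aware embedding objective.

```python
# Minimal, generic data-free KD sketch (PyTorch). Illustrative only:
# it shows the common adversarial generator/student loop used in DFKD,
# NOT the CAE-DFKD objective proposed in the paper.
import torch
import torch.nn.functional as F

def dfkd_step(generator, teacher, student, g_opt, s_opt,
              batch_size=128, z_dim=256, temperature=4.0):
    """One alternating update: the generator seeks teacher/student
    disagreement, then the student distills the teacher on fresh samples."""
    teacher.eval()

    # --- Generator update: maximize teacher-student divergence ---
    z = torch.randn(batch_size, z_dim)
    fake = generator(z)                      # synthesized pseudo-images
    with torch.no_grad():
        t_logits = teacher(fake)
    s_logits = student(fake)
    # Negative KL: the generator is rewarded where the student disagrees.
    g_loss = -F.kl_div(F.log_softmax(s_logits / temperature, dim=1),
                       F.softmax(t_logits / temperature, dim=1),
                       reduction="batchmean")
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

    # --- Student update: match teacher predictions on new samples ---
    z = torch.randn(batch_size, z_dim)
    with torch.no_grad():
        fake = generator(z)
        t_logits = teacher(fake)
    s_logits = student(fake)
    s_loss = F.kl_div(F.log_softmax(s_logits / temperature, dim=1),
                      F.softmax(t_logits / temperature, dim=1),
                      reduction="batchmean") * temperature ** 2
    s_opt.zero_grad()
    s_loss.backward()
    s_opt.step()
    return g_loss.item(), s_loss.item()
```

In practice, `generator`, `teacher`, and `student` would be instantiated as, for example, a small convolutional generator and two image classifiers, with `dfkd_step` called for many iterations using separate optimizers for the generator and the student.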
