Efficient Machine Unlearning via Influence Approximation
Abstract
The paper introduces the Influence Approximation Unlearning (IAU) algorithm, which leverages incremental learning principles to efficiently address the computational challenges of influence-based unlearning in machine learning models.
Due to growing privacy concerns, machine unlearning, which aims to enable machine learning models to ``forget'' specific training data, has received increasing attention. Among existing methods, influence-based unlearning has emerged as a prominent approach due to its ability to estimate the impact of individual training samples on model parameters without retraining. However, this approach suffers from prohibitive computational overhead arising from the necessity to compute the Hessian matrix and its inverse across all training samples and parameters, rendering it impractical for large-scale models and scenarios involving frequent data deletion requests. This overhead underscores the inherent difficulty of forgetting. Inspired by cognitive science, which suggests that memorizing is easier than forgetting, this paper establishes a theoretical link between memorizing (incremental learning) and forgetting (unlearning). This connection allows machine unlearning to be addressed from the perspective of incremental learning. Unlike the time-consuming Hessian computations in unlearning (forgetting), incremental learning (memorizing) typically relies on more efficient gradient optimization, which supports the aforementioned cognitive theory. Based on this connection, we introduce the Influence Approximation Unlearning (IAU) algorithm for efficient machine unlearning from the incremental perspective. Extensive empirical evaluations demonstrate that IAU achieves a superior balance among removal guarantee, unlearning efficiency, and comparable model utility, outperforming state-of-the-art methods across diverse datasets and model architectures. Our code is available at https://github.com/Lolo1222/IAU.
Community
Machine Unlearning by Incremental Learning
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Machine Unlearning for Streaming Forgetting (2025)
- Certified Unlearning for Neural Networks (2025)
- Orthogonal Soft Pruning for Efficient Class Unlearning (2025)
- Train Once, Forget Precisely: Anchored Optimization for Efficient Post-Hoc Unlearning (2025)
- LoReUn: Data Itself Implicitly Provides Cues to Improve Machine Unlearning (2025)
- UNO: Unlearning via Orthogonalization in Generative models (2025)
- OPC: One-Point-Contraction Unlearning Toward Deep Feature Forgetting (2025)