Awesome Machine Unlearning
A collection of academic articles, published methodologies, and datasets on the subject of machine unlearning.
A sortable version is available here: https://awesome-machine-unlearning.github.io/
Please read and cite our paper:
Thanh Tam Nguyen, Thanh Trung Huynh, Zhao Ren, Phi Le Nguyen, Alan Wee-Chung Liew, Hongzhi Yin, and Quoc Viet Hung Nguyen. 2025. A Survey of Machine Unlearning. ACM Trans. Intell. Syst. Technol. Just Accepted (July 2025). https://doi.org/10.1145/3749987
📌 We are actively tracking the latest research and welcome contributions to our repository and survey paper. If your studies are relevant, please feel free to create an issue or a pull request.
📰 2025-07-22: Our work has been published in ACM Transactions on Intelligent Systems and Technology. Thanks a lot for your continued support.
📰 2025-02-01: Our work has been cited in the International AI Safety Report 2025, which highlights machine unlearning as a pioneering paradigm for removing sensitive information or harmful data from trained AI models.
@article{10.1145/3749987,
  author  = {Nguyen, Thanh Tam and Huynh, Thanh Trung and Ren, Zhao and Nguyen, Phi Le and Liew, Alan Wee-Chung and Yin, Hongzhi and Nguyen, Quoc Viet Hung},
  title   = {A Survey of Machine Unlearning},
  journal = {ACM Trans. Intell. Syst. Technol.},
  year    = {2025},
  volume  = {16},
  number  = {5},
  pages   = {1--46},
  doi     = {10.1145/3749987}
}
@article{nguyen2022survey,
  author  = {Nguyen, Thanh Tam and Huynh, Thanh Trung and Ren, Zhao and Nguyen, Phi Le and Liew, Alan Wee-Chung and Yin, Hongzhi and Nguyen, Quoc Viet Hung},
  title   = {A Survey of Machine Unlearning},
  journal = {arXiv preprint arXiv:2209.02299},
  year    = {2022}
}
A Framework of Machine Unlearning
Frameworks provide standardized environments, benchmarks, and reproducible research pipelines for studying and evaluating machine unlearning. These works typically focus on methodology, reproducibility, and infrastructure, rather than proposing new unlearning algorithms.
Model-Agnostic Approaches
Model-agnostic machine unlearning methodologies include unlearning processes or frameworks that are applicable to different models. In some cases, they provide theoretical guarantees only for a certain class of models (e.g., linear models). We still consider them model-agnostic, however, because their core ideas are applicable to complex models (e.g., deep neural networks) with practical results.
| Paper Title | Year | Author | Venue | Model | Code | Type |
|---|---|---|---|---|---|---|
| Machine Unlearning of Personally Identifiable Information in Large Language Models | 2025 | Parii et al. | NLLP | PerMUtok | [Code] | Knowledge Adaptation |
| The forget-set identification problem | 2025 | D'Angelo et al. | Machine Learning | - | - | Dataset Selection |
| ESC: Erasing Space Concept for Knowledge Deletion | 2025 | Lee et al. | CVPR | ESC | [Code] | Knowledge Deletion |
| Decoupled Distillation to Erase: A General Unlearning Method for Any Class-centric Tasks | 2025 | Zhou et al. | CVPR | DELETE | [Code] | Remain-data Free |
| The Right to be Forgotten in Pruning: Unveil Machine Unlearning on Sparse Models | 2025 | Xiao et al. | NeurIPS-RegML | Un-pruning | - | Weight Pruning |
| Communication Efficient and Provable Federated Unlearning | 2024 | Tao et al. | VLDB | FATS | [Code] | Federated Unlearning |
| Sequential Informed Federated Unlearning: Efficient and Provable Client Unlearning in Federated Optimization | 2024 | Fraboni et al. | AISTATS | SIFU | [Code] | Differential Privacy, Federated Unlearning |
| Fast-FedUL: A Training-Free Federated Unlearning with Provable Skew Resilience | 2024 | Huynh et al. | ECML-PKDD | Fast-FedUL | [Code] | Federated Unlearning |
| FRAMU: Attention-based Machine Unlearning using Federated Reinforcement Learning | 2024 | Shaik et al. | TKDE | FRAMU | - | Federated Learning, Reinforcement Learning |
| Layer Attack Unlearning: Fast and Accurate Machine Unlearning via Layer Level Attack and Knowledge Distillation | 2024 | Kim et al. | AAAI | LAU | - | Knowledge Adaptation |
| Federated Unlearning: a Perspective of Stability and Fairness | 2024 | Shao et al. | arXiv | Stability, Fairness, Verification | - | Federated Unlearning |
| On the Trade-Off between Actionable Explanations and the Right to be Forgotten | 2024 | Pawelczyk et al. | arXiv | - | - | - |
| Post-Training Attribute Unlearning in Recommender Systems | 2024 | Chen et al. | arXiv | - | - | PoT-AU |
| CovarNav: Machine Unlearning via Model Inversion and Covariance Navigation | 2024 | Abbasi et al. | arXiv | CovarNav | - | - |
| Partially Blinded Unlearning: Class Unlearning for Deep Networks a Bayesian Perspective | 2024 | Panda et al. | arXiv | PBU | - | - |
| Unlearning Backdoor Threats: Enhancing Backdoor Defense in Multimodal Contrastive Learning via Local Token Unlearning | 2024 | Liang et al. | arXiv | UBT | - | - |
| ∇τ: Gradient-based and Task-Agnostic Machine Unlearning | 2024 | Trippa et al. | arXiv | - | - | - |
| Towards Independence Criterion in Machine Unlearning of Features and Labels | 2024 | Han et al. | arXiv | - | - | - |
| Challenging Forgets: Unveiling the Worst-Case Forget Sets in Machine Unlearning | 2024 | Fan et al. | arXiv | - | [Code] | - |
| Corrective Machine Unlearning | 2024 | Goel et al. | ICLR DMLR | - | [Code] | - |
| Fair Machine Unlearning: Data Removal while Mitigating Disparities | 2024 | Oesterling et al. | AISTATS | fair machine unlearning | [Code] | - |
| Label-Agnostic Forgetting: A Supervision-Free Unlearning in Deep Models | 2024 | Shen et al. | arXiv | Label-Agnostic Forgetting | [Code] | - |
| CaMU: Disentangling Causal Effects in Deep Model Unlearning | 2024 | Shen et al. | arXiv | CaMU | [Code] | - |
| SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation | 2024 | Fan et al. | ICLR | SalUn | [Code] | Weight Saliency |
| Fast Machine Unlearning Without Retraining Through Selective Synaptic Dampening | 2024 | Foster et al. | AAAI | SSD | [Code] | Retraining-free |
| Learning to Unlearn: Instance-wise Unlearning for Pre-trained Classifiers | 2024 | Cha et al. | AAAI | instance-wise unlearning | [Code] | - |
| Parameter-tuning-free data entry error unlearning with adaptive selective synaptic dampening | 2024 | Schoepf et al. | arXiv | ASSD | [Code] | - |
| Zero-Shot Machine Unlearning at Scale via Lipschitz Regularization | 2024 | Foster et al. | arXiv | JIT | [Code] | Zero-shot |
| Is Retain Set All You Need in Machine Unlearning? Restoring Performance of Unlearned Models with Out-Of-Distribution Images | 2024 | Bonato et al. | arXiv | SCAR | [Code] | Knowledge Adaptation |
| FedCIO: Efficient Exact Federated Unlearning with Clustering, Isolation, and One-shot Aggregation | 2023 | Qiu et al. | BigData | FedCIO | - | Federated Unlearning, One-Shot |
| Towards bridging the gaps between the right to explanation and the right to be forgotten | 2023 | Krishna et al. | ICML | - | - | - |
| Fast Model DeBias with Machine Unlearning | 2023 | Chen et al. | NeurIPS | DeBias | [Code] | - |
| DUCK: Distance-based Unlearning via Centroid Kinematics | 2023 | Cotogni et al. | arXiv | DUCK | [Code] | - |
| Open Knowledge Base Canonicalization with Multi-task Unlearning | 2023 | Liu et al. | arXiv | MulCanon | - | - |
| Unlearning via Sparse Representations | 2023 | Shah et al. | arXiv | DKVB | - | Zero-shot Unlearning |
| SecureCut: Federated Gradient Boosting Decision Trees with Efficient Machine Unlearning | 2023 | Zhang et al. | arXiv | SecureCut | - | Vertical Federated Learning |
| Hidden Poison: Machine Unlearning Enables Camouflaged Poisoning Attacks | 2023 | Di et al. | NeurIPS | - | - | Camouflaged Data Poisoning Attacks |
| Model Sparsity Can Simplify Machine Unlearning | 2023 | Jia et al. | NeurIPS | l1-sparse | [Code] | Weight Pruning |
| Tight Bounds for Machine Unlearning via Differential Privacy | 2023 | Huang et al. | arXiv | - | - | - |
| Machine Unlearning Methodology Based on Stochastic Teacher Network | 2023 | Zhang et al. | ADMA | Model Reconstruction | - | Knowledge Adaptation |
| From Adaptive Query Release to Machine Unlearning | 2023 | Ullah et al. | arXiv | - | - | Exact Unlearning |
| Towards Adversarial Evaluations for Inexact Machine Unlearning | 2023 | Goel et al. | arXiv | EU-k, CF-k | [Code] | - |
| KGA: A General Machine Unlearning Framework Based on Knowledge Gap Alignment | 2023 | Wang et al. | ACL | KGA | [Code] | Knowledge Adaptation |
| Towards Unbounded Machine Unlearning | 2023 | Kurmanji et al. | arXiv | SCRUB | [Code] | Approximate Unlearning |
| Netflix and Forget: Efficient and Exact Machine Unlearning from Bi-linear Recommendations | 2023 | Xu et al. | arXiv | Unlearn-ALS | - | Exact Unlearning |
| To Be Forgotten or To Be Fair: Unveiling Fairness Implications of Machine Unlearning Methods | 2023 | Zhang et al. | arXiv | - | [Code] | - |
| Certified Data Removal in Sum-Product Networks | 2022 | Becker and Liebig | ICKG | UnlearnSPN | [Code] | Certified Removal Mechanisms |
| Learning with Recoverable Forgetting | 2022 | Ye et al. | ECCV | LIRF | - | - |
| Continual Learning and Private Unlearning | 2022 | Liu et al. | CoLLAs | CLPU | [Code] | - |
| Verifiable and Provably Secure Machine Unlearning | 2022 | Eisenhofer et al. | arXiv | - | [Code] | Certified Removal Mechanisms |
| VeriFi: Towards Verifiable Federated Unlearning | 2022 | Gao et al. | arXiv | VeriFi | - | Certified Removal Mechanisms |
| FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information | 2022 | Cao et al. | S&P | FedRecover | - | Recovery Method |
| Fast Yet Effective Machine Unlearning | 2022 | Tarun et al. | arXiv | UNSIR | - | - |
| Membership Inference via Backdooring | 2022 | Hu et al. | IJCAI | MIB | [Code] | Membership Inference |
| Forget Unlearning: Towards True Data-Deletion in Machine Learning | 2022 | Chourasia et al. | ICLR | - | - | Noisy Gradient Descent |
| Zero-Shot Machine Unlearning | 2022 | Chundawat et al. | arXiv | - | - | - |
| Efficient Attribute Unlearning: Towards Selective Removal of Input Attributes from Feature Representations | 2022 | Guo et al. | arXiv | attribute unlearning | - | - |
| Few-Shot Unlearning | 2022 | Yoon et al. | ICLR | - | - | - |
| Federated Unlearning: How to Efficiently Erase a Client in FL? | 2022 | Halimi et al. | UpML Workshop | - | - | Federated Learning |
| Machine Unlearning Method Based On Projection Residual | 2022 | Cao et al. | DSAA | - | - | Projection Residual Method |
| Hard to Forget: Poisoning Attacks on Certified Machine Unlearning | 2022 | Marchant et al. | AAAI | - | [Code] | Certified Removal Mechanisms |
| Athena: Probabilistic Verification of Machine Unlearning | 2022 | Sommer et al. | PoPETs | ATHENA | - | - |
| FP2-MIA: A Membership Inference Attack Free of Posterior Probability in Machine Unlearning | 2022 | Lu et al. | ProvSec | FP2-MIA | - | Inference Attack |
| Deletion Inference, Reconstruction, and Compliance in Machine (Un)Learning | 2022 | Gao et al. | PETS | - | - | - |
| Prompt Certified Machine Unlearning with Randomized Gradient Smoothing and Quantization | 2022 | Zhang et al. | NeurIPS | PCMU | - | Certified Removal Mechanisms |
| The Right to be Forgotten in Federated Learning: An Efficient Realization with Rapid Retraining | 2022 | Liu et al. | INFOCOM | - | [Code] | - |
| Backdoor Defense with Machine Unlearning | 2022 | Liu et al. | INFOCOM | BAERASER | - | Backdoor Defense |
| Markov Chain Monte Carlo-Based Machine Unlearning: Unlearning What Needs to be Forgotten | 2022 | Nguyen et al. | ASIA CCS | MCU | - | MCMC Unlearning |
| Can Bad Teaching Induce Forgetting? Unlearning in Deep Networks using an Incompetent Teacher | 2022 | Chundawat et al. | arXiv | - | - | Knowledge Adaptation |
| Efficient Two-Stage Model Retraining for Machine Unlearning | 2022 | Kim and Woo | CVPR Workshop | - | - | - |
| Learn to Forget: Machine Unlearning Via Neuron Masking | 2021 | Ma et al. | IEEE | Forsaken | - | Mask Gradients |
| Adaptive Machine Unlearning | 2021 | Gupta et al. | NeurIPS | - | [Code] | Differential Privacy |
| Descent-to-Delete: Gradient-Based Methods for Machine Unlearning | 2021 | Neel et al. | ALT | - | - | Certified Removal Mechanisms |
| Remember What You Want to Forget: Algorithms for Machine Unlearning | 2021 | Sekhari et al. | NeurIPS | - | - | - |
| FedEraser: Enabling Efficient Client-Level Data Removal from Federated Learning Models | 2021 | Liu et al. | IWQoS | FedEraser | [Code] | Federated Unlearning |
| Machine Unlearning via Algorithmic Stability | 2021 | Ullah et al. | COLT | TV | - | Certified Removal Mechanisms |
| EMA: Auditing Data Removal from Trained Models | 2021 | Huang et al. | MICCAI | EMA | [Code] | Certified Removal Mechanisms |
| Knowledge-Adaptation Priors | 2021 | Khan and Swaroop | NeurIPS | K-prior | [Code] | Knowledge Adaptation |
| PrIU: A Provenance-Based Approach for Incrementally Updating Regression Models | 2020 | Wu et al. | NeurIPS | PrIU | - | Knowledge Adaptation |
| Eternal Sunshine of the Spotless Net: Selective Forgetting in Deep Networks | 2020 | Golatkar et al. | CVPR | - | - | Certified Removal Mechanisms |
| Learn to Forget: User-Level Memorization Elimination in Federated Learning | 2020 | Liu et al. | arXiv | Forsaken | - | - |
| Certified Data Removal from Machine Learning Models | 2020 | Guo et al. | ICML | - | - | Certified Removal Mechanisms |
| Class Clown: Data Redaction in Machine Unlearning at Enterprise Scale | 2020 | Felps et al. | arXiv | - | - | Decremental Learning |
| A Novel Online Incremental and Decremental Learning Algorithm Based on Variable Support Vector Machine | 2019 | Chen et al. | Cluster Computing | - | - | Decremental Learning |
| Making AI Forget You: Data Deletion in Machine Learning | 2019 | Ginart et al. | NeurIPS | - | - | Decremental Learning |
| Lifelong Anomaly Detection Through Unlearning | 2019 | Du et al. | CCS | - | - | - |
| Learning Not to Learn: Training Deep Neural Networks With Biased Data | 2019 | Kim et al. | CVPR | - | - | - |
| Efficient Repair of Polluted Machine Learning Systems via Causal Unlearning | 2018 | Cao et al. | ASIACCS | KARMA | [Code] | - |
| Understanding Black-box Predictions via Influence Functions | 2017 | Koh et al. | ICML | - | [Code] | Certified Removal Mechanisms |
| Towards Making Systems Forget with Machine Unlearning | 2015 | Cao and Yang | S&P | - | - | Statistical Query Learning |
| Incremental and decremental training for linear classification | 2014 | Tsai et al. | KDD | - | [Code] | Decremental Learning |
| Multiple Incremental Decremental Learning of Support Vector Machines | 2009 | Karasuyama et al. | NIPS | - | - | Decremental Learning |
| Incremental and Decremental Learning for Linear Support Vector Machines | 2007 | Romero et al. | ICANN | - | - | Decremental Learning |
| Decremental Learning Algorithms for Nonlinear Lagrangian and Least Squares Support Vector Machines | 2007 | Duan et al. | OSB | - | - | Decremental Learning |
| Multicategory Incremental Proximal Support Vector Classifiers | 2003 | Tveit et al. | KES | - | - | Decremental Learning |
| Incremental and Decremental Proximal Support Vector Classification using Decay Coefficients | 2003 | Tveit et al. | DaWaK | - | - | Decremental Learning |
| Incremental and Decremental Support Vector Machine Learning | 2000 | Cauwenberghs and Poggio | NIPS | - | - | Decremental Learning |
Model-Intrinsic Approaches
The model-intrinsic approaches include unlearning methods designed for a specific type of model. Although they are model-intrinsic, their applicability is not necessarily narrow, as many ML models share the same type.
Data-Driven Approaches
The approaches falling into this category use data partitioning, data augmentation, and data influence to speed up the retraining process. Methods of attack by data manipulation (e.g., data poisoning) are also included for reference.
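Data partitioning for faster retraining is the core idea of SISA-style exact unlearning (Bourtoule et al., "Machine Unlearning"): train one sub-model per disjoint shard, aggregate their predictions, and on a deletion request retrain only the shard that contained the deleted point. Below is a minimal sketch under simplifying assumptions; the per-shard model is a toy nearest-class-mean classifier, and the names `ShardedEnsemble`, `train_shard`, and `unlearn` are illustrative rather than taken from any listed paper's code:

```python
# SISA-style exact unlearning sketch: shard the data, train one sub-model
# per shard, and retrain only the affected shard on a deletion request.
from collections import Counter

def train_shard(points):
    """Toy per-shard model: nearest class-mean over 1-D features."""
    sums, counts = {}, {}
    for x, y in points:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    means = {y: sums[y] / counts[y] for y in sums}
    def predict(x):
        return min(means, key=lambda y: abs(means[y] - x))
    return predict

class ShardedEnsemble:
    def __init__(self, data, num_shards=3):
        # Deterministic shard assignment so a point's shard can be found later.
        self.shards = [[] for _ in range(num_shards)]
        for i, item in enumerate(data):
            self.shards[i % num_shards].append(item)
        self.models = [train_shard(s) for s in self.shards]

    def unlearn(self, item):
        # Exact unlearning: drop the item and retrain ONLY its shard,
        # leaving the other sub-models untouched.
        for idx, shard in enumerate(self.shards):
            if item in shard:
                shard.remove(item)
                self.models[idx] = train_shard(shard)
                return idx  # index of the retrained shard
        raise KeyError("item not found")

    def predict(self, x):
        # Aggregate sub-model predictions by majority vote.
        votes = Counter(m(x) for m in self.models)
        return votes.most_common(1)[0][0]
```

The trade-off this illustrates: retraining cost drops roughly by a factor of the shard count, at the price of each sub-model seeing less data, which is why SISA additionally slices shards and checkpoints training.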
| Dataset | #Items | Disk Size | Downstream Application | Description |
|---|---|---|---|---|
| Adult | 48K+ | 10MB | Classification | Census records for predicting whether annual income exceeds $50K |
| Breast Cancer | 569 | <1MB | Classification | Diagnostic features computed from breast-mass cell nuclei |
| Diabetes | 442 | <1MB | Regression | Baseline patient measurements for predicting disease progression |
| Dataset | #Items | Disk Size | Downstream Application | Description |
|---|---|---|---|---|
| OGB | 100M+ | 59MB | Classification | Open Graph Benchmark: large-scale graph datasets for node, link, and graph prediction |
| Cora | 2K+ | 4.5MB | Classification | Citation network of scientific publications with bag-of-words features |
| MovieLens | 1B+ | 3GB+ | Recommender Systems | User-movie rating datasets collected by GroupLens |
| Metric | Formula/Description | Usage |
|---|---|---|
| Accuracy | Accuracy of the unlearned model on the forget set and the retain set | Evaluating the predictive performance of the unlearned model |
| Completeness | The overlap (e.g., Jaccard distance) of the output space between the retrained and the unlearned model | Evaluating the indistinguishability between model outputs |
| Unlearn Time | The time taken to process an unlearning request | Evaluating the unlearning efficiency |
| Relearn Time | The number of epochs required for the unlearned model to reach the accuracy of the source model | Evaluating the unlearning efficiency (relearning with some data samples) |
| Layer-wise Distance | The weight difference between the original model and the retrained model | Evaluating the indistinguishability between model parameters |
| Activation Distance | The average L2-distance between the unlearned and retrained models' predicted probabilities on the forget set | Evaluating the indistinguishability between model outputs |
| JS-Divergence | The Jensen-Shannon divergence between the predictions of the unlearned and retrained models | Evaluating the indistinguishability between model outputs |
| Membership Inference Attack | Recall (#detected items / #forget items) | Verifying the influence of the forget data on the unlearned model |
| ZRF Score | $\mathcal{ZRF} = 1 - \frac{1}{n_f}\sum\limits_{i=0}^{n_f} \mathcal{JS}(M(x_i), T_d(x_i))$ | The unlearned model should not intentionally give wrong outputs ($\mathcal{ZRF} = 0$) or random outputs ($\mathcal{ZRF} = 1$) on the forget items |
| Anamnesis Index (AIN) | $AIN = \frac{r_t (M_u, M_{orig}, \alpha)}{r_t (M_s, M_{orig}, \alpha)}$ | Zero-shot machine unlearning |
| Epistemic Uncertainty | If $i(w;D) > 0$, then $\mathrm{efficacy}(w;D) = \frac{1}{i(w;D)}$; otherwise $\mathrm{efficacy}(w;D) = \infty$ | Measuring how much information the model exposes |
| Model Inversion Attack | Visualization | Qualitative verifications and evaluations |
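Several of the output-space metrics above (Activation Distance, JS-Divergence, ZRF) reduce to simple computations over two models' predicted probability vectors on the forget set. A minimal sketch in plain Python; the function names are illustrative, and `random_teacher` stands in for the randomly initialised teacher $T_d$ in the ZRF formula:

```python
# Hedged sketch of three output-space unlearning metrics, computed on
# lists of per-sample probability vectors over the forget set.
import math

def l2(p, q):
    """Euclidean distance between two probability vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def kl(p, q, eps=1e-12):
    """KL divergence with a small epsilon to avoid log(0)."""
    return sum(a * math.log((a + eps) / (b + eps)) for a, b in zip(p, q))

def js(p, q):
    """Jensen-Shannon divergence: symmetrised, bounded by ln 2."""
    m = [(a + b) / 2 for a, b in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def activation_distance(unlearned, retrained):
    """Mean L2 distance between predicted probabilities on the forget set."""
    return sum(l2(p, q) for p, q in zip(unlearned, retrained)) / len(unlearned)

def zrf(unlearned, random_teacher):
    """ZRF = 1 - mean JS(M(x_i), T_d(x_i)); a score near 1 means the
    unlearned model behaves like a randomly initialised teacher on the
    forget items, i.e. it neither memorises nor inverts them."""
    n = len(unlearned)
    return 1 - sum(js(p, t) for p, t in zip(unlearned, random_teacher)) / n
```

Note that JS divergence is bounded by $\ln 2$, so with natural logarithms the ZRF score stays within a predictable range even for maximally disagreeing predictions.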
- Unlearn PII - A benchmark designed to evaluate the effectiveness of PII unlearning methods, addressing limitations of prior evaluations such as overlooking implicit knowledge unlearning and assessing all tokens equally
- Open Unlearning - An easily extensible framework unifying LLM unlearning evaluation benchmarks
- Vision Unlearning - A framework for unlearning algorithms, datasets, metrics, and evaluation methodologies commonly used in machine unlearning for vision tasks, such as image classification and image generation
- Unlearning Comparator - A visual analytics system for the comparative evaluation of machine unlearning methods [Paper] [Demo]
Disclaimer
Feel free to contact us if you have any queries or exciting news on machine unlearning. We also welcome all researchers to contribute to this repository and, more broadly, to the knowledge of the machine unlearning field.
If you have other related references, please create a GitHub issue with the paper information. We will gladly update the repository according to your suggestions. (You can also create pull requests, but it might take some time for us to merge them.)
Backup Statistics