Scrub-and-Learn: Category-Aware Weight Modification for Machine Unlearning
Abstract
1. Introduction
- We propose Scrub-and-Learn, a novel machine unlearning method that enables effective forgetting of specific data categories, supporting privacy protection and regulatory compliance. Unlike existing approaches, it does not require Hessian inverse computation, access to the original training data, or the creation of auxiliary datasets.
- We introduce a submaximum one-hot label encoding strategy for unlearning, assigning a probability of 1 to the second-highest predicted class and 0 to others to signal the removal of category-specific knowledge.
- We analyze weight-sharing patterns across data categories, revealing how neural network weights contribute to cross-category representations. We also examine catastrophic forgetting, as studied in continual learning, in which forgetting the target class disrupts the retained classes.
- Extensive experiments on benchmark datasets, including MNIST, FashionMNIST, SVHN, CIFAR-10, and CIFAR-100, demonstrate that Scrub-and-Learn effectively forgets targeted classes while preserving the performance of retained ones. The method generalizes well across datasets of varying sizes and model architectures.
2. Related Work
3. Method
3.1. Weight Dynamics and Category Sharing
3.2. Category-Aware Weight Modification
Algorithm 1 Submaximum one-hot encoding for forgotten-class sample labels.

Require: samples X, ground-truth labels Y, forgotten class index c, model f
Ensure: submaximum one-hot labels Y′
1: Compute the probability distribution of the samples: P = f(X)
2: Identify the forgotten-class sample indices: I = {i | y_i = c}
3: for each i ∈ I do
4:  Remove column c: discard the entry for class c from P_i
5:  Compute the predicted alternative class: c′ = argmax_{j ≠ c} P_{i,j}
6:  Update the label: y′_i = onehot(c′)
7: end for
8: return Y′
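Below is a minimal PyTorch-style sketch of Algorithm 1. The function and variable names (`submaximum_one_hot_labels`, `forgotten_class`) are illustrative rather than taken from the paper's code; the sketch assumes the model returns class logits of shape (N, C) and that labels are integer class indices.

```python
import torch
import torch.nn.functional as F

def submaximum_one_hot_labels(model, x, y, forgotten_class):
    """Sketch of Algorithm 1: relabel forgotten-class samples with the
    model's second-most-likely (sub-maximum) class.

    Assumptions (illustrative, not the authors' released code):
    `model(x)` returns logits of shape (N, C); `y` holds integer labels (N,).
    """
    with torch.no_grad():
        probs = F.softmax(model(x), dim=1)                 # step 1: P = f(X)
    y_new = y.clone()
    forget_idx = (y == forgotten_class).nonzero(as_tuple=True)[0]  # step 2
    for i in forget_idx:                                   # steps 3-7
        p = probs[i].clone()
        p[forgotten_class] = float("-inf")                 # step 4: remove column c
        y_new[i] = torch.argmax(p)                         # step 5: alternative class
    # step 8: return submaximum one-hot encoding of the updated labels
    return F.one_hot(y_new, num_classes=probs.size(1)).float()
```

Relabeling with the runner-up class (rather than a random or uniform target) keeps the unlearning signal aligned with the model's own decision boundary, which is the behavior the contribution bullet above describes.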
3.3. Challenges in the Unlearning Process
3.4. Sample Selection for Unlearning
4. Experiment
4.1. Experimental Setting
4.2. Evaluation of Unlearning Networks
4.3. Comparison with Other Unlearning Methods
4.4. Analysis of the Impact of Learning Rate on the Unlearning Methods
4.5. t-SNE Visualization of the Unlearning Model's Classification Results
5. Discussion
5.1. Method Analysis
5.2. Limitations and Future Work
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Definition
---|---
FIM | Fisher Information Matrix
ETF | Equiangular Tight Frame
RAM | Random-Access Memory
GPU | Graphics Processing Unit
VRAM | Video Random-Access Memory
CUDA | Compute Unified Device Architecture
PBU | Partially Blinded Unlearning
GKT | Gated Knowledge Transfer
WF-Net | Weight Filtering-Net
NG-IR | Noise Generation-Impair Repair
ASCI | Augmented Skin Conditions Image
References
- Xu, J.; Wu, Z.; Wang, C.; Jia, X. Machine Unlearning: Solutions and Challenges. IEEE Trans. Emerg. Top. Comput. Intell. 2024, 8, 2150–2168. [Google Scholar] [CrossRef]
- Zhang, H.; Nakamura, T.; Isohara, T.; Sakurai, K. A review on machine unlearning. SN Comput. Sci. 2023, 4, 337. [Google Scholar] [CrossRef]
- Shaik, T.; Tao, X.; Xie, H.; Li, L.; Zhu, X.; Li, Q. Exploring the Landscape of Machine Unlearning: A Comprehensive Survey and Taxonomy. IEEE Trans. Neural Netw. Learn. Syst. 2024; early access. [Google Scholar]
- Cevallos, I.D.; Benalcázar, M.E.; Valdivieso Caraguay, Á.L.; Zea, J.A.; Barona-López, L.I. A Systematic Literature Review of Machine Unlearning Techniques in Neural Networks. Computers 2025, 14, 150. [Google Scholar] [CrossRef]
- Li, N.; Zhou, C.; Gao, Y.; Chen, H.; Zhang, Z.; Kuang, B.; Fu, A. Machine unlearning: Taxonomy, metrics, applications, challenges, and prospects. IEEE Trans. Neural Netw. Learn. Syst. 2025; early access. [Google Scholar]
- De Min, T.; Mancini, M.; Lathuilière, S.; Roy, S.; Ricci, E. Unlearning Personal Data from a Single Image. Transactions on Machine Learning Research. March 2025. Available online: https://openreview.net/pdf?id=VxC4PZ71Ym (accessed on 18 May 2025).
- Panda, S.; Sourav, S. Partially Blinded Unlearning: Class Unlearning for Deep Networks from Bayesian Perspective. In Proceedings of the AAAI Conference on Artificial Intelligence, Philadelphia, PA, USA, 25 February–4 March 2025; Volume 39, pp. 6372–6380. [Google Scholar]
- Chen, K.; Huang, Y.; Wang, Y.; Zhang, X.; Mi, B.; Wang, Y. Privacy preserving machine unlearning for smart cities. Ann. Telecommun. 2024, 79, 61–72. [Google Scholar] [CrossRef]
- Chen, K.; Wang, Z.; Mi, B. Private Data Protection with Machine Unlearning in Contrastive Learning Networks. Mathematics 2024, 12, 4001. [Google Scholar] [CrossRef]
- Wei, S.; Zhang, M.; Zha, H.; Wu, B. Shared adversarial unlearning: Backdoor mitigation by unlearning shared adversarial examples. Adv. Neural Inf. Process. Syst. 2023, 36, 25876–25909. [Google Scholar]
- Guo, Y.; Zhao, Y.; Hou, S.; Wang, C.; Jia, X. Verifying in the dark: Verifiable machine unlearning by using invisible backdoor triggers. IEEE Trans. Inf. Forensics Secur. 2023, 19, 708–721. [Google Scholar] [CrossRef]
- General Data Protection Regulation (GDPR). Intersoft Consulting. 2018, p. 24. Available online: https://gdpr-info.eu/ (accessed on 14 October 2024).
- Zhang, D.; Pan, S.; Hoang, T.; Xing, Z.; Staples, M.; Xu, X.; Yao, L.; Lu, Q.; Zhu, L. To be forgotten or to be fair: Unveiling fairness implications of machine unlearning methods. AI Ethics 2024, 4, 83–93. [Google Scholar] [CrossRef]
- Chen, R.; Yang, J.; Xiong, H.; Bai, J.; Hu, T.; Hao, J.; Feng, Y.; Zhou, J.T.; Wu, J.; Liu, Z. Fast Model Debias with Machine Unlearning. Adv. Neural Inf. Process. Syst. 2023, 36, 14516–14539. [Google Scholar]
- Wang, J.; Bie, H.; Jing, Z.; Zhi, Y.; Fan, Y. Weight Masking in Image Classification Networks: Class-Specific Machine Unlearning. Knowl. Inf. Syst. 2025, early access, 1–21. [Google Scholar] [CrossRef]
- Dolatabadi, H.M.; Erfani, S.M.; Leckie, C. Adversarial coreset selection for efficient robust training. Int. J. Comput. Vis. 2023, 131, 3307–3331. [Google Scholar] [CrossRef]
- Maalouf, A.; Eini, G.; Mussay, B.; Feldman, D.; Osadchy, M. A unified approach to coreset learning. IEEE Trans. Neural Netw. Learn. Syst. 2022, 35, 6893–6905. [Google Scholar] [CrossRef] [PubMed]
- Huang, Y.; Yuan, X.; Wang, H.; Du, Y. Coreset selection can accelerate quantum machine learning models with provable generalization. Phys. Rev. Appl. 2024, 22, 014074. [Google Scholar] [CrossRef]
- Zhao, P.; Zhang, K.; Zhang, H.; Chen, H. Alternating minimization differential privacy protection algorithm for the novel dual-mode learning tasks model. Expert Syst. Appl. 2024, 259, 125279. [Google Scholar] [CrossRef]
- Guo, C.; Goldstein, T.; Hannun, A.; Van Der Maaten, L. Certified Data Removal from Machine Learning Models. arXiv 2019, arXiv:1911.03030. [Google Scholar]
- Sekhari, A.; Acharya, J.; Kamath, G.; Suresh, A.T. Remember What You Want to Forget: Algorithms for Machine Unlearning. Adv. Neural Inf. Process. Syst. 2021, 34, 18075–18086. [Google Scholar]
- Suriyakumar, V.; Wilson, A.C. Algorithms that Approximate Data Removal: New Results and Limitations. Adv. Neural Inf. Process. Syst. 2022, 35, 18892–18903. [Google Scholar]
- Peste, A.; Alistarh, D.; Lampert, C.H. SSSE: Efficiently Erasing Samples from Trained Machine Learning Models. arXiv 2021, arXiv:2107.03860. [Google Scholar]
- Zhang, Y.; Lu, Z.; Zhang, F.; Wang, H.; Li, S. Machine unlearning by reversing the continual learning. Appl. Sci. 2023, 13, 9341. [Google Scholar] [CrossRef]
- Mahadevan, A.; Mathioudakis, M. Certifiable unlearning pipelines for logistic regression: An experimental study. Mach. Learn. Knowl. Extr. 2022, 4, 591–620. [Google Scholar] [CrossRef]
- Nguyen, Q.P.; Low, B.K.H.; Jaillet, P. Variational Bayesian Unlearning. Adv. Neural Inf. Process. Syst. 2020, 33, 16025–16036. [Google Scholar]
- Fan, C.; Liu, J.; Zhang, Y.; Wong, E.; Wei, D.; Liu, S. SalUn: Empowering Machine Unlearning via Gradient-Based Weight Saliency in Both Image Classification and Generation. arXiv 2023, arXiv:2310.12508. [Google Scholar]
- Kurmanji, M.; Triantafillou, P.; Hayes, J.; Triantafillou, E. Towards Unbounded Machine Unlearning. Adv. Neural Inf. Process. Syst. 2023, 36, 1957–1987. [Google Scholar]
- Trippa, D.; Campagnano, C.; Bucarelli, M.S.; Tolomei, G.; Silvestri, F. ∇τ: Gradient-Based and Task-Agnostic Machine Unlearning. CoRR 2024, arXiv:2403.14339. [Google Scholar]
- Cotogni, M.; Bonato, J.; Sabetta, L.; Pelosin, F.; Nicolosi, A. DUCK: Distance-Based Unlearning via Centroid Kinematics. arXiv 2023, arXiv:2312.02052. [Google Scholar]
- Poppi, S.; Sarto, S.; Cornia, M.; Baraldi, L.; Cucchiara, R. Multi-Class Unlearning for Image Classification via Weight Filtering. IEEE Intell. Syst. 2024, 39, 40–47. [Google Scholar] [CrossRef]
- Wang, W.; Zhang, C.; Tian, Z.; Yu, S. Machine Unlearning via Representation Forgetting with Parameter Self-Sharing. IEEE Trans. Inf. Forensics Secur. 2023, 19, 1099–1111. [Google Scholar] [CrossRef]
- Chundawat, V.S.; Tarun, A.K.; Mandal, M.; Kankanhalli, M. Zero-Shot Machine Unlearning. IEEE Trans. Inf. Forensics Secur. 2023, 18, 2345–2354. [Google Scholar] [CrossRef]
- Tarun, A.K.; Chundawat, V.S.; Mandal, M.; Kankanhalli, M. Fast yet Effective Machine Unlearning. IEEE Trans. Neural Netw. Learn. Syst. 2023, 35, 13046–13055. [Google Scholar] [CrossRef]
- Abbasi, A.; Thrash, C.; Akbari, E.; Zhang, D.; Kolouri, S. CovarNav: Machine Unlearning via Model Inversion and Covariance Navigation. arXiv 2023, arXiv:2311.12999. [Google Scholar]
- Yoon, Y.; Nam, J.; Yun, H.; Lee, J.; Kim, D.; Ok, J. Few-Shot Unlearning by Model Inversion. arXiv 2022, arXiv:2205.15567. [Google Scholar]
- Ma, Z.; Liu, Y.; Liu, X.; Liu, J.; Ma, J.; Ren, K. Learn to forget: Machine unlearning via neuron masking. IEEE Trans. Dependable Secur. Comput. 2022, 20, 3194–3207. [Google Scholar] [CrossRef]
- De Lange, M.; Aljundi, R.; Masana, M.; Parisot, S.; Jia, X.; Leonardis, A.; Slabaugh, G.; Tuytelaars, T. A Continual Learning Survey: Defying Forgetting in Classification Tasks. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 3366–3385. [Google Scholar]
- Wang, L.; Zhang, X.; Su, H.; Zhu, J. A comprehensive survey of continual learning: Theory, method and application. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 5362–5383. [Google Scholar] [CrossRef]
- Parisi, G.I.; Kemker, R.; Part, J.L.; Kanan, C.; Wermter, S. Continual Lifelong Learning with Neural Networks: A Review. Neural Netw. 2019, 113, 54–71. [Google Scholar] [CrossRef] [PubMed]
- Masana, M.; Liu, X.; Twardowski, B.; Menta, M.; Bagdanov, A.D.; Van De Weijer, J. Class-Incremental Learning: Survey and Performance Evaluation on Image Classification. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 5513–5533. [Google Scholar] [CrossRef] [PubMed]
- Kong, Y.; Liu, L.; Chen, H.; Kacprzyk, J.; Tao, D. Overcoming Catastrophic Forgetting in Continual Learning by Exploring Eigenvalues of Hessian Matrix. IEEE Trans. Neural Netw. Learn. Syst. 2023, 35, 16196–16210. [Google Scholar] [CrossRef]
- Peng, J.; Tang, B.; Jiang, H.; Li, Z.; Lei, Y.; Lin, T.; Li, H. Overcoming Long-Term Catastrophic Forgetting Through Adversarial Neural Pruning and Synaptic Consolidation. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 4243–4256. [Google Scholar] [CrossRef]
- Zhang, M.; Li, H.; Pan, S.; Chang, X.; Zhou, C.; Ge, Z.; Su, S. One-Shot Neural Architecture Search: Maximising Diversity to Overcome Catastrophic Forgetting. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 43, 2921–2935. [Google Scholar] [CrossRef]
- French, R.M. Catastrophic Forgetting in Connectionist Networks. Trends Cogn. Sci. 1999, 3, 128–135. [Google Scholar] [CrossRef]
- LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-Based Learning Applied to Document Recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
- Xiao, H.; Rasul, K.; Vollgraf, R. Fashion-MNIST: A Novel Image Dataset for Benchmarking Machine Learning Algorithms. arXiv 2017, arXiv:1708.07747. [Google Scholar]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Adv. Neural Inf. Process. Syst. 2012, 25, 2. [Google Scholar] [CrossRef]
- Netzer, Y.; Wang, T.; Coates, A.; Bissacco, A.; Wu, B.; Ng, A.Y. Reading Digits in Natural Images with Unsupervised Feature Learning. In Proceedings of the NIPS Workshop on Deep Learning and Unsupervised Feature Learning, Granada, Spain, 10 December 2011; Volume 2011, p. 4. [Google Scholar]
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Krizhevsky, A.; Hinton, G. Learning Multiple Layers of Features from Tiny Images. 2009. Available online: http://www.cs.utoronto.ca/~kriz/learning-features-2009-TR.pdf (accessed on 31 October 2024).
- Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar]
- Lee, S.H.; Lee, S.; Song, B.C. Vision transformer for small-size datasets. arXiv 2021, arXiv:2112.13492. [Google Scholar]
- Naqvi, S.A.R. Augmented Skin Conditions Image Dataset. Kaggle. 2023. Available online: https://www.kaggle.com/datasets/syedalinaqvi/augmented-skin-conditions-image-dataset (accessed on 31 October 2024).
- Bourtoule, L.; Chandrasekaran, V.; Choquette-Choo, C.A.; Jia, H.; Travers, A.; Zhang, B.; Lie, D.; Papernot, N. Machine unlearning. In Proceedings of the 2021 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 24–27 May 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 141–159. [Google Scholar]
Dataset | Model | Original Acc_R (%) | Original Acc_F (%) | Unlearned Acc_R (%) | Unlearned Acc_F (%) ↓ | ΔAcc_R (%) ↑
---|---|---|---|---|---|---
MNIST | MLP | 98.49 | 99.39 | 98.04 | 1.22 | −0.45
MNIST | LeNet | 99.09 | 99.59 | 98.57 | 0.00 | −0.52
FashionMNIST | MLP | 90.49 | 86.20 | 89.14 | 1.30 | −1.35
FashionMNIST | LeNet | 90.83 | 87.50 | 90.18 | 2.00 | −0.65
FashionMNIST | AlexNet | 92.50 | 87.50 | 89.84 | 2.20 | −2.66
SVHN | AlexNet | 93.97 | 94.38 | 92.73 | 4.58 | −1.24
SVHN | VGG11 | 95.68 | 96.62 | 95.32 | 0.23 | −0.36
SVHN | ResNet18 | 96.17 | 96.62 | 94.40 | 3.15 | −1.77
CIFAR10 | VGG16 | 89.52 | 92.80 | 89.50 | 0.00 | −0.02
CIFAR10 | ResNet34 | 89.10 | 91.00 | 87.33 | 0.70 | −1.77
CIFAR10 | InceptionV3 | 93.31 | 93.80 | 91.91 | 4.90 | −1.40
CIFAR10 | ViT-S | 95.84 | 97.10 | 93.19 | 3.10 | −2.65
CIFAR100 | VGG16 | 64.99 | 86.00 | 64.45 | 0.00 | −0.54
CIFAR100 | ResNet50 | 66.05 | 88.00 | 64.60 | 1.00 | −1.45
CIFAR100 | InceptionV3 | 75.15 | 88.00 | 72.17 | 0.00 | −2.98
CIFAR100 | ViT-S | 82.78 | 92.00 | 79.26 | 0.00 | −3.52
ASCI | ResNet50 | 98.00 | 92.41 | 94.25 | 2.53 | −3.75
ASCI | ViT-S | 97.00 | 94.94 | 94.25 | 3.80 | −2.75
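The tables report accuracy on the retained classes (Acc_R), accuracy on the forgotten class (Acc_F, lower is better after unlearning), and the change in retained accuracy ΔAcc_R = Acc_R(unlearned) − Acc_R(original). A minimal sketch of how these metrics could be computed is given below; the function name and signature are illustrative and assume a standard classification test loader with a single forgotten class index.

```python
import torch

@torch.no_grad()
def retained_and_forgotten_accuracy(model, loader, forgotten_class, device="cpu"):
    """Illustrative evaluation sketch (names are assumptions, not the paper's code).

    Acc_R: accuracy over samples whose true label is NOT the forgotten class.
    Acc_F: accuracy over samples whose true label IS the forgotten class.
    """
    model.eval()
    correct_r = total_r = correct_f = total_f = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        pred = model(x).argmax(dim=1)
        forget_mask = (y == forgotten_class)
        retain_mask = ~forget_mask
        correct_r += (pred[retain_mask] == y[retain_mask]).sum().item()
        total_r += retain_mask.sum().item()
        correct_f += (pred[forget_mask] == y[forget_mask]).sum().item()
        total_f += forget_mask.sum().item()
    acc_r = 100.0 * correct_r / max(total_r, 1)
    acc_f = 100.0 * correct_f / max(total_f, 1)
    return acc_r, acc_f

# ΔAcc_R column: delta_acc_r = acc_r_unlearned - acc_r_original
```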
Dataset | Model | Original Acc_R (%) | Original Acc_F (%) | Unlearned Acc_R (%) | Unlearned Acc_F (%) ↓ | ΔAcc_R (%) ↑
---|---|---|---|---|---|---
MNIST | MLP | 98.52 | 98.83 | 97.44 | 4.17 | −1.08
MNIST | LeNet | 99.04 | 99.54 | 97.20 | 3.56 | −1.84
FashionMNIST | MLP | 91.23 | 85.40 | 91.23 | 3.60 | 0.00
FashionMNIST | LeNet | 91.71 | 84.60 | 91.68 | 4.20 | −0.03
SVHN | VGG11 | 96.31 | 95.62 | 94.25 | 0.02 | −2.05
SVHN | ResNet18 | 96.07 | 96.86 | 95.50 | 1.89 | −0.57
CIFAR10 | VGG16 | 89.46 | 91.40 | 89.91 | 0.44 | 0.45
CIFAR10 | ResNet34 | 89.16 | 89.80 | 87.74 | 3.50 | −1.42
CIFAR100 | VGG16 | 64.60 | 67.60 | 50.66 | 0.10 | −13.94
CIFAR100 | ResNet50 | 65.84 | 68.00 | 57.59 | 4.65 | −8.25
Dataset | Method | Model | Original Acc_R (%) | Original Acc_F (%) | Unlearned Acc_R (%) | Unlearned Acc_F (%) ↓ | ΔAcc_R (%) ↑
---|---|---|---|---|---|---|---
MNIST | PBU | ResNet18 | 99.00 | 99.19 | 96.24 | 0.21 | −2.76
MNIST | PBU | AllCNN | 99.49 | 99.37 | 98.40 | 0.07 | −1.09
MNIST | PBU | ResNet34 | 99.45 | 99.43 | 96.02 | 0.01 | −3.43
MNIST | GKT (zero-shot) | AllCNN | 97.84 | 99.61 | 97.12 | 0.00 | −0.72
MNIST | GKT (zero-shot) | LeNet | 98.15 | 99.59 | 95.79 | 0.00 | −2.36
MNIST | GKT (zero-shot) | ResNet9 | 98.57 | 99.10 | 94.57 | 0.00 | −4.00
MNIST | WF-Net | VGG16 | 99.60 | 99.60 | 73.20 | 0.00 | −26.40
MNIST | WF-Net | ResNet18 | 99.60 | 99.60 | 94.00 | 9.68 | −5.60
MNIST | WF-Net | ViT-T | 98.90 | 98.90 | 93.50 | 0.00 | −5.40
MNIST | Ours | MLP | 98.49 | 99.39 | 98.04 | 1.22 | −0.45
MNIST | Ours | LeNet | 99.09 | 99.59 | 98.57 | 0.00 | −0.52
SVHN | GKT (zero-shot) | AllCNN | 94.52 | 95.16 | 92.43 | 0.00 | −2.09
SVHN | GKT (zero-shot) | LeNet | 85.69 | 81.42 | 78.27 | 0.00 | −7.42
SVHN | GKT (zero-shot) | ResNet9 | 82.76 | 87.11 | 39.44 | 0.00 | −43.32
SVHN | Ours | AlexNet | 93.97 | 94.38 | 92.73 | 4.59 | −1.24
SVHN | Ours | VGG11 | 95.68 | 96.62 | 95.32 | 0.23 | −0.36
SVHN | Ours | ResNet18 | 96.17 | 96.62 | 94.40 | 3.15 | −1.77
CIFAR10 | PBU | ResNet18 | 76.54 | 71.82 | 66.16 | 4.50 | −10.38
CIFAR10 | PBU | AllCNN | 84.92 | 79.64 | 76.15 | 1.04 | −8.77
CIFAR10 | PBU | ResNet34 | 76.78 | 68.52 | 68.61 | 0.59 | −8.10
CIFAR10 | GKT (zero-shot) | AllCNN | 94.05 | 87.49 | 81.97 | 0.00 | −12.08
CIFAR10 | GKT (zero-shot) | LeNet | 59.80 | 62.25 | 41.32 | 0.00 | −18.48
CIFAR10 | GKT (zero-shot) | ResNet9 | 84.83 | 88.50 | 56.83 | 0.00 | −28.00
CIFAR10 | NG-IR | ResNet18 | 77.86 | 81.01 | 71.60 | 0.00 | −6.26
CIFAR10 | NG-IR | AllCNN | 82.64 | 91.02 | 73.90 | 0.00 | −8.74
CIFAR10 | WF-Net | VGG16 | 93.00 | 93.00 | 80.20 | 18.30 | −12.80
CIFAR10 | WF-Net | ResNet18 | 93.90 | 94.00 | 79.70 | 9.25 | −14.20
CIFAR10 | WF-Net | ViT-T | 78.00 | 78.00 | 73.50 | 0.00 | −4.50
CIFAR10 | Ours | VGG16 | 89.52 | 92.80 | 89.50 | 0.00 | −0.02
CIFAR10 | Ours | ResNet34 | 89.10 | 91.00 | 87.33 | 0.70 | −1.77
CIFAR10 | Ours | InceptionV3 | 93.31 | 93.80 | 91.91 | 4.90 | −1.40
CIFAR10 | Ours | ViT-S | 95.84 | 97.10 | 93.19 | 3.10 | −2.65
CIFAR100 | PBU | ResNet18 | 76.06 | 80.11 | 69.55 | 1.50 | −6.51
CIFAR100 | PBU | ResNet50 | 75.95 | 78.44 | 69.29 | 0.33 | −6.66
CIFAR100 | PBU | ResNet34 | 75.21 | 84.00 | 65.34 | 0.17 | −9.87
CIFAR100 | NG-IR | ResNet18 | 78.68 | 83.00 | 75.36 | 0.00 | −3.32
CIFAR100 | NG-IR | MobileNetV2 | 77.43 | 90.00 | 75.76 | 0.00 | −1.67
CIFAR100 | WF-Net | VGG16 | 93.00 | 93.00 | 80.20 | 18.30 | −12.80
CIFAR100 | WF-Net | ResNet18 | 93.90 | 94.00 | 79.70 | 9.25 | −14.20
CIFAR100 | WF-Net | ViT-T | 78.00 | 78.00 | 73.50 | 0.00 | −4.50
CIFAR100 | Ours | VGG16 | 64.99 | 86.00 | 64.44 | 0.00 | −0.55
CIFAR100 | Ours | ResNet50 | 66.05 | 88.00 | 64.60 | 1.00 | −1.46
CIFAR100 | Ours | InceptionV3 | 75.15 | 88.00 | 72.17 | 0.00 | −2.98
CIFAR100 | Ours | ViT-S | 82.78 | 92.00 | 79.26 | 0.00 | −3.52