Preprint Article, Version 1 (this version is not peer-reviewed)

Machine Unlearning Application for CNN, DNN and GNN

Version 1 : Received: 25 September 2024 / Approved: 25 September 2024 / Online: 25 September 2024 (13:38:34 CEST)

How to cite: Milner, L. Machine Unlearning Application for CNN, DNN and GNN. Preprints 2024, 2024092026. https://doi.org/10.20944/preprints202409.2026.v1

Abstract

Machine unlearning, the process of removing the influence of specific data points from trained machine learning models, has become increasingly important in light of modern data privacy regulations such as the GDPR and its "right to be forgotten." This paper explores the challenges and solutions associated with implementing machine unlearning in three widely used neural network architectures: Convolutional Neural Networks (CNNs), Deep Neural Networks (DNNs), and Graph Neural Networks (GNNs). Each of these networks presents unique challenges due to its distinct architecture and learning process. Techniques such as selective retraining, influence functions, and knowledge distillation have been proposed to address these challenges. The paper also introduces the concept of full parameter unlearning, which adjusts all trainable parameters using two key techniques: gradient ascent, based on first-order information, and Fisher information, based on second-order information. These methods ensure comprehensive unlearning but introduce substantial computational cost. We discuss examples, potential solutions, and future research directions to make full parameter unlearning more scalable and efficient, thus providing a framework for balancing data privacy with model performance.
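To make the gradient-ascent technique mentioned above concrete, the following is a minimal sketch of first-order unlearning on a toy logistic-regression model (pure Python, no frameworks). The model, dataset, and step sizes are illustrative assumptions for exposition, not taken from the paper: after ordinary gradient-descent training, we run a few steps of gradient *ascent* on the loss of the point to be forgotten, which pushes the parameters away from the configuration that fit that point.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(w, b, x, y):
    """Binary cross-entropy loss of a 1-D logistic model on one point."""
    p = sigmoid(w * x + b)
    p = min(max(p, 1e-9), 1 - 1e-9)  # clip for numerical safety
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def grad(w, b, x, y):
    """Gradient of the per-point loss w.r.t. (w, b)."""
    p = sigmoid(w * x + b)
    return (p - y) * x, (p - y)

# Toy 1-D dataset: negative inputs labeled 0, positive labeled 1.
data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]

# Standard training: gradient DESCENT on the mean loss.
w, b = 0.0, 0.0
for _ in range(200):
    gw = sum(grad(w, b, x, y)[0] for x, y in data) / len(data)
    gb = sum(grad(w, b, x, y)[1] for x, y in data) / len(data)
    w -= 0.5 * gw
    b -= 0.5 * gb

# Unlearning: gradient ASCENT on the loss of the forgotten point only,
# raising the model's loss on it (i.e., erasing its learned influence).
forget = data[0]
loss_before = loss(w, b, *forget)
for _ in range(10):
    gw, gb = grad(w, b, *forget)
    w += 0.1 * gw
    b += 0.1 * gb
loss_after = loss(w, b, *forget)
```

In this sketch the loss on the forgotten point rises after the ascent steps while the rest of the model is only indirectly perturbed; in full parameter unlearning the same first-order update is applied across all trainable parameters of the network, which is where the computational cost discussed in the paper arises.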

Keywords

convolutional neural networks; deep neural networks; graph neural networks; gradient ascent; machine unlearning

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
