Preprint Article, Version 1. Preserved in Portico. This version is not peer-reviewed.

Exact Unlearning with Convex and Non-Convex Functions

Version 1 : Received: 25 September 2024 / Approved: 25 September 2024 / Online: 26 September 2024 (16:59:14 CEST)

How to cite: Lindstrom, C. Exact Unlearning with Convex and Non-Convex Functions. Preprints 2024, 2024092061. https://doi.org/10.20944/preprints202409.2061.v1

Abstract

Machine unlearning, the process of selectively forgetting or removing the influence of specific data points from a machine learning model, is increasingly important for privacy and compliance with regulations like the GDPR. This paper explores the concept of exact unlearning, focusing on its implementation in models trained using convex and non-convex functions. Convex functions, due to their well-behaved optimization landscapes, lend themselves to efficient unlearning through methods such as inverse optimization, duality-based approaches, and incremental learning. In contrast, non-convex functions, common in deep learning models, present more complex challenges due to their multiple local minima and high-dimensional parameter spaces. Techniques like checkpoint-based retraining, gradient inversion, and meta-learning are discussed as viable, though computationally expensive, methods for non-convex exact unlearning. The paper also highlights real-world applications in fields such as finance and healthcare, where exact unlearning can enhance privacy and security without compromising model performance. Finally, it outlines key challenges and future research directions, particularly the need for more efficient unlearning algorithms in non-convex settings and the development of secure, adversarial-resistant methods for sensitive data removal.
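As a rough illustration of the checkpoint-based retraining mentioned above for non-convex models, the following is a minimal SISA-style sketch (all names and the toy "model" are illustrative, not taken from the paper): training data is split into shards, the model state is checkpointed after each shard, and forgetting a point means rolling back to the checkpoint before its shard and retraining only the remaining data, which exactly reproduces the model that would have been trained without that point.

```python
from copy import deepcopy

class MeanModel:
    """Toy stand-in for a model: an incrementally updated running mean."""
    def __init__(self):
        self.total = 0.0
        self.count = 0

    def fit_increment(self, batch):
        for x in batch:
            self.total += x
            self.count += 1

    def predict(self):
        return self.total / self.count if self.count else 0.0

def train_with_checkpoints(shards):
    """Train shard by shard, saving a checkpoint of the state after each shard."""
    model = MeanModel()
    checkpoints = [deepcopy(model)]          # state before any shard
    for shard in shards:
        model.fit_increment(shard)
        checkpoints.append(deepcopy(model))  # state after this shard
    return model, checkpoints

def exact_unlearn(shards, checkpoints, shard_idx, point):
    """Forget `point` by rolling back to the checkpoint taken before its
    shard, then retraining on that shard (minus the point) and all later
    shards. The result is identical to retraining from scratch without it."""
    model = deepcopy(checkpoints[shard_idx])
    reduced = [x for x in shards[shard_idx] if x != point]
    model.fit_increment(reduced)
    for later in shards[shard_idx + 1:]:
        model.fit_increment(later)
    return model
```

The cost trade-off the abstract alludes to is visible here: retraining touches only the shards at and after the deletion point, at the price of storing one checkpoint per shard, and the guarantee holds regardless of whether the underlying loss is convex.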

Keywords

exact unlearning; machine unlearning; convex function; non-convex function

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
