Preprint Article Version 1 This version is not peer-reviewed

Threat Models to Machine Unlearning

Version 1 : Received: 25 September 2024 / Approved: 26 September 2024 / Online: 26 September 2024 (09:55:31 CEST)

How to cite: Milner, L. Threat Models to Machine Unlearning. Preprints 2024, 2024092068. https://doi.org/10.20944/preprints202409.2068.v1

Abstract

Machine unlearning is a process designed to allow machine learning models to forget or remove specific data upon request, particularly for privacy protection. While the primary objective of unlearning is to safeguard sensitive information, the process itself introduces substantial privacy and security risks. This paper explores the two major categories of threats to machine unlearning: privacy attacks and security attacks. Privacy attacks, such as membership inference and model inversion, exploit residual information to infer or reconstruct sensitive data that should have been erased. Security attacks, particularly data poisoning, involve the injection of malicious data into the training process, which may leave lingering effects even after the unlearning process. In this paper, we provide an in-depth examination of these threats and propose several defense mechanisms. Differential privacy, adversarial training, and query limitations are highlighted as key defenses against privacy attacks, while data validation, adversarial examples, and post-unlearning audits are critical in mitigating security risks. Additionally, we discuss emerging methodologies such as data provenance tracking and fine-tuning as crucial for ensuring unlearning processes are thorough and effective. Through these analyses, we aim to provide a comprehensive framework for strengthening the security and privacy of machine unlearning systems, enabling their broader adoption across industries such as healthcare, finance, and AI-driven technologies.
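The abstract names differential privacy as a key defense against privacy attacks such as membership inference. As a rough intuition for how it blunts such attacks, the sketch below shows the classic Laplace mechanism applied to a counting query: because adding or removing any single record changes the true count by at most 1 (sensitivity 1), Laplace noise with scale 1/ε makes the released answer statistically similar whether or not a given individual's record is present. The function names (`laplace_sample`, `dp_count`) and the toy data are illustrative, not taken from the paper.

```python
import math
import random

def laplace_sample(scale, rng):
    """Draw one sample from a zero-mean Laplace distribution via inverse-CDF sampling."""
    u = rng.random() - 0.5  # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon, rng):
    """Release a counting query with epsilon-differential privacy.

    A counting query has sensitivity 1: one record added or removed changes
    the true answer by at most 1, so Laplace noise with scale = 1/epsilon
    is enough to hide any individual's membership in the dataset.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_sample(1.0 / epsilon, rng)

# Toy dataset of ages; query: "how many records have age >= 40?"
rng = random.Random(0)
ages = [23, 45, 31, 52, 40, 29, 61, 38]
noisy_answer = dp_count(ages, lambda a: a >= 40, epsilon=1.0, rng=rng)
```

The noisy answer is unbiased (its expectation equals the true count of 4), but any single release is perturbed enough that an attacker cannot reliably tell whether one particular record was in, or was unlearned from, the dataset.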

Keywords

threat model; machine unlearning; data-driven model; privacy model

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning


