Preprint Article, Version 1. This version is not peer-reviewed.

Survey of Security and Data Attacks on Machine Unlearning In Financial and E-Commerce

Version 1: Received: 1 October 2024 / Approved: 2 October 2024 / Online: 3 October 2024 (08:22:55 CEST)

How to cite: Brodzinski, C. Survey of Security and Data Attacks on Machine Unlearning In Financial and E-Commerce. Preprints 2024, 2024100198. https://doi.org/10.20944/preprints202410.0198.v1

Abstract

This paper surveys the landscape of security and data attacks on machine unlearning, with a focus on financial and e-commerce applications. We discuss key privacy threats such as Membership Inference Attacks and Data Reconstruction Attacks, in which adversaries attempt to infer or reconstruct data that should have been removed. We also explore security attacks, including Machine Unlearning Data Poisoning, Unlearning Request Attacks, and Machine Unlearning Jailbreak Attacks, which target the underlying mechanisms of unlearning to manipulate or corrupt the model. To mitigate these risks, we examine defense strategies including differential privacy, robust cryptographic guarantees, and Zero-Knowledge Proofs (ZKPs), which offer verifiable and tamper-proof unlearning mechanisms. These approaches are essential for safeguarding data integrity and privacy in high-stakes financial and e-commerce contexts, where compromised models can lead to fraud, data leaks, and reputational damage. The survey highlights the need for continued research and innovation in secure machine unlearning, and the importance of developing strong defenses against evolving attack vectors.
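As a concrete illustration of the privacy threats summarized above, the following is a minimal sketch of a loss-threshold Membership Inference Attack against an unlearned model. It is not taken from the survey: the synthetic data, the exact-retraining baseline, and the threshold rule are illustrative assumptions, chosen only to show how an adversary might test whether a "forgotten" record still influences the model.

```python
# Illustrative sketch (assumptions, not the survey's method): a loss-threshold
# membership inference attack against an "unlearned" model on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

rng = np.random.default_rng(0)

# Synthetic "transaction" records: a forget set, a retained set, and outside data.
X = rng.normal(size=(1200, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
X_forget, y_forget = X[:200], y[:200]          # records the user asked to delete
X_retain, y_retain = X[200:1000], y[200:1000]  # records kept after unlearning
X_out, y_out = X[1000:], y[1000:]              # records never used for training

# "Exact" unlearning baseline: retrain only on the retained data.
unlearned_model = LogisticRegression(max_iter=1000).fit(X_retain, y_retain)

def per_sample_loss(model, X, y):
    """Cross-entropy loss of each sample under the model's predicted probabilities."""
    probs = model.predict_proba(X)
    return np.array([log_loss([yi], [pi], labels=[0, 1]) for yi, pi in zip(y, probs)])

# Attacker heuristic: samples with unusually low loss are guessed to be "members".
loss_forget = per_sample_loss(unlearned_model, X_forget, y_forget)
loss_out = per_sample_loss(unlearned_model, X_out, y_out)
threshold = np.median(np.concatenate([loss_forget, loss_out]))

# If unlearning worked, forgotten records should look like outside records,
# and the attacker's "member" guess rate on the forget set should be near chance.
guess_forget = (loss_forget < threshold).mean()
guess_out = (loss_out < threshold).mean()
print(f"'member' guesses on forget set: {guess_forget:.2f}, on outside data: {guess_out:.2f}")
```

In this sketch, effective unlearning would leave the attacker's guess rate on the forget set close to its rate on data the model never saw; a persistent gap would indicate residual memorization of the supposedly deleted records.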

Keywords

Machine Unlearning; Convex Function; Graph Neural Network; Differential Privacy; Robust Cryptographic Mechanisms

Subject

Computer Science and Mathematics, Computer Science
