Preprint Article · Version 1 · This version is not peer-reviewed

SSIM-based Autoencoder Modeling to Defeat Adversarial Patch Attacks

Version 1 : Received: 30 July 2024 / Approved: 1 August 2024 / Online: 2 August 2024 (04:01:27 CEST)

How to cite: Lee, S.; Hong, S.; Kim, G.; Ha, J. SSIM-based Autoencoder Modeling to Defeat Adversarial Patch Attacks. Preprints 2024, 2024080064. https://doi.org/10.20944/preprints202408.0064.v1

Abstract

Object detection systems are widely used in fields such as autonomous vehicles and facial recognition. In particular, deep-learning-based object detection enables real-time processing on low-performance edge devices while maintaining high detection rates. However, edge devices that operate far from administrators are vulnerable to various physical attacks by malicious adversaries. In this paper, we implement traffic-sign detection using three versions of You Only Look Once (YOLO) and Faster R-CNN, which can be adopted by the edge devices of autonomous vehicles. Then, taking the role of a malicious attacker, we execute adversarial patch attacks with Adv-Patch and Dpatch. Experiments on the misdetection of traffic stop signs confirm that both attacks succeed with high probability. To defeat these attacks, we propose an image reconstruction method based on an autoencoder and the Structural Similarity Index Measure (SSIM). We confirm that the proposed method provides a sufficient defense, attaining an AP above 91.46% even when both adversarial attacks are launched.
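As background, the formula below is the standard SSIM definition and not a detail taken from this preprint: for two image patches $x$ and $y$ with local means $\mu_x, \mu_y$, variances $\sigma_x^2, \sigma_y^2$, covariance $\sigma_{xy}$, and stabilizing constants $C_1, C_2$,

$$\mathrm{SSIM}(x, y) = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}.$$

An SSIM-based reconstruction objective (e.g., minimizing $1 - \mathrm{SSIM}$ between the autoencoder output and the clean image) would reward the autoencoder for preserving image structure rather than only per-pixel intensity; the exact loss used in the paper is described in its full text.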

Keywords

Object detection; YOLO; Adversarial patch attack; Structural Similarity Index Measure; Autoencoder

Subject

Engineering, Safety, Risk, Reliability and Quality
