Preprint Article, Version 1 (this version is not peer-reviewed)

Increasing the Robustness of Image Quality Assessment Models Through Adversarial Training

Version 1 : Received: 22 October 2024 / Approved: 23 October 2024 / Online: 24 October 2024 (08:15:14 CEST)

How to cite: Chistyakova, A.; Antsiferova, A.; Khrebtov, M.; Lavrushkin, S.; Arkhipenko, K.; Vatolin, D.; Turdakov, D. Increasing the Robustness of Image Quality Assessment Models Through Adversarial Training. Preprints 2024, 2024101803. https://doi.org/10.20944/preprints202410.1803.v1

Abstract

The robustness of image quality assessment (IQA) models to adversarial attacks is emerging as a critical issue. Adversarial training has been widely used to improve the robustness of neural networks, but little in-depth research has examined it as a way to improve IQA model robustness. This study introduces an enhanced adversarial training approach tailored to IQA models: it adjusts the perceptual quality scores of adversarial images during training to strengthen the correlation between the model's predicted quality scores and subjective quality scores. We also propose a new method for comparing IQA model robustness by measuring the Integral Robustness Score, which evaluates a model's resistance to a set of adversarial perturbations of different magnitudes. We used our adversarial training approach to increase the robustness of five IQA models. Additionally, we tested the robustness of the adversarially trained IQA models against 16 adversarial attacks and conducted an empirical probabilistic estimation of this property. The code is available at https://github.com/wianluna/metrics_at.
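To illustrate the kind of measurement the abstract describes, the sketch below aggregates a model's score shift under an attack across a range of perturbation magnitudes and integrates the result. The function name, the trapezoidal-integration form, and the normalization are assumptions for illustration; the paper's exact definition of the Integral Robustness Score may differ.

```python
import numpy as np

def integral_robustness_score(model, attack, images, eps_values):
    """Illustrative sketch of an integral robustness measure.

    model:      callable mapping a batch of images to per-image quality scores
    attack:     callable (model, images, eps) -> adversarially perturbed images
    eps_values: increasing perturbation magnitudes to sweep over
    """
    clean_scores = model(images)
    mean_shifts = []
    for eps in eps_values:
        adv = attack(model, images, eps)
        # mean absolute shift of the predicted quality score at this magnitude
        mean_shifts.append(float(np.abs(model(adv) - clean_scores).mean()))
    shifts = np.asarray(mean_shifts)
    # trapezoidal integration over eps, normalized by the eps range,
    # i.e. the average score shift across perturbation magnitudes
    widths = np.diff(eps_values)
    area = float(np.sum((shifts[1:] + shifts[:-1]) / 2.0 * widths))
    return area / (eps_values[-1] - eps_values[0])
```

A lower value indicates a model whose predictions move less under perturbation, i.e. greater robustness; sweeping several magnitudes avoids ranking models on a single, arbitrarily chosen attack strength.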

Keywords

adversarial robustness; adversarial training; image quality assessment; adversarial defense

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
