Version 1: Received: 22 October 2024 / Approved: 23 October 2024 / Online: 24 October 2024 (08:15:14 CEST)
How to cite:
Chistyakova, A.; Antsiferova, A.; Khrebtov, M.; Lavrushkin, S.; Arkhipenko, K.; Vatolin, D.; Turdakov, D. Increasing the Robustness of Image Quality Assessment Models Through Adversarial Training. Preprints 2024, 2024101803. https://doi.org/10.20944/preprints202410.1803.v1
APA Style
Chistyakova, A., Antsiferova, A., Khrebtov, M., Lavrushkin, S., Arkhipenko, K., Vatolin, D., & Turdakov, D. (2024). Increasing the Robustness of Image Quality Assessment Models Through Adversarial Training. Preprints. https://doi.org/10.20944/preprints202410.1803.v1
Chicago/Turabian Style
Chistyakova, A., Dmitriy Vatolin, and Denis Turdakov. 2024. "Increasing the Robustness of Image Quality Assessment Models Through Adversarial Training." Preprints. https://doi.org/10.20944/preprints202410.1803.v1
Abstract
The robustness of image quality assessment (IQA) models to adversarial attacks is emerging as a critical issue. Adversarial training is widely used to improve the robustness of neural networks to adversarial attacks, but little in-depth research has examined it as a way to improve the robustness of IQA models. This study introduces an enhanced adversarial training approach tailored to IQA models: it adjusts the perceptual quality scores of adversarial images during training to strengthen the correlation between an IQA model's predicted quality scores and subjective quality scores. We also propose a new method for comparing IQA model robustness by measuring an Integral Robustness Score, which evaluates a model's resistance to a set of adversarial perturbations of different magnitudes. We applied our adversarial training approach to increase the robustness of five IQA models. Additionally, we tested the robustness of the adversarially trained IQA models against 16 adversarial attacks and conducted an empirical probabilistic estimation of this property. The code is available at https://github.com/wianluna/metrics_at.
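The core idea sketched in the abstract — generating adversarial images during training while lowering their target quality scores — can be illustrated with a minimal toy example. This is a sketch under stated assumptions, not the paper's method: the IQA "model" here is a hypothetical linear score predictor, the attack is standard FGSM, and the fixed `quality_drop` label adjustment is an assumed placeholder for the paper's actual score-adjustment rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an IQA model: a linear score predictor s(x) = w @ x + b.
w = rng.normal(size=64) * 0.1
b = 0.0

def predict(x):
    return float(w @ x + b)

def fgsm(x, y, eps):
    # Gradient of 0.5 * (s(x) - y)^2 w.r.t. x is (s(x) - y) * w for the
    # linear model; FGSM perturbs the image by eps times the gradient sign.
    g = (predict(x) - y) * w
    return np.clip(x + eps * np.sign(g), 0.0, 1.0)

def train_step(x, y, eps=0.03, lr=1e-2, quality_drop=0.1):
    """One adversarial-training step on a single (image, score) pair."""
    global w, b
    x_adv = fgsm(x, y, eps)
    # Assumed label adjustment: treat the perturbation as lowering perceptual
    # quality, so the model is trained to predict a reduced score on x_adv.
    y_adv = y - quality_drop
    for xi, yi in ((x, y), (x_adv, y_adv)):
        err = predict(xi) - yi
        w -= lr * err * xi   # gradient descent on squared error
        b -= lr * err

# One toy "image" (flattened to a vector) with a subjective score in [0, 1].
x = rng.uniform(size=64)
y = 0.8
for _ in range(200):
    train_step(x, y)
residual = abs(predict(x) - y)
```

After training, the model fits the clean score while also being pushed toward the lowered score on the perturbed input, which is the tension adversarial training for IQA must balance.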
Subject: Computer Science and Mathematics, Artificial Intelligence and Machine Learning
Copyright:
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.