In this paper, we propose an unsupervised blind restoration model for images affected by hybrid degradations. The model encodes an image's content information and degradation information separately and then uses an attention module to disentangle the two. This strengthens the disentangled representation learning of a generative adversarial network (GAN) for restoring images under hybrid degradation, enhances the detailed features of the restored image, and suppresses artifacts by combining adversarial, cycle-consistency, and perceptual losses. Experimental results on the DIV2K dataset and on medical images show that the proposed method outperforms existing unsupervised image restoration algorithms in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and subjective visual quality.
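As a rough illustration of the combined objective named above, the sketch below shows one way the three losses could be assembled in PyTorch. This is a minimal sketch under stated assumptions, not the paper's exact formulation: the least-squares adversarial form, the weights `lambda_cyc` and `lambda_perc`, and the tensor names are all hypothetical.

```python
import torch
import torch.nn.functional as F

def total_loss(disc_fake, real, cycled, feat_real, feat_restored,
               lambda_cyc=10.0, lambda_perc=1.0):
    """Combine the three losses named in the abstract.

    disc_fake       : discriminator scores for restored images
    real, cycled    : input image and its cycle-reconstruction
    feat_real,
    feat_restored   : feature maps of real/restored images from a fixed
                      perceptual network (e.g., VGG); hypothetical names
    lambda_cyc,
    lambda_perc     : assumed weights, not taken from the paper
    """
    # Adversarial loss (least-squares form, one common GAN choice)
    adv = F.mse_loss(disc_fake, torch.ones_like(disc_fake))
    # Cycle-consistency loss: mapping the restored image back to the
    # degraded domain should reproduce the input
    cyc = F.l1_loss(cycled, real)
    # Perceptual loss: match deep features of real and restored images
    perc = F.l1_loss(feat_restored, feat_real)
    return adv + lambda_cyc * cyc + lambda_perc * perc
```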
Subject: Computer Science and Mathematics - Applied Mathematics