Preprint
Review

Review of Matrix Rank Constraint Model for Impulse Interference Image Inpainting


A peer-reviewed article of this preprint also exists.

Submitted: 30 October 2023; Posted: 30 October 2023

Abstract
Camera malfunctions or the loss of storage elements in imaging devices may lead to the loss of important image information or to random impulse noise interference. Low rank is one of the most important priors in image optimization. This paper uses different low-rank constraint models of the image matrix to recover impulse-interfered satellite images. First, an overview of image inpainting models based on the nuclear norm, the truncated nuclear norm, the weighted nuclear norm, and the matrix-factorization-based F-norm is presented, and the corresponding iterative optimization algorithms are provided. Then, we conduct experiments under three types of impulse interference and provide visual and numerical results. Finally, we conclude that the WSVT_ADMM method, based on the weighted nuclear norm, obtains the best image inpainting results; the UV_ADMM method, based on the F-norm of a matrix factorization, requires the least computation time and can be used for large-scale low-rank matrix computation; and the WSVT_ADMM and TSVT_ADMM methods, based on the weighted and truncated nuclear norms respectively, significantly improve the inpainting quality compared with nuclear-norm-based methods such as SVT, SVP, and n_ADMM.
Subject: Computer Science and Mathematics – Computer Vision and Graphics

1. Introduction

In machine vision applications, images often suffer from impulse interference due to various factors, such as pulse noise caused by detector pixel failure in the camera or by the loss of storage elements during imaging [1]. Satellite images, unmanned aerial vehicle (UAV) images, and the like generally exhibit local smoothness, so their two-dimensional representation matrices usually have obvious low-rankness. Low-rank prior information performs excellently in image denoising [1], inpainting [2,3], reconstruction [4], deblurring [5], and other signal optimization fields. In existing low-rank-matrix-based image inpainting methods, the low-rank prior mainly takes the following forms: the low rank of the signal itself, such as the inherent low rank of the matrix, the similarity of local image patches [6], or the similarity between video frames [7]; a Hankel-like structured low-rank matrix constructed in the Fourier domain using the annihilating-filter relationship [2,4,8]; or a high-order tensor rank obtained under various tensor decomposition frameworks, such as CANDECOMP/PARAFAC (CP), Tucker [9,10], tensor train (TT) [11], and tensor singular value decomposition (t-SVD) [12,13].
Besides the low-rank prior, early image denoising methods assumed that the image has sparse representations in certain transform domains, such as the difference domain, the wavelet domain, etc. [14,15,16,17], and constrained this sparse prior to recover the image from noise. Due to the effectiveness of low rank and sparsity in constrained image optimization problems, image processing schemes combining sparsity constraints with low-rank priors have been continuously proposed [18,19,20]. Some image denoising methods use decompositions into low-rank and sparse components, such as the robust principal component analysis (RPCA) method [21,22], which aims to separate the low-rank image from the sparse interference. With the research and development of tensor decomposition tools in mathematics, such as t-SVD and TT decomposition [11,23,24], related image and video optimization applications based on low-rank tensors are also being developed [25,26,27,28,29].
In addition to using low-rank prior information to construct constraint models for impulse interference image inpainting, many theories, methods, and technologies from the signal processing field can be applied to image inpainting problems, such as the various matrix/tensor completion theories in mathematics [30,31,32], the finite rate of innovation (FRI) theory [33], image and video enhancement techniques (such as the Hankel-like-matrix-based technology [22]), and denoising schemes (such as the well-known BM3D image denoising technique [34] and non-local TV denoising techniques [14,15,35]). Various tensor-decomposition-based completion methods, convex optimization schemes, and fast optimization algorithms can also be used to improve image inpainting methods.
This paper uses the low-rank property of the image matrix to optimize image inpainting modeling and algorithms under three kinds of impulse interference. Image inpainting modeling schemes based on the nuclear norm, the truncated nuclear norm, the weighted nuclear norm, and the matrix-factorization-based F-norm are reviewed, and the corresponding iterative optimization algorithms, such as the TSVT_ADMM, WSVT_ADMM, and UV_ADMM algorithms, are given. The experimental results of the various inpainting methods are displayed visually and numerically, and a comparative analysis is given.
The structure of this paper is arranged as follows: Section 1 is the introduction; Section 2 presents the matrix low-rank constrained inpainting models and their solution algorithms; Section 3 presents the experimental comparison; the last section concludes.

2. The Matrix Low-Rank Constrained Inpainting Model and Its Solution Algorithm

Image inpainting models based on low-rank matrix are generally as follows:
$$\hat{X} = \arg\min_{X}\ \operatorname{rank}(\Phi X) \quad \text{s.t.} \quad \|\Theta_\Omega(X) - Y\|_F \le \varepsilon \tag{1}$$
where X represents the image to be recovered (a second-order array for a grayscale image; a third-order array for an RGB image, a grayscale video, etc.). ΘΩ represents the interference operator, with Ω the set of interfered pixel positions. X̂ represents the optimal solution. Y represents the interfered image. ε represents the error bound, generally set to a small constant such as 10⁻¹⁴. Φ represents a low-rank transformation: ΦX transforms X into a matrix or tensor with low rank, such as a low-rank matrix formed from similar local image patches, the low-rank structured matrix [1,2,4,22] built from the annihilating-filter relationship, or the low-rank matrix [7] built from inter-frame similarity. If videos or RGB images are treated as third- or higher-order tensors, the rank property may come from the tensor Tucker rank [36], the TT rank [26], etc. Under impulse noise interference, the operator ΘΩ generally has three representations.
The first is random-valued impulse noise (RVIN) [1]:
$$\Theta_\Omega(X)_{i,j} = \begin{cases} V_{i,j}, & (i,j)\in\Omega,\\ X_{i,j}, & (i,j)\notin\Omega, \end{cases} \qquad |\Omega| = p\, n_1 n_2$$
The values in V are random, within the range of X's pixel values, such as 0–255, or 0–1 after normalization. p is the interference rate, i.e., the percentage of interfered pixels among all pixels in the image.
The second is salt-and-pepper noise, a special case of RVIN [1]:
$$\Theta_\Omega(X)_{i,j} = \begin{cases} V_{\max}\ \text{or}\ V_{\min}, & (i,j)\in\Omega,\\ X_{i,j}, & (i,j)\notin\Omega, \end{cases}$$
where Vmax and Vmin are the maximum and minimum values of the salt-and-pepper noise, respectively.
In addition, random pixel loss is also a typical problem in the field of image inpainting [2,6,9], namely
$$\Theta_\Omega(X)_{i,j} = \begin{cases} 0, & (i,j)\in\Omega,\\ X_{i,j}, & (i,j)\notin\Omega. \end{cases}$$
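The three interference operators above can be sketched in NumPy as follows (a minimal sketch, assuming images normalized to [0, 1]; the function names are ours, not the paper's):

```python
import numpy as np

def rvin(X, p, rng):
    """Random-valued impulse noise: a fraction p of pixels is replaced by
    uniformly random values over the (normalized) dynamic range [0, 1]."""
    Y = X.copy()
    mask = rng.random(X.shape) < p               # interference set Omega
    Y[mask] = rng.random(np.count_nonzero(mask))
    return Y, mask

def salt_pepper(X, p, rng, v_min=0.0, v_max=1.0):
    """Salt-and-pepper noise: interfered pixels take only v_min or v_max."""
    Y = X.copy()
    mask = rng.random(X.shape) < p
    Y[mask] = rng.choice([v_min, v_max], size=np.count_nonzero(mask))
    return Y, mask

def pixel_missing(X, p, rng):
    """Random pixel loss: interfered pixels are zeroed out."""
    Y = X.copy()
    mask = rng.random(X.shape) < p
    Y[mask] = 0.0
    return Y, mask
```

Each function returns both the interfered image Y and the mask of Ω, since several of the solvers below assume the interference positions are known (as in the pixel-missing case).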
The low-rank property is essentially another form of sparsity. A sparse constraint on a matrix minimizes the l0 norm of its elements, while a low-rank constraint minimizes the l0 norm of its singular values, i.e., $\min_X \operatorname{rank}(\Phi X) \Leftrightarrow \min_X \|\Phi X\|_0$. Since $\min_X \|\Phi X\|_0$ is nonconvex, the lp-norm form $\min_X \|\Phi X\|_p$ is commonly used as a convex surrogate [37], where 0 ≤ p ≤ 1, $\|\Phi X\|_p = \sum_{i=1}^{n} \sigma_i^{\,p}$, and σi are the singular values of the matrix ΦX of size n1 × n2, with n = min(n1, n2). The special case p = 1 of the lp norm is the nuclear norm $\|\Phi X\|_* = \sum_{i=1}^{n} \sigma_i$. Whether the chosen low-rank surrogate approximates the l0 norm accurately has a significant impact on the inpainting quality. Let $\|\Phi X\|_p = \sum_{i=1}^{n} g_p(\sigma_i)$, where $g_p(\sigma_i) = \sigma_i^{\,p}$, 0 ≤ p ≤ 1. For the l0 and lp norms, the function gp(σi) is
$$g_p(\sigma_i) = \begin{cases} 0, & \sigma_i = 0,\ p = 0,\\ 1, & \sigma_i \neq 0,\ p = 0,\\ \sigma_i^{\,p}, & 0 < p \le 1. \end{cases}$$
Normalize σi within the range of 0-1, and plot the curves of function gp(σi) at p=0, 0.3, 0.5, 0.7, and 1. The visualization of convex approximation is shown in Figure 1. It can be seen that the smaller the p is, the closer the convex approximation function gp(σi) curve is to the l0 norm curve.
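For concreteness, this penalty family can be sketched as a small NumPy helper (ours, for illustration only); for a fixed σ in (0, 1), gp(σ) grows toward the l0 indicator value 1 as p shrinks toward 0, which is exactly the behavior Figure 1 visualizes:

```python
import numpy as np

def g_p(sigma, p):
    """Penalty on a singular value: the l0 indicator for p = 0,
    and sigma**p for 0 < p <= 1."""
    sigma = np.asarray(sigma, dtype=float)
    if p == 0:
        return (sigma != 0).astype(float)   # 1 if nonzero, else 0
    return sigma ** p
```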
As the simplest convex surrogate of the l0 norm, the nuclear norm is the most common choice in low-rank constraint modeling. To further improve the accuracy of the low-rank approximation, we can use the weighted l1 norm of the singular values of the matrix, i.e., the weighted nuclear norm [38,39,40,41], or the truncated nuclear norm [42,43,44,45], in place of the nuclear norm. Common regularization constraint schemes for low-rank matrices are summarized below.

2.1. Nuclear norm ‖X‖∗

We use the minimized nuclear norm as a low-rank constraint to establish an image inpainting model, as follows.
$$\hat{X} = \arg\min_X\ \|\Phi X\|_* \quad \text{s.t.} \quad \|\Theta_\Omega(X) - Y\|_F \le \varepsilon \tag{2}$$
Y is the impulse interference image of size n1 × n2. The regularization parameter λ is introduced to convert model (2) into an unconstrained form:
$$\hat{X} = \arg\min_X\ \lambda\|\Phi X\|_* + \tfrac{1}{2}\|\Theta_\Omega(X) - Y\|_F^2 \tag{3}$$
Three algorithms can solve problem (3). The most commonly used is the singular value thresholding (SVT) algorithm [46], which proceeds as follows.
First, perform the singular value decomposition of Y: U∑V^H = SVD(Y), ∑ = diag({σi}1≤i≤n), where diag(·) forms a diagonal matrix from its elements and n = min(n1, n2). Next, apply the soft-thresholding operation Dλ(σi) = max(0, σi − λ) to the singular values [47] and set ∑SVT = diag({Dλ(σi)}1≤i≤n). Finally, the solution is X̂ = U∑SVT V^H.
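The SVT step described above can be sketched in a few lines of NumPy (our own sketch, not the paper's code):

```python
import numpy as np

def svt(Y, lam):
    """Singular value thresholding D_lam(Y): soft-threshold the singular
    values of Y, leaving the singular vectors unchanged."""
    U, s, Vh = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - lam, 0.0)     # D_lam(sigma_i) = max(0, sigma_i - lam)
    return (U * s_shrunk) @ Vh              # U @ diag(s_shrunk) @ Vh
```

Thresholding at a value between two singular values zeroes all the smaller ones, which is how SVT lowers the rank of its input.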
Jain et al. proposed the singular value projection (SVP) algorithm for solving model (2) [48]. With the development of large-scale data processing and distributed computing, the alternating direction method of multipliers (ADMM) has become a mainstream optimization algorithm [49]. When using ADMM to solve (3), we first introduce the auxiliary variable Z = ΦX and the residual L to transform model (3) into multiple subproblems for iterative solution:
$$\begin{cases} Z^{k+1} = \arg\min_Z\ \lambda\|Z\|_* + \tfrac{\rho}{2}\|Z - \Phi X^{k} + L^{k}\|_F^2,\\[2pt] X^{k+1} = \arg\min_X\ \tfrac{1}{2}\|\Theta_\Omega(X) - Y\|_F^2 + \tfrac{\rho}{2}\|Z^{k+1} - \Phi X + L^{k}\|_F^2,\\[2pt] L^{k+1} = L^{k} + Z^{k+1} - \Phi X^{k+1}, \end{cases}$$
where ρ > 0 is the introduced penalty parameter, and the SVT method is used to solve the subproblem for Ẑ.
In this paper, we use the SVT, SVP, and ADMM algorithms [50,51] to solve the nuclear-norm-based image inpainting model, and refer to the resulting methods as the SVT, SVP, and n_ADMM methods, respectively. The details of the SVT, SVP, and n_ADMM algorithms are shown in Table 1, Table 2 and Table 3, respectively.
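As a minimal, self-contained sketch of the n_ADMM iteration (our own; it assumes Φ is the identity and that the interference positions are known, as in the pixel-missing case, so it may differ in detail from the algorithm in Table 3):

```python
import numpy as np

def n_admm(Y, mask, lam=0.5, rho=1.0, n_iter=300):
    """ADMM for min_X lam*||X||_* + 0.5*||mask*X - Y||_F^2 with Phi = identity.
    `mask` is True on observed (uncorrupted) pixels."""
    X = Y * mask
    L = np.zeros_like(Y)
    for _ in range(n_iter):
        # Z-subproblem: SVT of (X - L) with threshold lam / rho
        U, s, Vh = np.linalg.svd(X - L, full_matrices=False)
        Z = (U * np.maximum(s - lam / rho, 0.0)) @ Vh
        # X-subproblem: elementwise closed form of
        #   0.5*||mask*X - Y||_F^2 + 0.5*rho*||Z - X + L||_F^2
        X = np.where(mask, (Y + rho * (Z + L)) / (1.0 + rho), Z + L)
        # dual (residual) update
        L = L + Z - X
    return X
```

On a synthetic low-rank matrix with 30% of its entries removed, this recovers the missing entries far more accurately than the interfered input itself.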

2.2. Weighted nuclear norm ∑i fun(σi)

The weighted nuclear norm ∑i fun(σi) uses weighted singular-value constraints to approximate the l0 constraint on the singular values [38,39,40,41]. It is a balanced constraint scheme that shrinks large singular values less and small singular values more, and it can be more accurate than the nuclear norm (i.e., the l1 constraint on singular values). Here fun(·) is a weighting function of each singular value σi of the matrix ΦX, where [U, diag({σi}i=1:min(n1,n2)), V] = SVD(ΦX). We use the weighted nuclear norm as a low-rank constraint to establish an image inpainting model, as follows.
$$\hat{X} = \arg\min_X\ \sum_{i=1}^{n} \mathrm{fun}(\sigma_i) \quad \text{s.t.} \quad \|\Theta_\Omega(X) - Y\|_F \le \varepsilon \tag{4}$$
Then, we introduce the regularization parameter λ and convert model (4) into an unconstrained form:
$$\hat{X} = \arg\min_X\ \lambda \sum_{i=1}^{n} \mathrm{fun}(\sigma_i) + \tfrac{1}{2}\|\Theta_\Omega(X) - Y\|_F^2 \tag{5}$$
There are many kinds of weighting functions fun(·), and the p-norm (0 < p < 1) is the simplest weighting scheme, namely gp(σi) = fun(σi). Reference [39] reviewed various weighting functions that approximate the l0 norm of the singular values, such as SCAD [52], MCP [53], Logarithm [54], Geman [55], Laplace [56,57], etc., of which the Logarithm scheme is the most classic. In the experimental part of this paper, we choose the Logarithm scheme for comparison. Its weighting function is shown below:
$$\mathrm{fun}(\sigma_i) = \frac{\log(\gamma \sigma_i + 1)}{\log(\gamma + 1)} \tag{6}$$
where γ > 0 is a parameter that is determined based on experience.
The simplest and most direct solution for model (4) is the weighted SVT (WSVT) algorithm. Set the weights wi = fun(σi), i = 1, 2, …, n, and then X̂ = U∑WSVT V^H, where ∑WSVT = diag({Dλ(wiσi)}1≤i≤n).
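A weighted thresholding step can be sketched as follows (our own NumPy sketch; we use the common WNNM-style variant max(σi − λwi, 0) with the Logarithm-derived weight wi = 1/(σi + γ), which differs slightly in form from the Dλ(wiσi) operation above):

```python
import numpy as np

def wsvt(Y, lam, gamma=0.5):
    """Weighted SVT sketch: larger singular values get smaller weights
    w_i = 1/(sigma_i + gamma), so they are shrunk less than small ones."""
    U, s, Vh = np.linalg.svd(Y, full_matrices=False)
    w = 1.0 / (s + gamma)                       # weights from the current sigma_i
    s_shrunk = np.maximum(s - lam * w, 0.0)     # weighted soft-threshold
    return (U * s_shrunk) @ Vh
```

Because s − λ/(s + γ) is increasing in s, the ordering of the singular values is preserved while small ones are suppressed more aggressively than under plain SVT.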
We use ADMM to solve the weighted nuclear norm image inpainting problem (5). We introduce the auxiliary variable Z = ΦX and the residual L to transform model (5) into multiple subproblems for iterative solution:
$$\begin{cases} Z^{k+1} = \arg\min_Z\ \lambda \sum_{i=1}^{n} \mathrm{fun}(\sigma_i(Z)) + \tfrac{\rho}{2}\|Z - \Phi X^{k} + L^{k}\|_F^2,\\[2pt] X^{k+1} = \arg\min_X\ \tfrac{1}{2}\|\Theta_\Omega(X) - Y\|_F^2 + \tfrac{\rho}{2}\|Z^{k+1} - \Phi X + L^{k}\|_F^2,\\[2pt] L^{k+1} = L^{k} + Z^{k+1} - \Phi X^{k+1}, \end{cases} \tag{7}$$
where ρ > 0 is the introduced penalty parameter, and the WSVT algorithm is used to solve the subproblem for Ẑ. Combining the weighted SVT algorithm with the ADMM algorithm yields a more accurate iterative estimate. We use the ADMM algorithm to solve the weighted-nuclear-norm-based image inpainting model (5) via the iterations in (7), and name it the WSVT_ADMM method. The details of the WSVT_ADMM algorithm are shown in Table 4.

2.3. Truncated nuclear norm

In general, the singular value curve of a low-rank matrix exhibits a sharp, approximately exponential decay from large to small, and the smaller singular values approach 0. Therefore, nuclear norm minimization mainly constrains the large singular values. To fully utilize the small singular values, a truncated nuclear norm minimization scheme can be used, whose purpose is to constrain the minimization of the small singular values [42,43,44,45]. We use the truncated nuclear norm as a low-rank constraint to establish an image inpainting model, as follows.
$$\hat{X} = \arg\min_X\ \|\Phi X\|_* - \mathrm{T}_r\!\left(U^H (\Phi X) V\right) \quad \text{s.t.} \quad \|\Theta_\Omega(X) - Y\|_F \le \varepsilon \tag{8}$$
where T_r(·) is a truncation operation that extracts (sums) the first r largest diagonal elements of the diagonal matrix U^H(ΦX)V, so that ‖ΦX‖∗ − T_r(U^H(ΦX)V) zeroes out the first r larger singular values and retains the remaining n − r smaller ones [58,59]. We introduce a regularization parameter λ and convert (8) into an unconstrained form:
$$\hat{X} = \arg\min_X\ \lambda\left(\|\Phi X\|_* - \mathrm{T}_r\!\left(U^H (\Phi X) V\right)\right) + \tfrac{1}{2}\|\Theta_\Omega(X) - Y\|_F^2 \tag{9}$$
Here U and V are the matrices of the first r left and right singular vectors of ΦX, respectively. The essence of truncated nuclear norm minimization is to minimize the sum of the smaller singular values of the constrained low-rank matrix.
The truncated-nuclear-norm-based model can be solved by the APGL or ADMM algorithms. This paper combines the ADMM algorithm with the SVT algorithm to solve the truncated-nuclear-norm-based image inpainting model (9), abbreviated as the TSVT (truncated SVT) algorithm; the resulting method is the TSVT_ADMM method. The details of the TSVT algorithm used to solve model (9) are shown in Table 5.
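The two ways of writing the truncated nuclear norm — the sum of the n − r smallest singular values, and ‖X‖∗ − T_r(U^H X V) — can be checked to agree numerically (helper names ours, for illustration):

```python
import numpy as np

def truncated_nuclear_norm(X, r):
    """Sum of the (n - r) smallest singular values of X."""
    s = np.linalg.svd(X, compute_uv=False)
    return s[r:].sum()

def tnn_via_trace(X, r):
    """Same quantity as ||X||_* - Tr(U_r^H X V_r), with U_r, V_r the
    matrices of the first r left/right singular vectors."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    return s.sum() - np.trace(U[:, :r].conj().T @ X @ Vh[:r].conj().T)
```

For r = 0 both reduce to the plain nuclear norm, matching the observation that the truncated norm only differs by removing the contribution of the r largest singular values.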

2.4. The F norm of UV matrix factorization

Solving the nuclear norm minimization problem involves time-consuming singular value decompositions. Early on, Srebro [60] proposed and proved the property $\|X\|_* = \min_{UV^H = X} \tfrac{1}{2}\left(\|U\|_F^2 + \|V\|_F^2\right)$. Later, in many applications, the F-norm of a UV matrix factorization was used instead of the nuclear norm to reduce computation time [1,61,62,63,64]. We use the minimized F-norm of the UV matrix factorization as a low-rank constraint to establish an image inpainting model, as follows.
$$\hat{X} = \arg\min_{X,\,U,\,V}\ \tfrac{1}{2}\left(\|U\|_F^2 + \|V\|_F^2\right) \quad \text{s.t.} \quad \Phi X = UV^H,\ \ \|\Theta_\Omega(X) - Y\|_F \le \varepsilon \tag{10}$$
Then, we introduce the regularization parameter λ and penalty parameter ρ > 0 to convert model (10) into an unconstrained form:
$$\hat{X} = \arg\min_{X,\,U,\,V}\ \tfrac{\lambda}{2}\left(\|U\|_F^2 + \|V\|_F^2\right) + \tfrac{1}{2}\|\Theta_\Omega(X) - Y\|_F^2 + \tfrac{\rho}{2}\|UV^H - \Phi X + L\|_F^2 \tag{11}$$
where L is the residual variable. The initial values of U and V can be obtained by the LMaFit method [2,65]. Model (11) is commonly solved by the ADMM algorithm, and we name the result the UV_ADMM method. The details of the UV_ADMM algorithm are shown in Table 6.
Compared with the n_ADMM method, the UV_ADMM method based on the F-norm of the UV matrix factorization avoids the time-consuming SVD in each iteration, making it more suitable for low-rank-constrained modeling of large matrices. This method and the weighted nuclear norm method are among the most commonly used in low-rank matrix constrained models.
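Srebro's identity ‖X‖∗ = min over UV^H = X of ½(‖U‖F² + ‖V‖F²) can be verified numerically at the SVD-based minimizer U = U₀S^{1/2}, V = V₀S^{1/2} (a standalone sketch; variable names ours):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((30, 4)) @ rng.random((4, 25))    # a rank-4 test matrix

nuc = np.linalg.svd(X, compute_uv=False).sum()   # nuclear norm via SVD

# The minimum is attained at U = U0 S^(1/2), V = V0 S^(1/2) from X = U0 S V0^H.
U0, s, V0h = np.linalg.svd(X, full_matrices=False)
U = U0 * np.sqrt(s)                              # scale columns by sqrt(sigma_i)
V = V0h.T * np.sqrt(s)
assert np.allclose(U @ V.T, X)                   # a valid factorization of X
half_frob = 0.5 * (np.linalg.norm(U, 'fro')**2 + np.linalg.norm(V, 'fro')**2)
assert abs(half_frob - nuc) < 1e-8               # equals the nuclear norm
```

This is why fixing the factor width to an estimated rank and minimizing the two Frobenius norms can replace the SVD-based nuclear norm penalty.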
The above models and their solution algorithms are summarized in Table 7. In addition, other algorithms can solve the above models, for example, algorithms commonly used for sparsity-constrained models. A sparsity constraint on a signal minimizes the l0 norm of the signal elements, while a low-rank constraint minimizes the l0 norm of the singular values of the signal matrix. Therefore, optimization based on low-rank constraint models has much in common with optimization based on sparse constraint models. Iterative algorithms for sparse constraint models can thus be applied to matrix low-rank constraint models, such as convex relaxation algorithms, which find a sparse or low-rank approximation by iteratively transforming the nonconvex problem into a convex one. Among them, the CG algorithm, the IST algorithm [66], the split Bregman algorithm [67], and the MM (majorize-minimize) algorithm [58,68] can be adapted flexibly to different optimization models.

3. Comparative Experiments

In this section, we compare the above methods on satellite image inpainting problems. We simulated impulse interference on satellite images with an interference rate of 30%¹. The three kinds of impulse interference were: A. random impulse interference; B. salt-and-pepper impulse interference; C. random pixel missing. The satellite images in this paper are sourced from the public dataset DOTA v2.0 (https://captain-whu.github.io/DOTA/dataset.html), with images provided by the China Resources Satellite Data and Application Center, satellite GF-1, satellite GF-2, etc. The compared methods are SVT, SVP, n_ADMM, TSVT_ADMM, WSVT_ADMM, and UV_ADMM. For a fair comparison, each method is run with its optimal parameters to ensure its best performance.
The relative l2-norm error (RLNE) and the structural similarity (SSIM) [69] are used as image inpainting quality indicators. The RLNE is an index based on pixel-wise error, while the SSIM index is more consistent with human visual perception. Generally, the smaller the RLNE and the larger the SSIM, the better the inpainting quality. All simulations were carried out on Windows 10 and MATLAB R2019a running on a PC with an Intel Core i7 CPU at 2.8 GHz and 16 GB of memory.
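Assuming the standard definition of RLNE as the Frobenius norm of the error divided by that of the reference, the metric is a one-liner (our sketch; SSIM is more involved and is available in standard image libraries):

```python
import numpy as np

def rlne(x_hat, x_ref):
    """Relative l2-norm error: ||x_hat - x_ref||_F / ||x_ref||_F.
    Smaller is better; 0 means a perfect reconstruction."""
    return np.linalg.norm(x_hat - x_ref) / np.linalg.norm(x_ref)
```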
A gray satellite image and its singular value curve are shown in Figure 2a,b respectively. The singular values of the image descend rapidly from large to small, and most of them tend to zero, indicating that the image has low-rank characteristics. The three examples of impulse-interfered satellite images are shown in Figure 3, where the original image is that of Figure 2a. It can be seen that the 30% interference rate causes obvious information loss in the building shapes, layout, gray-value shading, and other features of the original image.
The comparison of the average values (RLNE, SSIM, and running time) of six image inpainting methods under the interference of random impulse, salt and pepper noise, and pixel missing is shown in Table 8. The visual comparison of the six image inpainting methods under the interference of salt and pepper noise is shown in Figure 4.
Based on the above visual and numerical comparison, we analyze the experimental results below.
The matrix rank constraint method based on the F-norm of the UV matrix factorization (i.e., the UV_ADMM method) is comparable in effectiveness to the nuclear-norm-based method (i.e., the n_ADMM method). Overall, the n_ADMM method is slightly better, improving the RLNE index by about 0.3% and the SSIM index by 0.3–1.
Because the nuclear-norm-based SVT, SVP, and n_ADMM methods, the weighted-nuclear-norm-based WSVT_ADMM method, and the truncated-nuclear-norm-based TSVT method all involve a time-consuming SVD in each iteration, the UV_ADMM method based on the F-norm of the UV matrix factorization has an absolute advantage in runtime. However, the UV_ADMM method does not achieve more accurate results than the other methods, because it requires an initial estimate of the rank (e.g., from the LMaFit method), and since that estimate is not highly accurate, the low-rank constraint is inexact. This UV-matrix-factorization-based method is therefore more commonly used for large-scale low-rank matrix calculations, where avoiding the per-iteration SVD greatly reduces the inpainting time.
Since the weighted and truncated nuclear norms provide a better convex approximation of the singular-value l0 norm, the WSVT_ADMM and TSVT methods are significantly more accurate in inpainting than the nuclear-norm-based methods (SVT, SVP, n_ADMM).

4. Conclusions

In machine vision applications, satellite images may suffer from three forms of impulse noise interference. In this paper, we use the low-rank characteristics of the image matrix to optimize image inpainting under these three kinds of impulse interference and provide the corresponding optimization algorithms. First, image inpainting modeling schemes based on the nuclear norm, the truncated nuclear norm, the weighted nuclear norm, and the matrix-factorization F-norm are reviewed. Then, the corresponding iterative optimization algorithms are given, such as the TSVT_ADMM, WSVT_ADMM, and UV_ADMM algorithms. Finally, the experimental results of the various matrix-rank-constraint-based methods are presented visually and numerically, together with a comparative analysis. The experimental results show that all the mentioned matrix-rank-constraint-based methods can repair the images to a certain extent and suppress a certain amount of interference noise. Among them, the methods based on the weighted and truncated nuclear norms achieve the best inpainting quality, while the method based on the matrix-factorization F-norm takes the shortest time and can be used for large-scale low-rank matrix calculation.

Author Contributions

Conceptualization, S.M.; methodology, S.M.; investigation, W.Y. and Z.L.; resources, S.M. and Z.L.; writing—original draft preparation, S.M. and F.C.; writing—review and editing, S.M., W.Y. and S.F.; supervision, L.L. and S.F.; funding acquisition, S.M. and L.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Key Laboratory of Science and Technology on Space Microwave, No. HTKJ2021KL504012; Supported by the Science and Technology Innovation Cultivation Fund of Space Engineering University, No. KJCX-2021-17; Supported by the Information Security Laboratory of National Defense Research and Experiment, No.2020XXAQ02.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Notes

1
The interference rate is the percentage of the number of interference pixels in the total number of image pixels.

References

  1. Kyong, H.; Jong, C. Sparse and Low-Rank Decomposition of a Hankel Structured Matrix for Impulse Noise Removal. IEEE Trans. Image Process. 2018, 27, 1448–1461. [Google Scholar] [CrossRef]
  2. Kyong, H.; Ye, J. Annihilating filter-based low-rank Hankel matrix approach for image inpainting. IEEE Trans. Image Process. 2015, 24, 3498–3511. [Google Scholar] [CrossRef]
  3. Balachandrasekaran, A.; Magnotta, V.; Jacob, M. Recovery of damped exponentials using structured low rank matrix completion. IEEE Trans. Med. Imaging 2017, 36, 2087–2098. [Google Scholar] [CrossRef] [PubMed]
  4. Haldar, J. Low-rank modeling of local-space neighborhoods (LORAKS) for constrained MRI. IEEE Trans. Med. Imaging 2014, 33, 668–680. [Google Scholar] [CrossRef] [PubMed]
  5. Ren, W.; Cao, X.; Pan, J.; et al. Image deblurring via enhanced low-rank prior. IEEE Trans. Image Process. 2016, 25, 3426–3437. [Google Scholar] [CrossRef]
  6. Xu, Z.; Sun, J. Image inpainting by patch propagation using patch sparsity. IEEE Trans. Image Process. 2010, 9, 1153–1165. [Google Scholar] [CrossRef]
  7. Zhao, B.; Haldar, J.; Christodoulou, A.; et al. Image reconstruction from highly undersampled (k, t)-space data with joint partial separability and sparsity constraints. IEEE Trans. Med. Imaging 2012, 31, 1809–1820. [Google Scholar] [CrossRef]
  8. Kyong, H.; Jong, C. Annihilating Filter-Based Low-Rank Hankel Matrix Approach for Image Inpainting. IEEE Trans. Image Process. 2018, 27, 1448–1461. [Google Scholar] [CrossRef]
  9. Long, Z.; Liu, Y.; Chen, L.; et al. Low rank tensor completion for multiway visual data. Signal Process. 2019, 155, 301–316. [Google Scholar] [CrossRef]
  10. Kolda, T.; Bader, B. Tensor decompositions and applications. SIAM Review 2009, 51, 455–500. [Google Scholar] [CrossRef]
  11. Oseledets, I. Tensor-train decomposition. SIAM J. Scien. Comput. 2011, 33, 2295–2317. [Google Scholar] [CrossRef]
  12. Shi, Q.; Cheung, M.; Lou, J. Robust Tensor SVD and Recovery With Rank Estimation. IEEE Trans. Cyber. 2022, 52, 10667–10682. [Google Scholar] [CrossRef] [PubMed]
  13. Wu, F.; Li, C.; Li, Y.; Tang, N. Robust low-rank tensor completion via new regularized model with approximate SVD. Inform. Sciences 2023, 629, 646–666. [Google Scholar] [CrossRef]
  14. Huang, J.; Yang, F. Compressed magnetic resonance imaging based on wavelet sparsity and nonlocal total variation. IEEE 9th International Symposium on Biomedical Imaging: From Nano to Macro, 2012, 5, 968–971. [Google Scholar] [CrossRef]
  15. Zhang, X.; Chan, T. Wavelet inpainting by nonlocal total variation. Inverse Problems and Imaging 2010, 4, 191–210. [Google Scholar] [CrossRef]
  16. Wang, W.; Chen, J. Adaptive rate image compressive sensing based on the hybrid sparsity estimation model. Digit. Signal Process. 2023, 139, 104079. [Google Scholar] [CrossRef]
  17. Ou, Y.; Li, B.; Swamy, M. Low-rank with sparsity constraints for image denoising. Information Sciences 2023, 637, 118931. [Google Scholar] [CrossRef]
  18. Lingala, S.; Hu, Y.; Dibella, E.; et al. Accelerated dynamic MRI exploiting sparsity and low-rank structure: k-t SLR. IEEE Trans. Med. Imaging 2011, 30, 1042–1054. [Google Scholar] [CrossRef] [PubMed]
  19. Zhang, Y.; Huang, L.; Li, Y.; Zhang, K.; Yin, C. Low-Rank and Sparse Matrix Recovery for Hyperspectral Image Reconstruction Using Bayesian Learning. Sensors 2022, 22, 343. [Google Scholar] [CrossRef] [PubMed]
  20. Zhao, X.; Li, M.; Nie, T.; Han, C.; Huang, L. An Innovative Approach for Removing Stripe Noise in Infrared Images. Sensors 2023, 23, 6786. [Google Scholar] [CrossRef]
  21. Tremoulheac, B.; Dikaios, N.; Atkinson, D.; et al. Dynamic MR image reconstruction-separation from undersampled (k, t)-space via low-rank plus sparse prior. IEEE Trans. Med. Imaging 2014, 33, 1689–1701. [Google Scholar] [CrossRef] [PubMed]
  22. Kyong, H.; Ye, J. Annihilating filter-based low-rank Hankel matrix approach for image inpainting. IEEE Trans. Image Process. 2015, 24, 3498–511. [Google Scholar] [CrossRef] [PubMed]
  23. Zhao, Q.; Zhou, G.; Xie, S.; et al. Tensor ring decomposition. 2016, arXiv:1606.05535. [Google Scholar] [CrossRef]
  24. Kilmer, M.; Braman, K.; Hao, N. Third-order tensors as operators on matrices: a theoretical and computational framework with applications in imaging. Siam J. Matrix Anal. A. 2013, 34, 148–172. [Google Scholar] [CrossRef]
  25. Zhang, Z.; Aeron, S. Exact tensor completion using t-SVD. IEEE Trans. Signal Process. 2017, 65, 1511–1526. [Google Scholar] [CrossRef]
  26. Bengua, J. Efficient tensor completion for color image and video recovery: Low-rank tensor train. IEEE Trans. Image Process. 2017, 26, 1057–7149. [Google Scholar] [CrossRef] [PubMed]
  27. Ma, S.; Du, H.; Mei, W. Dynamic MR image reconstruction from highly undersampled (k, t)-space data exploiting low tensor train rank and sparse prior. IEEE Access 2020, 8, 28690–28703. [Google Scholar] [CrossRef]
  28. Ma, S.; Ai, J.; Du, H.; Fang, L.; Mei, W. Recovering low-rank tensor from limited coefficients in any ortho-normal basis using tensor-singular value decomposition. IET Signal Process. 2021, 19, 162–181. [Google Scholar] [CrossRef]
  29. Tang, T.; Kuang, G. SAR Image Reconstruction of Vehicle Targets Based on Tensor Decomposition. Electronics 2022, 11, 2859. [Google Scholar] [CrossRef]
  30. Gross, D. Recovering low-rank matrices from few coefficients in any basis. IEEE Trans. Infor. Theory 2011, 57, 1548–1566. [Google Scholar] [CrossRef]
  31. Jain P.; Oh S. Provable tensor factorization with missing data. In: Proceedings of the 27th International Conference on Neural Information Processing Systems (NIPS) 2014, 1: 1431–1439.
  32. Zhang, Z.; Aeron, S. Exact tensor completion using t-SVD. IEEE Trans. Signal Process. 2017, 65, 1511–1526. [Google Scholar] [CrossRef]
  33. Vetterli, M.; Marziliano, P.; Blu, T. Sampling signals with finite rate of innovation. IEEE trans. Signal Process. 2002, 50, 1417–1428. [Google Scholar] [CrossRef]
  34. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095. [Google Scholar] [CrossRef] [PubMed]
  35. Lou, Y.; Zhang, X.; Osher, S.; Bertozzi, A. Image recovery via nonlocal operators. Journal of Scientific Computing 2010, 42, 185–197. [Google Scholar] [CrossRef]
  36. Filipović, M.; Jukić, A. Tucker factorization with missing data with application to low-n-rank tensor completion. Multidim. Syst. Sign. Process. 2015, 26, 677–692. [Google Scholar] [CrossRef]
  37. Wang, X.; Kong, L.; Wang, L.; Yang, Z. High-Dimensional Covariance Estimation via Constrained Lq-Type Regularization. Mathematics 2023, 11, 1022. [Google Scholar] [CrossRef]
  38. Candès, E.; Wakin, M.; Boyd, S. Enhancing sparsity by reweighted l1 minimization. J. Fourier Anal. Appl. 2008, 14, 877–905. [Google Scholar] [CrossRef]
  39. Lu, C.; Tang, J.; Yan, S.; et al. Nonconvex Nonsmooth Low-Rank Minimization via Iteratively Reweighted Nuclear Norm. IEEE Trans. Image Process. 2015, 25, 829–839. [Google Scholar] [CrossRef]
  40. Zhang, J.; Lu, J.; Wang, C.; Li, S. Hyperspectral and multispectral image fusion via superpixel-based weighted nuclear norm minimization. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–12. [Google Scholar] [CrossRef]
  41. Li, Z.; Yan, M.; Zeng, T.; Zhang, G. Phase retrieval from incomplete data via weighted nuclear norm minimization. Pattern Recognition. 2022, 125, 108537. [Google Scholar] [CrossRef]
  42. Cao, F.; Chen, J.; Ye, H.; et al. Recovering low-rank and sparse matrix based on the truncated nuclear norm. Neural Networks 2017, 85, 10–20. [Google Scholar] [CrossRef]
  43. Hu, Y.; Zhang, D.; Ye, J.; et al. Fast and accurate matrix completion via truncated nuclear norm regularization. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 2117–2130. [Google Scholar] [CrossRef] [PubMed]
  44. Fan, Q.; Liu, Y.; Yang, T.; Peng, H. Fast and accurate spectrum estimation via virtual coarray interpolation based on truncated nuclear norm regularization. IEEE Signal Process. Lett. 2022, 29, 169–173. [Google Scholar] [CrossRef]
  45. Yadav, S.; George, N. Fast direction-of-arrival estimation via coarray interpolation based on truncated nuclear norm regularization. IEEE Trans. Circuits Syst. II, Exp. Briefs 2021, 68, 1522–1526. [Google Scholar] [CrossRef]
  46. Cai, J.; Candès, E.; Shen, Z. A singular value thresholding algorithm for matrix completion. Siam J. Optimiz. 2010, 20, 1956–1982. [Google Scholar] [CrossRef]
  47. Xu, J.; Fu, Y.; Xiang, Y. An edge map-guided acceleration strategy for multi-scale weighted nuclear norm minimization-based image denoising. Digit. Signal Process. 2023, 134, 103932. [Google Scholar] [CrossRef]
  48. Jain, P.; Meka, R. Guaranteed rank minimization via singular value projection. arXiv 2009, arXiv:0909.5457. [CrossRef]
  49. Boyd, S. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2010, 3, 1–122. [Google Scholar] [CrossRef]
  50. Zhao, Q.; Lin, Y.; Wang, F. Adaptive weighting function for weighted nuclear norm based matrix/tensor completion. Int. J. Mach. Learn. Cyber. 2023. [Google Scholar] [CrossRef]
  51. Liu, X.; Hao, C.; Su, Z. Image inpainting algorithm based on tensor decomposition and weighted nuclear norm. Multimed Tools Appl. 2023, 82, 3433–3458. [Google Scholar] [CrossRef]
  52. Fan, J.; Li, R. Variable selection via nonconcave penalized likelihood and its oracle properties. J. Amer. Statist. Assoc. 2001, 96, 1348–1360. [Google Scholar] [CrossRef]
  53. Friedman, J. Fast sparse regression and classification. Int. J. Forecasting 2012, 28, 722–738. [Google Scholar] [CrossRef]
  54. Zhang, C.-H. Nearly unbiased variable selection under minimax concave penalty. Ann. Statist. 2010, 38, 894–942. [Google Scholar] [CrossRef]
  55. Geman, D.; Yang, C. Nonlinear image recovery with half-quadratic regularization. IEEE Trans. Image Process. 1995, 4, 932–946. [Google Scholar] [CrossRef] [PubMed]
  56. Trzasko, J.; Manduca, A. Highly undersampled magnetic resonance image reconstruction via homotopic ℓ0-minimization. IEEE Trans. Med. Imag. 2009, 28, 106–121. [Google Scholar] [CrossRef] [PubMed]
  57. Liu, Q. A truncated nuclear norm and graph-Laplacian regularized low-rank representation method for tumor clustering and gene selection. BMC Bioinformatics. 2021, 22, 436. [Google Scholar] [CrossRef]
  58. Zhang, Q.; Li, X.; Mao, H.; Huang, Z.; Xiao, Y.; Chen, W.; Xian, J.; Bi, Y. Improved sparse low-rank model via periodic overlapping group shrinkage and truncated nuclear norm for rolling bearing fault diagnosis. Meas. Sci. Technol. 2023, 34. [Google Scholar] [CrossRef]
  59. Ran, J.; Bian, J.; Chen, G.; Zhang, Y.; Liu, W. A truncated nuclear norm regularization model for signal extraction from GNSS coordinate time series. Adv. Space Res. 2022, 70, 336–349. [Google Scholar] [CrossRef]
  60. Signoretto, M.; Cevher, V.; Suykens, J. An SVD-free approach to a class of structured low rank matrix optimization problems with application to system identification. In IEEE Conference on Decision and Control, EPFL-CONF-184990, 2013.
  61. Srebro, N. Learning with Matrix Factorizations. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2004.
  62. Recht, B.; Fazel, M.; Parrilo, P. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Rev. 2010, 52, 471–501. [Google Scholar] [CrossRef]
  63. Ma, S.; Du, H.; Mei, W. A two-step low rank matrices approach for constrained MR image reconstruction. Magn. Reson. Imaging 2019, 60. [Google Scholar] [CrossRef]
  64. Yang, G.; Zhang, L.; Wan, M. Exponential Graph Regularized Non-Negative Low-Rank Factorization for Robust Latent Representation. Mathematics 2022, 10, 4314. [Google Scholar] [CrossRef]
  65. Wen, Z.; Yin, W.; Zhang, Y. Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm. Math. Program. Comput. 2012, 4, 333–361. [Google Scholar] [CrossRef]
  66. Daubechies, I.; Defrise, M.; Mol, C. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 2004, 57, 1413–1457. [Google Scholar] [CrossRef]
  67. Goldstein, T.; Osher, S. The split Bregman method for L1-regularized problems. SIAM J. Imaging Sci. 2009, 2, 323–343. [Google Scholar] [CrossRef]
  68. Jacobson, M.; Fessler, J. An expanded theoretical treatment of iteration-dependent majorize-minimize algorithms. IEEE Trans. Image Process. 2007, 16, 2411–2422. [Google Scholar] [CrossRef] [PubMed]
  69. Wang, Z.; Bovik, A.; Sheikh, H.; et al. Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The curves of gp(σi), where p=0, 0.3, 0.5, 0.7 and 1.
Figure 2. A grayscale satellite image and its singular value curve.
Figure 3. The interfered satellite image (interference rate 30%). (a) Random impulse interference pattern; (b) Salt-and-pepper impulse interference pattern; (c) Random pixel missing pattern.
Figure 4. Visual comparison of six image inpainting methods under salt-and-pepper noise interference. (a) Original image; (b) Interference image; (c) SVT; (d) SVP; (e) n_ADMM; (f) TSVT_ADMM; (g) WSVT_ADMM; (h) UV_ADMM.
Table 1. The SVT algorithm for solving model (3).
Input: Y, ΘΩ, ρ, λ, the maximum number of iterations tmax, and convergence tolerance ηtol = 1e−6.
Initialization: (m, n) = size(Y), X(0) = zeros(m, n), Yd(0) = Y, t = 1.
While t < tmax and η(t) > ηtol do
  [U, S, V] = SVD(Yd(t−1)).
  δ0 = diag(S), τ = 1/ρ, δ = 1 − min(τ/δ0, 1), δ0 = δ0δ.
  Update X(t) = |U ∗ diag(δ0) ∗ VH|.
  Update Yd(t) = |Yd(t−1) + λ(Y − X(t))|.
  Update η(t) = ‖X(t) − X(t−1)‖F / ‖X(t)‖F.
t = t + 1.
End while
X = X(t), ΘΩX = ΘΩY.
Output: X.
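As a concrete reference, the SVT iteration of Table 1 can be sketched in NumPy. This is a minimal reading of the pseudocode, not the authors' implementation: the threshold `tau`, step size `step`, and the "kicking" initialization of the dual variable follow the standard SVT algorithm of [46], the dual ascent is restricted to the observed set ΘΩ, and all default values are illustrative.

```python
import numpy as np

def svt_inpaint(Y, mask, tau=None, step=None, t_max=300, eta_tol=1e-6):
    """SVT sketch for inpainting: Y is the observed image, mask is True
    at trusted (uncorrupted) pixels."""
    m, n = Y.shape
    if tau is None:
        tau = 5 * np.sqrt(m * n)              # illustrative default from [46]
    if step is None:
        step = 1.2 / mask.mean()              # step scaled by the sampling ratio
    # "kicking" start so the first iterate is nonzero
    s_max = np.linalg.norm(mask * Y, 2)
    k0 = int(np.ceil(tau / (step * s_max)))
    Yd = k0 * step * mask * Y                 # dual variable
    X = np.zeros_like(Y, dtype=float)
    for _ in range(t_max):
        U, s, Vh = np.linalg.svd(Yd, full_matrices=False)
        s_shr = np.maximum(s - tau, 0.0)      # soft-threshold the singular values
        X_new = (U * s_shr) @ Vh
        Yd = Yd + step * mask * (Y - X_new)   # dual ascent on observed entries
        eta = np.linalg.norm(X_new - X) / max(np.linalg.norm(X_new), 1e-12)
        X = X_new
        if eta < eta_tol:
            break
    X[mask] = Y[mask]                         # keep trusted pixels unchanged
    return X
```

On a synthetic low-rank matrix with 30% of the pixels missing, this sketch restores the missing entries while leaving observed pixels untouched.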
Table 2. The SVP algorithm for solving model (2).
Input: Y, ΘΩ, τ = 0.01, the rank r, the maximum number of iterations tmax, and convergence tolerance ηtol = 1e−6.
Initialization: (m, n) = size(Y), X(0) = zeros(m, n), Yd(0) = Y, t = 1.
While t < tmax and η(t) > ηtol do
  Yd(t) = X(t−1) − τ(X(t−1) − Y).
  ΘΩYd(t) = ΘΩY.
  [U, S, V] = SVD(Yd(t)).
  δ0 = diag(S), δ0 = δ0(1:r,1).
  Update X(t) = |U(:,1:r) ∗ diag(δ0) ∗ V(:,1:r)H|.
  Update η(t) = ‖X(t) − X(t−1)‖F / ‖X(t)‖F.
 ΘΩX(t) = ΘΩY, t = t + 1.
End while
X = X(t).
Output: X.
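The SVP iteration of Table 2 is projected gradient descent onto the set of rank-r matrices. A minimal NumPy sketch follows; the default step size 1/p (p = sampling ratio) is the choice used for matrix completion in [48] rather than the fixed τ = 0.01 of the table, and is an illustrative assumption.

```python
import numpy as np

def svp_inpaint(Y, mask, r, step=None, t_max=300, eta_tol=1e-6):
    """SVP sketch: gradient step on the observed residual, then projection
    onto the best rank-r approximation via truncated SVD."""
    if step is None:
        step = 1.0 / mask.mean()              # step ~ 1/(sampling ratio), cf. [48]
    X = np.zeros_like(Y, dtype=float)
    for _ in range(t_max):
        G = X - step * mask * (X - Y)         # gradient step on observed entries
        U, s, Vh = np.linalg.svd(G, full_matrices=False)
        X_new = (U[:, :r] * s[:r]) @ Vh[:r, :]  # keep the r largest singular values
        eta = np.linalg.norm(X_new - X) / max(np.linalg.norm(X_new), 1e-12)
        X = X_new
        if eta < eta_tol:
            break
    X[mask] = Y[mask]                         # keep trusted pixels unchanged
    return X
```

Because the target rank r is supplied explicitly, SVP typically needs far fewer singular values to be retained per iteration than the thresholding methods.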
Table 3. The n_ADMM algorithm for solving model (3).
Input: Y, ΘΩ, ρ, λ, the maximum number of iterations tmax, and convergence tolerance ηtol = 1e−6.
Initialization: (m, n) = size(Y), X(0) = zeros(m, n), Yd(0) = Y, Z(0) = zeros(m, n), L(0) = zeros(m, n), t = 1.
While t < tmax and η(t) > ηtol do
  Update X(t) = (Yd(t−1) + λρ(Z(t−1) − L(t−1))) ./ (ΘΩ + λρ).
  [U, S, V] = SVD(X(t) + L(t−1)).
  δ0 = diag(S) − 1/ρ, δ0(find(δ0 < 0)) = 0.
  Update Z(t) = U ∗ diag(δ0) ∗ VH.
  Update L(t) = L(t−1) + X(t) − Z(t).
  Update η(t) = ‖X(t) − X(t−1)‖F / ‖X(t)‖F.
t = t + 1.
End while
ΘΩX(t) = ΘΩY, X = |X(t)|.
Output: X.
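The n_ADMM scheme of Table 3 alternates an elementwise X-step (data fidelity on ΘΩ plus the splitting term), a singular value soft-thresholding Z-step, and a multiplier update. A minimal NumPy sketch, with illustrative defaults for ρ and λ:

```python
import numpy as np

def nuc_admm_inpaint(Y, mask, rho=1.0, lam=0.1, t_max=300, eta_tol=1e-6):
    """ADMM sketch for min ||Z||_* + (1/(2*lam))||mask*(X - Y)||_F^2, X = Z."""
    M = mask.astype(float)
    X = np.zeros_like(Y, dtype=float)
    Z = np.zeros_like(X)
    L = np.zeros_like(X)
    for _ in range(t_max):
        # X-step: closed-form elementwise least squares
        X_new = (M * Y + lam * rho * (Z - L)) / (M + lam * rho)
        # Z-step: singular value soft thresholding of X + L with threshold 1/rho
        U, s, Vh = np.linalg.svd(X_new + L, full_matrices=False)
        Z = (U * np.maximum(s - 1.0 / rho, 0.0)) @ Vh
        L = L + X_new - Z                     # scaled dual (multiplier) update
        eta = np.linalg.norm(X_new - X) / max(np.linalg.norm(X_new), 1e-12)
        X = X_new
        if eta < eta_tol:
            break
    X[mask] = Y[mask]                         # restore trusted pixels
    return X
```

Note that smaller λ weights the observed pixels more heavily, trading shrinkage bias against fit.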
Table 4. The WSVT_ADMM algorithm for solving model (7).
Input: Y, ΘΩ, ρ, θ, γ, the maximum number of iterations tmax, decay factor ς = 0.9, and convergence tolerance ηtol = 1e−6.
Initialization: (m, n) = size(Y), X(0) = zeros(m, n), L(0) = zeros(m, n), Yd(0) = Y, λ = ς ∗ max(|Y(:)|), t = 1.
While t < tmax and η(t) > ηtol do
  [U, S, V] = SVD(Yd(t−1)).
  δ0 = diag(S), w = fun(δ0, γ, λ), δ0 = δ0 − w/ρ, δ0(find(δ0 < 0)) = 0.
  Update X(t) = U ∗ diag(δ0) ∗ VH.
  Update Yd(t) = |Yd(t−1) + θ(Y − X(t))|.
  Update η(t) = ‖X(t) − X(t−1)‖F / ‖X(t)‖F.
  λ = ςλ, ΘΩX(t) = ΘΩY, t = t + 1.
End while
X = X(t).
Output: X.
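Table 4 leaves the weight function fun(δ0, γ, λ) unspecified. The sketch below assumes the common reweighting w_i = λ/(σ_i + γ), so dominant singular values receive smaller thresholds, and reads the Yd update as re-imposing the observed pixels on the current low-rank estimate (a soft-impute-style reading). Both choices are assumptions for illustration, not the authors' exact scheme.

```python
import numpy as np

def wsvt_inpaint(Y, mask, rho=1.0, gamma=1e-2, decay=0.9, t_max=200, eta_tol=1e-6):
    """Weighted SVT sketch: threshold sigma_i by w_i/rho with the assumed
    weights w_i = lam/(sigma_i + gamma); lam decays each sweep (Table 4)."""
    X = np.zeros_like(Y, dtype=float)
    Yd = (mask * Y).astype(float)         # start from the observed pixels
    lam = decay * np.abs(Y).max()
    for _ in range(t_max):
        U, s, Vh = np.linalg.svd(Yd, full_matrices=False)
        w = lam / (s + gamma)             # assumed weight function
        s_shr = np.maximum(s - w / rho, 0.0)
        X_new = (U * s_shr) @ Vh
        Yd = X_new.copy()
        Yd[mask] = Y[mask]                # re-impose the trusted pixels
        eta = np.linalg.norm(X_new - X) / max(np.linalg.norm(X_new), 1e-12)
        X = X_new
        lam *= decay                      # continuation: thresholds shrink to zero
        if eta < eta_tol:
            break
    X[mask] = Y[mask]
    return X
```

The decaying λ acts as a continuation scheme: aggressive shrinkage early on suppresses noise components, while later sweeps barely perturb the dominant structure.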
Table 5. The TSVT_ADMM algorithm for solving model (9).
Input: Y, ΘΩ, ρ, λ, the maximum number of iterations tmax, the truncated rank r, and convergence tolerance ηtol = 1e−6.
Initialization: (m, n) = size(Y), X(0) = zeros(m, n), Yd(0) = Y, Z(0) = zeros(m, n), L(0) = zeros(m, n), t = 1.
While t < tmax and η(t) > ηtol do
  τ = 1/ρ, T = Z(t−1) − τ ∗ L(t−1), [U, S, V] = SVD(T).
  δ0 = diag(S), δ = 1 − min(τ/δ0, 1), δ0 = δ0δ.
  Update X(t) = U ∗ diag(δ0) ∗ VH.
  [Uz, Sz, Vz] = SVD(Z(t−1)), B = Uz(:,1:r) ∗ Vz(:,1:r)H.
  Update Z(t) = X(t) + τ ∗ (B + L(t−1)), ΘΩZ(t) = ΘΩY.
  Update L(t) = L(t−1) + ρ(X(t) − Z(t)).
  Update η(t) = ‖X(t) − X(t−1)‖F / ‖X(t)‖F.
t = t + 1.
End while
ΘΩX(t) = ΘΩY, X = X(t).
Output: X.
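Table 5 follows the TNNR-ADMM style of [43]: B = U_r V_r^H is a subgradient of the sum of the r largest singular values, and observed pixels are re-imposed in the Z-step. A minimal NumPy sketch of this reading, with illustrative defaults and B recomputed each sweep:

```python
import numpy as np

def tsvt_inpaint(Y, mask, r, rho=1.0, t_max=200, eta_tol=1e-6):
    """Truncated-nuclear-norm inpainting sketch (TNNR-ADMM style [43]):
    minimize ||X||_* minus the sum of its r largest singular values."""
    X = np.zeros_like(Y, dtype=float)
    Z = (mask * Y).astype(float)
    L = np.zeros_like(X)
    for _ in range(t_max):
        # X-step: SVT of Z - L/rho with threshold 1/rho
        U, s, Vh = np.linalg.svd(Z - L / rho, full_matrices=False)
        X_new = (U * np.maximum(s - 1.0 / rho, 0.0)) @ Vh
        # subgradient of the truncated part from the current Z
        Uz, sz, Vzh = np.linalg.svd(Z, full_matrices=False)
        B = Uz[:, :r] @ Vzh[:r, :]
        # Z-step, then re-impose the observed pixels
        Z = X_new + (B + L) / rho
        Z[mask] = Y[mask]
        L = L + rho * (X_new - Z)             # multiplier update
        eta = np.linalg.norm(X_new - X) / max(np.linalg.norm(X_new), 1e-12)
        X = X_new
        if eta < eta_tol:
            break
    X[mask] = Y[mask]
    return X
```

Excluding the r leading singular values from the penalty is what lets TSVT_ADMM avoid the shrinkage bias of the plain nuclear norm on the dominant image structure.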
Table 6. The UV_ADMM algorithm for solving model (11).
Input: Y, ΘΩ, ρ, λ, the rank r, the maximum number of iterations tmax, and convergence tolerance ηtol.
Initialization: U(0) and V(0) by the LMaFit method [65], (m, n) = size(Y), X(0) = zeros(m, n), Yd(0) = Y, L(0) = zeros(m, n), t = 1.
While t < tmax and η(t) > ηtol do
  Update X(t) = (Y + λρ ∗ (U(t−1)V(t−1)H − L(t−1))) ./ (ΘΩ + λρ).
  Update U(t) = ρ ∗ (X(t) + L(t−1)) ∗ V(t−1) ∗ inv(eye(r) + ρ ∗ V(t−1)HV(t−1)).
  Update V(t) = ρ ∗ (X(t) + L(t−1))H ∗ U(t) ∗ inv(eye(r) + ρ ∗ U(t)HU(t)).
  Update L(t) = L(t−1) + X(t) − U(t)V(t)H.
  Update η(t) = ‖X(t) − X(t−1)‖F / ‖X(t)‖F.
t = t + 1.
End while
ΘΩX(t) = ΘΩY, X = X(t).
Output: X.
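The UV_ADMM scheme of Table 6 avoids per-iteration SVDs by factoring X ≈ UV^H and penalizing the factor F-norms. The sketch below replaces the LMaFit initialization with a simple truncated-SVD (spectral) initialization so that it is self-contained; that substitution, and all defaults, are assumptions for illustration.

```python
import numpy as np

def uv_admm_inpaint(Y, mask, r, rho=1.0, lam=0.1, t_max=300, eta_tol=1e-6):
    """Matrix-factorization F-norm inpainting sketch: after initialization,
    each sweep needs only two r x r inverses instead of an SVD."""
    M = mask.astype(float)
    # spectral init in place of LMaFit: truncated SVD of the rescaled masked image
    U0, s0, V0h = np.linalg.svd((M * Y) / max(M.mean(), 1e-12), full_matrices=False)
    U = U0[:, :r] * np.sqrt(s0[:r])
    V = V0h[:r, :].T * np.sqrt(s0[:r])
    X = np.zeros_like(Y, dtype=float)
    L = np.zeros_like(X)
    I = np.eye(r)
    for _ in range(t_max):
        X_new = (M * Y + lam * rho * (U @ V.T - L)) / (M + lam * rho)
        # ridge-regularized least-squares updates for the two factors
        U = rho * (X_new + L) @ V @ np.linalg.inv(I + rho * (V.T @ V))
        V = rho * (X_new + L).T @ U @ np.linalg.inv(I + rho * (U.T @ U))
        L = L + X_new - U @ V.T               # multiplier update
        eta = np.linalg.norm(X_new - X) / max(np.linalg.norm(X_new), 1e-12)
        X = X_new
        if eta < eta_tol:
            break
    X[mask] = Y[mask]
    return X
```

Since the factor updates cost O(mnr + r^3) rather than a full SVD, this variant scales best to large matrices, matching the timing results in Table 8.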
Table 7. Modeling and solving algorithm based on the low-rank constraint in this paper.
Constraint model:      Nuclear norm | Nuclear norm | Nuclear norm | Truncated nuclear norm | Weighted nuclear norm | Matrix-factorization F-norm
Solution algorithm:    SVT          | SVP          | ADMM         | ADMM                   | ADMM                  | ADMM
Method abbreviation:   SVT          | SVP          | n_ADMM       | TSVT_ADMM              | WSVT_ADMM             | UV_ADMM
Table 8. Numerical comparison of six image inpainting methods under three types of interference.
Noise form             Index              Untreated   SVT       SVP      n_ADMM   TSVT_ADMM   WSVT_ADMM   UV_ADMM
Random impulse         RLNE (%)           45.88       18.36     9.83     19.70    8.43        8.19        20.01
                       SSIM (%)           34.77       77.50     92.19    74.96    94.25       94.49       74.30
                       Running time (s)   /           11.7357   0.5484   1.3883   1.0401      2.0523      0.3375
Salt-and-pepper noise  RLNE (%)           69.72       19.96     9.76     19.73    8.41        8.22        20.29
                       SSIM (%)           18.63       74.18     92.25    74.79    94.21       94.39       73.80
                       Running time (s)   /           5.34      0.4793   1.1451   1.2996      2.216       0.1993
Pixel missing          RLNE (%)           54.77       12.96     9.84     8.73     8.43        8.19        8.9
                       SSIM (%)           32.46       87.91     92.25    94.02    94.25       94.49       93.69
                       Running time (s)   /           9.9569    0.5418   0.5194   0.9842      2.3475      0.2543
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.