1. Introduction
In machine vision applications, images often suffer from impulse interference due to various factors, such as pulse noise caused by detector pixel failure in the camera or the loss of storage elements during imaging [1]. Satellite images, unmanned aerial vehicle (UAV) images, etc., generally exhibit local smoothness, so their two-dimensional representation matrices usually have obvious low rank. Low-rank prior information performs excellently in image denoising [1], inpainting [2,3], reconstruction [4], deblurring [5], and other signal optimization fields. In existing low-rank-matrix-based image inpainting methods, the low-rank prior information mainly falls into the following categories: the low rank of the signal itself, such as the inherent low rank of the matrix, the similarity of local image blocks [6], the similarity between video frames [7], etc.; Hankel-like structured low-rank matrices, which can be constructed in the Fourier domain using the annihilating-filter relationship [2,4,8]; and high-order tensor ranks obtained under various tensor decomposition frameworks, such as CANDECOMP/PARAFAC (CP), Tucker [9,10], tensor train (TT) [11], and tensor singular value decomposition (t-SVD) [12,13].
Besides the low-rank prior, early image denoising methods assumed that the image has sparse representations in certain transform domains, such as the difference domain and the wavelet domain [14,15,16,17], and then used the sparse prior as a constraint to recover the image from the noise. Due to the effectiveness of low-rank and sparse constraints in image optimization problems, image processing schemes that combine sparsity with low-rank prior information have been continuously proposed [18,19,20]. Some image denoising methods use decomposition models with low-rank and sparse components, such as the robust principal component analysis (RPCA) method [21,22], which aims to separate a low-rank image from a sparse interference image. With the development of tensor decomposition tools in mathematics, such as t-SVD and TT decomposition [11,23,24], related image and video optimization applications based on low-rank tensors are also being developed [25,26,27,28,29].
In addition to using low-rank prior information to construct constrained models for impulse-interfered image inpainting, many theories, methods, and technologies from the signal processing field can be applied to image inpainting problems, such as matrix/tensor completion theories [30,31,32], finite rate of innovation (FRI) theory [33], image and video enhancement techniques (such as Hankel-like-matrix-based technology [22]), and denoising schemes (such as the well-known BM3D image denoising technique [34] and non-local TV denoising techniques [14,15,35]). Various tensor-decomposition-based completion methods, convex optimization schemes, and fast optimization algorithms can also be used to improve image inpainting methods.
This paper uses the low-rank property of the image matrix to optimize image inpainting modeling and algorithms under three kinds of impulse interference. Image inpainting modeling schemes based on the nuclear norm, truncated nuclear norm, weighted nuclear norm, and matrix-factorization-based F norm are reviewed, and the corresponding iterative optimization algorithms, such as the TSVT_ADMM algorithm, WSVT_ADMM algorithm, and UV_ADMM algorithm, are given. The experimental results of the various inpainting methods are displayed visually and numerically, and a comparative analysis is given.
The structure and content of this paper are arranged as follows:
Section 1, introduction;
Section 2, the matrix low-rank constrained inpainting model and its solution algorithms;
Section 3, experimental comparison;
Section 4, conclusions.
2. The Matrix Low-Rank Constrained Inpainting Model and Its Solution Algorithm
Image inpainting models based on a low-rank matrix generally take the following form.
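In a generic sketch (consistent with the notation defined below; individual works differ in the exact constraint and transform), such a model can be written as

$$\hat{X} = \arg\min_{X}\ \operatorname{rank}(\Phi X) \quad \text{s.t.} \quad \|\Theta_{\Omega}X - Y\|_F^2 \le \varepsilon,$$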
where X represents the image to be recovered (a matrix X ∈ R^{n1×n2} for a grayscale image, or a third-order array for an RGB image, a grayscale video, etc.). ΘΩ represents the interference operator, where Ω is the set of interfered pixel positions. X̂ represents the optimal solution. Y represents the interfered image. ε represents the error tolerance, generally set as a small constant, such as 10^{-14}. Φ represents the low-rank transformation operation: ΦX transforms X into a matrix or tensor with low rank, such as a low-rank matrix formed from similar blocks of local image patches, the low-rank structured matrix [1,2,4,22] constructed via the annihilating-filter relationship, or the low-rank matrix [7] formed from the similarity between frames. If videos or RGB images are treated as third-order or higher-order tensors, the low-rank property may come from the tensor Tucker rank [36], the TT rank [26], etc. Under impulse noise interference, the operator ΘΩ generally has three representations.
The first is random-valued impulse noise (RVIN) [1].
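In sketch form (V denotes the noise-value matrix; the exact formulation in the cited works may differ slightly), this operator can be written as

$$(\Theta_{\Omega}X)_{ij} = \begin{cases} V_{ij}, & (i,j)\in\Omega,\\ X_{ij}, & (i,j)\notin\Omega, \end{cases} \qquad |\Omega| = p\, n_1 n_2.$$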
The values in V are random, within the range of X's pixel values, such as 0~255 or the normalized range 0~1. p is the interference rate, that is, the percentage of interfered pixels among the total number of pixels in the image.
The second is salt-and-pepper noise, a special case of RVIN [1].
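In sketch form, the noise values are restricted to the two extremes:

$$(\Theta_{\Omega}X)_{ij} = \begin{cases} V_{\max}\ \text{or}\ V_{\min}, & (i,j)\in\Omega,\\ X_{ij}, & (i,j)\notin\Omega, \end{cases}$$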
where Vmax is the maximum value of the salt-and-pepper noise and Vmin is the minimum value of the salt-and-pepper noise.
In addition, random pixel loss is also a typical problem in the field of image inpainting [2,6,9].
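In sketch form, the missing-pixel operator zeroes the entries indexed by Ω (assuming lost pixels are recorded as zero):

$$(\Theta_{\Omega}X)_{ij} = \begin{cases} 0, & (i,j)\in\Omega,\\ X_{ij}, & (i,j)\notin\Omega. \end{cases}$$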
The low-rank property is essentially another form of sparsity. Sparse constraints on a matrix minimize the l0 norm of the matrix elements, while low-rank constraints minimize the l0 norm of the singular values of the constrained matrix. That is, the low-rank constraint on a matrix is the l0-norm constraint on its singular values, rank(ΦX) = ‖σ(ΦX)‖0. Since the l0 norm is nonconvex, the lp-norm form ‖σ(ΦX)‖p^p = Σ_{i=1}^{n} σi^p is commonly used as a surrogate [37], where 0 ≤ p ≤ 1 and σi are the singular values of the matrix ΦX of size n1 × n2, with n = min(n1, n2). The special case p = 1 of the lp norm is the nuclear norm ‖ΦX‖∗ = Σ_{i=1}^{n} σi. Whether the adopted low-rank constraint form approximates the l0 norm accurately has a significant impact on the inpainting effect. Let ‖σ(ΦX)‖p^p = Σ_{i=1}^{n} gp(σi), where gp(σi) = σi^p, 0 ≤ p ≤ 1; for the l0 norm, g0(σi) equals 1 if σi ≠ 0 and 0 otherwise, while for the lp norm, gp(σi) = σi^p. Normalizing σi to the range 0-1 and plotting the curves of gp(σi) at p = 0, 0.3, 0.5, 0.7, and 1 visualizes this approximation, as shown in Figure 1. It can be seen that the smaller p is, the closer the approximation function gp(σi) is to the l0-norm curve.
As the simplest convex substitute for the l0 norm, the nuclear norm is the most common choice in low-rank constraint modeling. To further improve the accuracy of the low-rank approximation, we can use the weighted l1 norm of the singular values of the matrix, that is, the weighted nuclear norm [38,39,40,41], or use the truncated nuclear norm [42,43,44,45] to replace the nuclear norm. Common regularization constraint schemes for low-rank matrices are summarized as follows.
2.1. Nuclear norm ‖X‖∗
We use the minimized nuclear norm as a low-rank constraint to establish an image inpainting model, model (2), where Y is the impulse-interfered image of size n1 × n2. The regularization parameter λ is introduced to convert model (2) into an unconstrained form.
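In sketch form (using the notation above), the constrained model (2) and its unconstrained counterpart (3) can be written as

$$\min_{X}\ \|\Phi X\|_{*}\quad \text{s.t.}\quad \|\Theta_{\Omega}X - Y\|_F^2 \le \varepsilon, \qquad\qquad \min_{X}\ \lambda\|\Phi X\|_{*} + \tfrac{1}{2}\|\Theta_{\Omega}X - Y\|_F^2.$$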
Three kinds of algorithms can solve equation (3). The most commonly used is the singular value shrinkage/thresholding (SVT) algorithm [46].
The SVT algorithm is shown below.
First, perform the singular value decomposition of Y: UΣVH = SVD(Y), Σ = diag({σi}1≤i≤n), where diag(·) forms a diagonal matrix from its elements and n = min(n1, n2). Then, the soft-threshold operation Dλ(σi) = max(0, σi − λ) is applied to the singular values [47], and ΣSVT = diag({Dλ(σi)}1≤i≤n) is formed. Finally, the solution is obtained as X̂ = UΣSVTVH.
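As an illustration, a minimal NumPy sketch of this singular value thresholding step is given below (function and parameter names are illustrative, assuming a real-valued matrix; this is not the exact implementation of Table 1).

```python
import numpy as np

def svt(Y, tau):
    """Singular value thresholding: soft-threshold the singular values of Y by tau."""
    U, s, Vh = np.linalg.svd(Y, full_matrices=False)   # Y = U diag(s) Vh
    s_shrunk = np.maximum(s - tau, 0.0)                 # D_tau(sigma_i) = max(0, sigma_i - tau)
    return U @ np.diag(s_shrunk) @ Vh
```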
Jain et al. [48] proposed the SVP algorithm for solving the model (2) problem. With the development of large-scale data processing and distributed computing, the alternating direction method of multipliers (ADMM) has become a mainstream optimization algorithm [49]. When using ADMM to solve (3), an auxiliary variable Z = ΦX and a residual L are first introduced to transform model (3) into multiple sub-problems for an iterative solution, where ρ > 0 is the introduced penalty parameter and the SVT method is used to solve the Z sub-problem.
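A standard form of these sub-problems (a sketch with a scaled dual variable L; the exact updates used in this paper are listed in Table 3) is

$$\begin{aligned}
Z^{k+1} &= \arg\min_{Z}\ \lambda\|Z\|_{*} + \tfrac{\rho}{2}\,\|\Phi X^{k} - Z + L^{k}\|_F^2,\\
X^{k+1} &= \arg\min_{X}\ \tfrac{1}{2}\,\|\Theta_{\Omega}X - Y\|_F^2 + \tfrac{\rho}{2}\,\|\Phi X - Z^{k+1} + L^{k}\|_F^2,\\
L^{k+1} &= L^{k} + \Phi X^{k+1} - Z^{k+1}.
\end{aligned}$$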
In this paper, we use the SVT algorithm, the SVP algorithm, and the ADMM algorithm [50,51] to solve the nuclear-norm-based image inpainting model, and name the corresponding methods the SVT method, SVP method, and n_ADMM method, respectively. The details of the SVT, SVP, and n_ADMM algorithms are shown in Table 1, Table 2 and Table 3, respectively.
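For illustration, a minimal sketch of such an ADMM iteration is given below for the random-pixel-missing case, taking Φ as the identity and using a binary observation mask; the names (nuclear_admm, mask, lam, rho) are illustrative and the actual n_ADMM steps in Table 3 may differ.

```python
import numpy as np

def nuclear_admm(Y, mask, lam=1.0, rho=1.0, n_iter=100):
    """Minimal ADMM sketch for  min_X  lam*||Z||_* + 0.5*||mask*(X - Y)||_F^2  s.t.  Z = X.

    Uses the svt() helper defined above.
    """
    X = mask * Y                       # start from the observed pixels
    Z = X.copy()
    L = np.zeros_like(Y)               # scaled dual variable
    for _ in range(n_iter):
        Z = svt(X + L, lam / rho)                          # Z-subproblem solved by SVT
        X = (mask * Y + rho * (Z - L)) / (mask + rho)      # elementwise X-subproblem
        L = L + X - Z                                      # dual update
    return X
```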
2.2. Weighted nuclear norm
The weighted nuclear norm ‖ΦX‖w,∗ = Σ_{i=1}^{n} wiσi, with weights wi = fun(σi), is a scheme that uses weighted singular value constraints to approximate the l0 constraint on the singular values [38,39,40,41]. It is a balanced constraint scheme that penalizes large singular values less and small singular values more, and it can therefore be more accurate than the nuclear norm (i.e., the uniform l1 constraint on the singular values). Here, fun(·) is the weighting function of each singular value σi of the matrix ΦX, and [U, Σ, V] = SVD(ΦX). We use the weighted nuclear norm as a low-rank constraint to establish an image inpainting model, model (4).
Then, we introduce the regularization parameter λ and convert model (4) into an unconstrained form, model (5).
There are many kinds of weighting functions fun(·); the p-norm (0 < p < 1) is the simplest weighting scheme, namely fun(σi) = gp(σi). Reference [39] reviewed various weighting functions that approximate the l0 norm of the singular values, such as SCAD [52], MCP [53], Logarithm [54], Geman [55], and Laplace [56,57], of which the Logarithm scheme is the most classic. In the experimental part of this paper, we choose the Logarithm scheme for comparison. The weighting function of the Logarithm scheme is shown below.
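A commonly used form of this weighting (a sketch; the exact parameterization differs slightly across references [39,54]) is

$$\text{fun}(\sigma_i) = \frac{\log(\gamma\sigma_i + 1)}{\log(\gamma + 1)},$$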
where γ > 0 is a parameter that is determined based on experience.
The simplest and most direct solution for model (4) is the weighted SVT (WSVT) algorithm: set the weights wi = fun(σi), i = 1, 2, …, n, and then X̂ = UΣWSVTVH, where ΣWSVT = diag({Dλ(wiσi)}1≤i≤n).
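A minimal sketch of this weighted thresholding step is shown below, using the Logarithm weighting above as the assumed fun(·) (names and the value of gamma are illustrative); note that some weighted-nuclear-norm solvers instead place the weight on the threshold, i.e., max(0, σi − λwi).

```python
import numpy as np

def weighted_svt(Y, lam, fun):
    """Weighted SVT as described above: w_i = fun(sigma_i), then threshold w_i*sigma_i by lam."""
    U, s, Vh = np.linalg.svd(Y, full_matrices=False)
    w = fun(s)                                  # per-singular-value weights
    s_shrunk = np.maximum(w * s - lam, 0.0)     # D_lam(w_i * sigma_i)
    return U @ np.diag(s_shrunk) @ Vh

# Assumed Logarithm weighting, normalized so that fun(1) = 1.
log_weight = lambda s, gamma=10.0: np.log(gamma * s + 1.0) / np.log(gamma + 1.0)
```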
We use ADMM to solve the weighted-nuclear-norm-based image inpainting problem (5). We introduce an auxiliary variable Z = ΦX and a residual L to transform model (5) into multiple sub-problems for an iterative solution, where ρ > 0 is the introduced penalty parameter and the WSVT algorithm is used to solve the Z sub-problem. The combination of the weighted SVT algorithm and the ADMM algorithm can obtain a more accurate iterative estimation. We use the ADMM algorithm to solve the weighted-nuclear-norm-based image inpainting model (7) and name the resulting method the WSVT_ADMM method. The details of the WSVT_ADMM algorithm used to solve model (7) are shown in Table 4.
2.3. Truncated nuclear norm
In general, the singular value curve of a low-rank matrix exhibits an approximately exponential decay from large to small, and the singular values at the tail of the sorted sequence approach 0. Therefore, nuclear norm minimization mainly constrains the large singular values. To make full use of the small singular values, a truncated nuclear norm minimization scheme can be used, whose purpose is to constrain the minimization of the small singular values [42,43,44,45]. We use the truncated nuclear norm as a low-rank constraint to establish an image inpainting model, model (8).
In model (8), the truncation operation extracts the first r larger diagonal elements of the matrix UHΦXV, and subtracting this truncation from ‖ΦX‖∗ means that the first r larger diagonal elements of UHΦXV are zeroed while the remaining smaller diagonal elements are retained [58,59]. We introduce a regularization parameter λ and convert (8) into an unconstrained form, model (9), in which U and V are the truncated left and right singular vector matrices of ΦX. The essence of truncated nuclear norm minimization is to minimize the sum of the smaller singular values of the constrained low-rank matrix.
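In sketch form, with the singular values σi of ΦX sorted in descending order and n = min(n1, n2), the truncated nuclear norm keeps only the smallest n − r singular values:

$$\|\Phi X\|_{r} = \sum_{i=r+1}^{n}\sigma_i = \|\Phi X\|_{*} - \max_{U^{H}U=I,\ V^{H}V=I}\operatorname{Tr}\big(U^{H}\Phi X V\big),$$

where U and V have r columns each.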
The truncated-nuclear-norm-based model can be solved by the APGL or ADMM algorithm. This paper combines the ADMM algorithm with the SVT algorithm to solve the truncated-nuclear-norm-based image inpainting model (9), and abbreviates it as the TSVT (truncated SVT) algorithm. The details of the TSVT algorithm used to solve model (9) are shown in Table 5.
2.4. The F norm of UV matrix factorization
The process of solving the nuclear norm minimization problem involves time-consuming matrix singular value decomposition. Srebro [60] proposed and proved the property ‖X‖∗ = min{½(‖U‖F² + ‖V‖F²) : X = UVH}. Later, in many applications, the F norm of a UV matrix factorization was used instead of the nuclear norm to reduce the computation time [1,61,62,63,64]. We use the minimized F norm of the UV matrix factorization as a low-rank constraint to establish an image inpainting model, model (10).
Then, we introduce the regularization parameter λ and a penalty parameter ρ > 0 to convert model (10) into an unconstrained form, model (11),
where L is the residual variable. The initial values of U and V can be obtained by the LMaFit method [2,65]. Model (11) is commonly solved by the ADMM algorithm, and we name the resulting method the UV_ADMM method. The details of the UV_ADMM algorithm are shown in Table 6.
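As a minimal sketch (again for the random-pixel-missing case with Φ taken as the identity; function and parameter names are illustrative and the exact updates in Table 6 may differ), such an ADMM iteration with a rank-r factorization X ≈ UVᵀ can look as follows.

```python
import numpy as np

def uv_admm(Y, mask, rank, lam=1e-2, rho=1.0, n_iter=200):
    """Sketch: min (lam/2)(||U||_F^2 + ||V||_F^2) + 0.5*||mask*(X - Y)||_F^2  s.t.  X = U @ V.T"""
    n1, n2 = Y.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((n1, rank))    # factor initialization (LMaFit could be used instead)
    V = rng.standard_normal((n2, rank))
    X = mask * Y
    L = np.zeros_like(Y)                   # scaled dual variable
    I = np.eye(rank)
    for _ in range(n_iter):
        A = X + L
        U = rho * A @ V @ np.linalg.inv(lam * I + rho * V.T @ V)    # least-squares U-update
        V = rho * A.T @ U @ np.linalg.inv(lam * I + rho * U.T @ U)  # least-squares V-update
        UV = U @ V.T
        X = (mask * Y + rho * (UV - L)) / (mask + rho)              # elementwise X-update
        L = L + X - UV                                              # dual update
    return X
```

No SVD appears in the loop, which is the source of the runtime advantage discussed below.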
Compared with the n_ADMM method, the UV_ADMM method based on the F norm of the UV matrix factorization avoids the time-consuming SVD in each iteration, making it more suitable for large-matrix modeling with low-rank constraints. This method and the weighted nuclear norm method are commonly used in low-rank-matrix-constrained models.
The above models and their solution algorithms are summarized in Table 7. In addition, other algorithms can solve the above models, for example, algorithms commonly used for sparsity-constrained models. A sparsity constraint on a signal minimizes the l0 norm of the signal elements, while a low-rank constraint minimizes the l0 norm of the singular values of the signal matrix. Therefore, optimization based on low-rank constrained models has a lot in common with optimization based on sparse constrained models in terms of solution algorithms. Iterative optimization algorithms developed for sparse constrained models can be applied to matrix low-rank constrained models, such as convex relaxation algorithms, which find a sparse or low-rank approximation of the signal by iteratively transforming the nonconvex problem into convex problems. Among them, the CG algorithm, the IST algorithm [66], the split Bregman algorithm [67], and the MM (majorize-minimize) algorithm [58,68] can be flexibly adapted to different optimization models.
3. Comparative Experiments
In this section, we compare the above methods on satellite image inpainting problems. We simulated impulse interference on satellite images with an interference rate of 30% (Note 1). The three kinds of impulse interference were: A. random impulse interference; B. salt-and-pepper impulse interference; C. random pixel missing. The satellite images in this paper are sourced from the public dataset DOTA v2.0 (https://captain-whu.github.io/DOTA/dataset.html), with images provided by the China Resources Satellite Data and Application Center from satellites GF-1, GF-2, etc. The comparison methods are SVT, SVP, n_ADMM, TSVT_ADMM, WSVT_ADMM, and UV_ADMM. For a fair comparison, every method is run with its optimal parameters to ensure its best performance.
The relative least normalized error (RLNE) and the structural similarity (SSIM) [69] are used as image inpainting quality indicators. The RLNE is an index based on the pixel-wise error, while the SSIM index is more consistent with human visual perception of image quality. Generally, the larger the SSIM value, the better the image inpainting quality. All simulations were carried out in MATLAB R2019a under Windows 10 on a PC with an Intel Core i7 CPU at 2.8 GHz and 16 GB of memory.
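For reference, the RLNE is typically computed as

$$\text{RLNE} = \frac{\|\hat{X} - X\|_F}{\|X\|_F},$$

where X is the reference image and X̂ is the inpainted result; a smaller RLNE indicates a smaller reconstruction error.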
A gray satellite image and its singular value curve are shown in Figure 2a,b, respectively. The singular values of the image decay rapidly from large to small, and most of them tend to zero, which indicates that the image has the low-rank characteristic. Three examples of impulse-interfered satellite images are shown in Figure 3, where the original image is the one in Figure 2a. It can be seen that the 30% interference rate causes obvious information loss in the building shapes, layout, gray-value shading, and other features of the original image.
The comparison of the average values of RLNE, SSIM, and running time for the six image inpainting methods under random impulse, salt-and-pepper, and pixel-missing interference is shown in Table 8. A visual comparison of the six image inpainting methods under salt-and-pepper interference is shown in Figure 4.
Based on the above visual and numerical comparisons, we analyze the experimental results below.
The matrix rank constraint method based on the F norm of the UV matrix factorization (i.e., the UV_ADMM method) is roughly equivalent in effectiveness to the method based on the nuclear norm constraint (i.e., the n_ADMM method). Overall, the n_ADMM method is slightly better, improving the RLNE index by about 0.3% and the SSIM index by 0.3~1.
Because the nuclear-norm-based SVT, SVP, and n_ADMM methods, the weighted-nuclear-norm-based WSVT_ADMM method, and the truncated-nuclear-norm-based TSVT method all involve time-consuming SVD calculations in each iteration, the UV_ADMM method based on the UV-matrix-factorization F norm has an absolute advantage in terms of running time. However, the UV_ADMM method does not achieve more accurate results than the other methods, because it requires an initial estimate of the rank, for example obtained by the LMaFit method. This initial rank estimate is not highly accurate, which leads to inaccurate low-rank constraints. Therefore, the UV-matrix-factorization-based method is more commonly used for large-scale low-rank matrix calculations, where it can greatly reduce the inpainting time by avoiding the SVD in each iteration.
Since the weighted and truncated nuclear norms better approximate the l0 norm of the singular values, the WSVT_ADMM and TSVT methods are significantly better than the nuclear-norm-based methods (SVT, SVP, n_ADMM) in terms of inpainting accuracy.
4. Conclusions
In machine vision applications, satellite images may suffer from three forms of impulse noise interference. In this paper, we used the low-rank characteristics of the image matrix to optimize image inpainting under the three kinds of impulse interference and provided the corresponding optimization algorithms. First, image inpainting modeling schemes based on the nuclear norm, truncated nuclear norm, weighted nuclear norm, and matrix-factorization F norm were reviewed. Then, the corresponding iterative optimization algorithms were given, such as the TSVT_ADMM algorithm, WSVT_ADMM algorithm, and UV_ADMM algorithm. Finally, the experimental results of the various matrix-rank-constraint-based methods were presented visually and numerically, with a comparative analysis. The experimental results show that all the mentioned matrix-rank-constraint-based methods can repair the images to a certain extent and suppress the interference noise. Among them, the methods based on the weighted nuclear norm and the truncated nuclear norm achieve better inpainting quality, while the method based on the matrix-factorization F norm takes the shortest time and can be used for large-scale low-rank matrix calculation.
Author Contributions
Conceptualization, S.M.; methodology, S.M.; investigation, W.Y. and Z.L.; resources, S.M. and Z.L.; writing—original draft preparation, S.M. and F.C.; writing—review and editing, S.M., W.Y. and S.F.; supervision, L.L. and S.F.; funding acquisition, S.M. and L.L. All authors have read and agreed to the published version of the manuscript.
Funding
This work was funded by the National Key Laboratory of Science and Technology on Space Microwave, No. HTKJ2021KL504012; Supported by the Science and Technology Innovation Cultivation Fund of Space Engineering University, No. KJCX-2021-17; Supported by the Information Security Laboratory of National Defense Research and Experiment, No.2020XXAQ02.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
Notes
1. The interference rate is the percentage of the number of interference pixels in the total number of image pixels.
References
- Kyong, H.; Jong, C. Sparse and Low-Rank Decomposition of a Hankel Structured Matrix for Impulse Noise Removal. IEEE Trans. Image Process. 2018, 27, 1448–1461. [Google Scholar] [CrossRef]
- Kyong, H.; Ye, J. Annihilating filter-based low-rank Hankel matrix approach for image inpainting. IEEE Trans. Image Process. 2015, 24, 3498–3511. [Google Scholar] [CrossRef]
- Balachandrasekaran, A.; Magnotta, V.; Jacob, M. Recovery of damped exponentials using structured low rank matrix completion. IEEE Trans. Med. Imaging 2017, 36, 2087–2098. [Google Scholar] [CrossRef] [PubMed]
- Haldar, J. Low-rank modeling of local-space neighborhoods (LORAKS) for constrained MRI. IEEE Trans. Med. Imaging 2014, 33, 668–680. [Google Scholar] [CrossRef] [PubMed]
- Ren, W.; Cao, X.; Pan, J.; et al. Image deblurring via enhanced low-rank prior. IEEE Trans. Image Process. 2016, 25, 3426–3437. [Google Scholar] [CrossRef]
- Xu, Z.; Sun, J. Image inpainting by patch propagation using patch sparsity. IEEE Trans. Image Process. 2010, 9, 1153–1165. [Google Scholar] [CrossRef]
- Zhao, B.; Haldar, J.; Christodoulou, A.; et al. Image reconstruction from highly undersampled (k, t)-space data with joint partial separability and sparsity constraints. IEEE Trans. Med. Imaging 2012, 31, 1809–1820. [Google Scholar] [CrossRef]
- Kyong, H.; Jong, C. Annihilating Filter-Based Low-Rank Hankel Matrix Approach for Image Inpainting. IEEE Trans. Image Process. 2018, 27, 1448–1461. [Google Scholar] [CrossRef]
- Long, Z.; Liu, Y.; Chen, L.; et al. Low rank tensor completion for multiway visual data. Signal Process. 2019, 155, 301–316. [Google Scholar] [CrossRef]
- Kolda, T.; Bader, B. Tensor decompositions and applications. SIAM Review 2009, 51, 455–500. [Google Scholar] [CrossRef]
- Oseledets, I. Tensor-train decomposition. SIAM J. Scien. Comput. 2011, 33, 2295–2317. [Google Scholar] [CrossRef]
- Shi, Q.; Cheung, M.; Lou, J. Robust Tensor SVD and Recovery With Rank Estimation. IEEE Trans. Cyber. 2022, 52, 10667–10682. [Google Scholar] [CrossRef] [PubMed]
- Wu, F.; Li, C.; Li, Y.; Tang, N. Robust low-rank tensor completion via new regularized model with approximate SVD. Inform. Sciences 2023, 629, 646–666. [Google Scholar] [CrossRef]
- Huang, J.; Yang, F. Compressed magnetic resonance imaging based on wavelet sparsity and nonlocal total variation. IEEE 9th International Symposium on Biomedical Imaging: From Nano to Macro, 2012, 5, 968–971. [Google Scholar] [CrossRef]
- Zhang, X.; Chan, T. Wavelet inpainting by nonlocal total variation. Inverse Problems and Imaging 2010, 4, 191–210. [Google Scholar] [CrossRef]
- Wang, W.; Chen, J. Adaptive rate image compressive sensing based on the hybrid sparsity estimation model. Digit. Signal Process. 2023, 139, 104079. [Google Scholar] [CrossRef]
- Ou, Y.; Li, B.; Swamy, M. Low-rank with sparsity constraints for image denoising. Information Sciences 2023, 637, 118931. [Google Scholar] [CrossRef]
- Lingala, S.; Hu, Y.; Dibella, E.; et al. Accelerated dynamic MRI exploiting sparsity and low-rank structure: k-t SLR. IEEE Trans. Med. Imaging 2011, 30, 1042–1054. [Google Scholar] [CrossRef] [PubMed]
- Zhang, Y.; Huang, L.; Li, Y.; Zhang, K.; Yin, C. Low-Rank and Sparse Matrix Recovery for Hyperspectral Image Reconstruction Using Bayesian Learning. Sensors 2022, 22, 343. [Google Scholar] [CrossRef] [PubMed]
- Zhao, X.; Li, M.; Nie, T.; Han, C.; Huang, L. An Innovative Approach for Removing Stripe Noise in Infrared Images. Sensors 2023, 23, 6786. [Google Scholar] [CrossRef]
- Tremoulheac, B.; Dikaios, N.; Atkinson, D.; et al. Dynamic MR image reconstruction-separation from undersampled (k, t)-space via low-rank plus sparse prior. IEEE Trans. Med. Imaging 2014, 33, 1689–1701. [Google Scholar] [CrossRef] [PubMed]
- Kyong, H.; Ye, J. Annihilating filter-based low-rank Hankel matrix approach for image inpainting. IEEE Trans. Image Process. 2015, 24, 3498–3511. [Google Scholar] [CrossRef] [PubMed]
- Zhao, Q.; Zhou, G.; Xie, S.; et al. Tensor ring decomposition. 2016, arXiv:1606.05535. [Google Scholar] [CrossRef]
- Kilmer, M.; Braman, K.; Hao, N. Third-order tensors as operators on matrices: a theoretical and computational framework with applications in imaging. Siam J. Matrix Anal. A. 2013, 34, 148–172. [Google Scholar] [CrossRef]
- Zhang, Z.; Aeron, S. Exact tensor completion using t-SVD. IEEE Trans. Signal Process. 2017, 65, 1511–1526. [Google Scholar] [CrossRef]
- Bengua, J. Efficient tensor completion for color image and video recovery: Low-rank tensor train. IEEE Trans. Image Process. 2017, 26, 1057–7149. [Google Scholar] [CrossRef] [PubMed]
- Ma, S.; Du, H.; Mei, W. Dynamic MR image reconstruction from highly undersampled (k, t)-space data exploiting low tensor train rank and sparse prior. IEEE Access 2020, 8, 28690–28703. [Google Scholar] [CrossRef]
- Ma, S.; Ai, J.; Du, H.; Fang, L.; Mei, W. Recovering low-rank tensor from limited coefficients in any ortho-normal basis using tensor-singular value decomposition. IET Signal Process. 2021, 19, 162–181. [Google Scholar] [CrossRef]
- Tang, T.; Kuang, G. SAR Image Reconstruction of Vehicle Targets Based on Tensor Decomposition. Electronics 2022, 11, 2859. [Google Scholar] [CrossRef]
- Gross, D. Recovering low-rank matrices from few coefficients in any basis. IEEE Trans. Infor. Theory 2011, 57, 1548–1566. [Google Scholar] [CrossRef]
- Jain, P.; Oh, S. Provable tensor factorization with missing data. In Proceedings of the 27th International Conference on Neural Information Processing Systems (NIPS) 2014, 1, 1431–1439.
- Zhang, Z.; Aeron, S. Exact tensor completion using t-SVD. IEEE Trans. Signal Process. 2017, 65, 1511–1526. [Google Scholar] [CrossRef]
- Vetterli, M.; Marziliano, P.; Blu, T. Sampling signals with finite rate of innovation. IEEE trans. Signal Process. 2002, 50, 1417–1428. [Google Scholar] [CrossRef]
- Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095. [Google Scholar] [CrossRef] [PubMed]
- Lou, Y.; Zhang, X.; Osher, S.; Bertozzi, A. Image recovery via nonlocal operators. Journal of Scientific Computing 2010, 42, 185–197. [Google Scholar] [CrossRef]
- Filipović, M.; Jukić, A. Tucker factorization with missing data with application to low-n-rank tensor completion. Multidim Syst. Sign. Process. 2015, 26, 677–692. [Google Scholar] [CrossRef]
- Wang, X.; Kong, L.; Wang, L.; Yang, Z. High-Dimensional Covariance Estimation via Constrained Lq-Type Regularization. Mathematics 2023, 11, 1022. [Google Scholar] [CrossRef]
- Candès, E.; Wakin, M.; Boyd, S. Enhancing sparsity by reweighted l1 minimization. J. Fourier Anal. Appl. 2008, 14, 877–905. [Google Scholar] [CrossRef]
- Lu, C.; Tang, J.; Yan, S.; et al. Nonconvex Nonsmooth Low-Rank Minimization via Iteratively Reweighted Nuclear Norm. IEEE Trans. Image Process. 2015, 25, 829–839. [Google Scholar] [CrossRef]
- Zhang, J.; Lu, J.; Wang, C.; Li, S. Hyperspectral and multispectral image fusion via superpixel-based weighted nuclear norm minimization. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–12. [Google Scholar] [CrossRef]
- Li, Z.; Yan, M.; Zeng, T.; Zhang, G. Phase retrieval from incomplete data via weighted nuclear norm minimization. Pattern Recognition. 2022, 125, 108537. [Google Scholar] [CrossRef]
- Cao, F.; Chen, J.; Ye, H.; et al. Recovering low-rank and sparse matrix based on the truncated nuclear norm. Neural Networks 2017, 85, 10–20. [Google Scholar] [CrossRef]
- Hu, Y.; Zhang, D.; Ye, J.; et al. Fast and accurate matrix completion via truncated nuclear norm regularization. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 2117–2130. [Google Scholar] [CrossRef] [PubMed]
- Fan, Q.; Liu, Y.; Yang, T.; Peng, H. Fast and accurate spectrum estimation via virtual coarray interpolation based on truncated nuclear norm regularization. IEEE Signal Process. Lett. 2022, 29, 169–173. [Google Scholar] [CrossRef]
- Yadav, S.; George, N. Fast direction-of-arrival estimation via coarray interpolation based on truncated nuclear norm regularization. IEEE Trans. Circuits Syst. II, Exp. Briefs 2021, 68, 1522–1526. [Google Scholar] [CrossRef]
- Cai, J.; Candès, E.; Shen, Z. A singular value thresholding algorithm for matrix completion. Siam J. Optimiz. 2010, 20, 1956–1982. [Google Scholar] [CrossRef]
- Xu, J.; Fu, Y.; Xiang, Y. An edge map-guided acceleration strategy for multi-scale weighted nuclear norm minimization-based image denoising. Digit. Signal Process. 2023, 134, 103932. [Google Scholar] [CrossRef]
- Jain, P.; Meka, R. Guaranteed rank minimization via singular value projection. Available online: http://arxiv.org/abs/0909.5457, 2009. [CrossRef]
- Stephen, B. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends in Mach. Le. 2010, 3, 1–122. [Google Scholar] [CrossRef]
- Zhao, Q.; Lin, Y.; Wang, F. Adaptive weighting function for weighted nuclear norm based matrix/tensor completion. Int. J. Mach. Learn. Cyber. 2023. [Google Scholar] [CrossRef]
- Liu, X.; Hao, C.; Su, Z. Image inpainting algorithm based on tensor decomposition and weighted nuclear norm. Multimed Tools Appl. 2023, 82, 3433–3458. [Google Scholar] [CrossRef]
- Fan, J.; Li, R. Variable selection via nonconcave penalized likelihood and its oracle properties. J. Amer. Statist. Assoc. 2001, 96, 1348–1360. [Google Scholar] [CrossRef]
- Friedman, J. Fast sparse regression and classification. Int. J. Forecasting 2012, 28, 722–738. [Google Scholar] [CrossRef]
- Zhang, C.-H. Nearly unbiased variable selection under minimax concave penalty. Ann. Statist. 2010, 38, 894–942. [Google Scholar] [CrossRef]
- Geman, D.; Yang, C. Nonlinear image recovery with half-quadratic regularization. IEEE Trans. Image Process. 1995, 4, 932–946. [Google Scholar] [CrossRef] [PubMed]
- Trzasko, J.; Manduca, A. Highly undersampled magnetic resonance image reconstruction via homotopic l0-minimization. IEEE Trans. Med. Imag. 2009, 28, 106–121. [Google Scholar] [CrossRef] [PubMed]
- Liu, Q. A truncated nuclear norm and graph-Laplacian regularized low-rank representation method for tumor clustering and gene selection. BMC Bioinformatics. 2021, 22, 436. [Google Scholar] [CrossRef]
- Zhang, Q.; Li, X.; Mao, H.; Huang, Z.; Xiao, Y.; Chen, W.; Xian, J.; Bi, Y. Improved sparse low-rank model via periodic overlapping group shrinkage and truncated nuclear norm for rolling bearing fault diagnosis. Measurement Sci. Technol. 2023, 34. [Google Scholar] [CrossRef]
- Ran, J.; Bian, J.; Chen, G.; Zhang, Y.; Liu, W. A truncated nuclear norm regularization model for signal extraction from GNSS coordinate time series. Adv. Space Res. 2022, 70, 336–349. [Google Scholar] [CrossRef]
- Signoretto M.; Cevher V.; Suykens J. An SVD-free approach to a class of structured low rank matrix optimization problems with application to system identification. In: IEEE Conference on Decision and Control, EPFL-CONF-184990, 2013.
- Srebro N. Learning with matrix factorizations. Cambridge, MA, USA: Massachusetts Institute of Technology, 2004.
- Recht, B.; Fazel, M.; Parrilo, P. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM review 2010, 52, 471–501. [Google Scholar] [CrossRef]
- Ma, S.; Du, H.; Mei, W. A two-step low rank matrices approach for constrained MR image reconstruction. Magn. Reson. Imaging 2019, 60. [Google Scholar] [CrossRef]
- Yang, G.; Zhang, L.; Wan, M. Exponential Graph Regularized Non-Negative Low-Rank Factorization for Robust Latent Representation. Mathematics 2022, 10, 4314. [Google Scholar] [CrossRef]
- Wen, Z.; Yin, W.; Zhang, Y. Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm. Math. Program. Comput. 2012, 4, 333–361. [Google Scholar] [CrossRef]
- Daubechies, I.; Defrise, M.; Mol, C. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 2004, 57, 1413–1457. [Google Scholar] [CrossRef]
- Goldstein, T.; Osher, S. The split Bregman method for L1-regularized problems. Siam J. Imaging Sci. 2009, 2, 323–343. [Google Scholar] [CrossRef]
- Jacobson, M.; Fessler, J. An Expanded Theoretical Treatment of Iteration-Dependent Majorize-Minimize Algorithms. IEEE Trans. Image Process. 2007, 16, 2411–2422. [Google Scholar] [CrossRef] [PubMed]
- Wang, Z.; Bovik, A.; Sheikh, H.; et al. Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).