1. Introduction
Image restoration is an important task in many areas of applied sciences since digital images are frequently degraded by blur and noise during the acquisition process. Image restoration can be mathematically formulated as the linear inverse problem [1]
$$ b = A x + e, \qquad (1) $$
where $b \in \mathbb{R}^N$ and $x \in \mathbb{R}^N$ are, respectively, the vectorized forms of the observed image and of the exact image to be restored, $A \in \mathbb{R}^{N \times N}$ is the linear operator modeling the imaging system and $e \in \mathbb{R}^N$ represents Gaussian white noise with mean zero and standard deviation $\sigma$. The image restoration problem (1) is inherently ill-posed, and regularization strategies, based on prior information about the unknown image, are usually employed in order to effectively restore the image $x$ from the data $b$.
In a variational framework, image restoration can be reformulated as a constrained optimization problem of the form
$$ \min_{x \in \Omega} \; \frac{1}{2}\|A x - b\|^2 + \mathcal{R}(x), \qquad (2) $$
whose objective function contains an $\ell_2$-based term, imposing consistency of the model with the data, and a regularization term $\mathcal{R}(x)$, forcing the solution to satisfy some a priori properties. Here and henceforth, the symbol $\|\cdot\|$ denotes the Euclidean norm. The constraint set $\Omega$ imposes some characteristics on the solution which are often given by the physics underlying the data acquisition process. Since image pixels are known to be nonnegative, a typical choice for $\Omega$ is the positive orthant.
The quality of the restored images strongly depends on the choice of the regularization term which, in a very general framework, can be expressed as
$$ \mathcal{R}(x) = \sum_{i=1}^{p} \lambda_i \mathcal{R}_i(x), \qquad (3) $$
where the positive scalars $\lambda_i$ are regularization parameters and the $\mathcal{R}_i$ are regularization functions, $i = 1, \ldots, p$. The multi-penalty approach (3) makes it possible to impose several regularity properties on the desired solution; however, a crucial issue in its realization is the need to define reliable strategies for the choice of the regularization parameters $\lambda_i$, $i = 1, \ldots, p$.
Therefore, the most common and well-known regularization approach in the literature is single-penalty regularization, also known as Tikhonov-like regularization, which corresponds to the choice $p = 1$:
$$ \min_{x \in \Omega} \; \frac{1}{2}\|A x - b\|^2 + \lambda \mathcal{R}(x). \qquad (4) $$
In image restoration, smooth functions based on the $\ell_2$-norm or convex nonsmooth functions such as the Total Variation, the $\ell_1$ norm or the Total Generalized Variation are usually used for $\mathcal{R}$ in (4) [2,3]. Even in the case $p = 1$, the development of suitable parameter choice criteria is still an open question. The recent literature has demonstrated a growing interest in multi-penalty regularization, with a significant number of researchers focusing on scenarios involving two penalty terms. Notably, the widely used elastic net regression in statistics is an example of a multi-penalty regularization technique, combining the $\ell_1$ and $\ell_2$ penalties of the Lasso and Ridge methods. However, the majority of the literature primarily addresses the development of suitable rules for parameter selection.
Lu, Pereverzev et al. [4,5] have extensively investigated the case of two $\ell_2$-based penalty terms, introducing a refined discrepancy principle to compute the two regularization parameters, along with its numerical implementation. The issue of parameter selection is further discussed in [6], where a generalized multi-parameter version of the L-curve criterion is proposed, and in [7], which suggests a methodology based on the GCV method.
Reichel and Gazzola [8] propose regularization terms of the form
$$ \mathcal{R}(x) = \sum_{i=1}^{p} \lambda_i \|L_i x\|^2, \qquad (5) $$
where the $L_i$ are suitable regularization matrices. They present a method to determine the regularization parameters utilizing the discrepancy principle, with a special emphasis on the case $p = 2$. Fornasier et al. [9] propose a modified discrepancy principle for multi-penalty regularization and provide a theoretical background for this a posteriori rule.
Works such as [10,11,12,13,14] also explore multi-penalty regularization for unmixing problems, employing two penalty terms based on $\ell_p$ and $\ell_q$ norms for different choices of $p$ and $q$; the latter specifically concentrates on the $\ell_1$ and $\ell_2$ norms.
The study [15] assesses two-penalty regularization, incorporating $\ell_1$ and $\ell_2$ penalty terms, to tackle nonlinear ill-posed problems and analyzes its regularizing properties.
In [16], an automated spatially adaptive regularization model combining harmonic and Total Variation (TV) terms is introduced. This model depends on two regularization parameters and two edge information matrices. Although the edge information matrices are dynamically updated during the iterations, the model requires fixed values for the regularization parameters.
Calatroni et al. [
17] present a space-variant generalized Gaussian regularization approach for image restoration, emphasizing its applicative potential.
In [18], a point-wise multi-penalty approach based on the Uniform Penalty principle is considered for general linear inverse problems; two iterative methods, UPenMM and GUPenMM, are introduced and their convergence is analyzed.
Here we focus on image restoration and, following [18], we propose to find an estimate $x^*$ of $x$ satisfying
$$ x^* = \arg\min_{x \ge 0} \; \frac{1}{2}\|A x - b\|^2 + \sum_{i=1}^{N} \lambda_i (L x)_i^2, \qquad \lambda_i = \frac{\|A x^* - b\|^2}{N \left( (L x^*)_i^2 + \beta \right)}, \quad i = 1, \ldots, N, \qquad (6) $$
where $\beta$ is a positive scalar and $L$ is the discrete Laplacian operator. This model, named MULTI, is specifically tailored to the image restoration problem. Observe that MULTI incorporates a pixel-wise regularization term and includes a rule for choosing the regularization parameters. Building upon the UPenMM and GUPenMM methods analyzed in [18], we formulate an iterative algorithm for computing the solution $x^*$ of (6). At every iteration of this algorithm, once the regularization parameters are set, the constrained minimization subproblem is efficiently solved by a customized version of the Newton Projection (NP) method, in which the Hessian matrix is approximated by a Block Circulant with Circulant Blocks (BCCB) matrix, easily invertible in the Fourier space. This modified version of NP was designed in [19] for single-penalty image restoration under Poisson noise and is adapted here to the context of multi-penalty regularization. Consequently, the convergence of the modified NP method can be established.
The principal contributions of this work are summarized as follows:
We propose a novel variational pixel-wise regularization model for image restoration.
We devise an algorithm capable of effectively and efficiently solving the proposed model.
Through numerical experiments, we demonstrate that the proposed approach can proficiently eliminate noise and blur in smooth areas of an image while preserving its edges.
The structure of this paper is as follows:
Section 2 introduces the proposed algorithm. The results of numerical experiments are presented in
Section 3, and finally, the conclusions are drawn in
Section 4.
2. Materials and Methods
In this section we present the iterative algorithm that generates the sequence $\{x^{(k)}\}$ converging to the solution $x^*$ of (6).
Starting from an initial guess $x^{(0)}$ taken as the observed image $b$, the corresponding initial guess of the regularization parameters is computed as:
$$ \lambda_i^{(0)} = \frac{\|A x^{(0)} - b\|^2}{N \left( \max_{j \in \nu_i} (L x^{(0)})_j^2 + \beta \right)}, \quad i = 1, \ldots, N, \qquad (7) $$
where $N$ is the number of image pixels and $\nu_i$ is a neighborhood of size $R \times R$ (with $R$ odd) of the $i$-th pixel with coordinates $(r_i, c_i)$:
$$ \nu_i = \left\{ j \; : \; |r_j - r_i| \le \tfrac{R-1}{2}, \; |c_j - c_i| \le \tfrac{R-1}{2} \right\}. \qquad (8) $$
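As an illustration, the following MATLAB sketch shows one possible way to evaluate (7) with an $R \times R$ moving-maximum window. It is only a sketch under the assumptions stated in the comments; the names B (observed image), Afun (a function handle applying the blurring operator $A$), beta and R are illustrative and are not taken from the authors' code.

% Illustrative sketch (not the reference implementation) of the
% pixel-wise initialization (7), assuming periodic boundary conditions.
X0     = B;                                    % initial guess: the observed image
lapK   = [0 1 0; 1 -4 1; 0 1 0];               % 5-point discrete Laplacian stencil
LX0sq  = imfilter(X0, lapK, 'circular').^2;    % (L*x0)_j^2 as an image
% moving maximum of (L*x0)_j^2 over the R-by-R neighborhood of each pixel
locMax = ordfilt2(LX0sq, R*R, true(R), 'symmetric');
res2   = norm(Afun(X0) - B, 'fro')^2;          % squared residual norm ||A*x0 - b||^2
N      = numel(B);                             % number of pixels
lambda0 = res2 ./ (N * (locMax + beta));       % pixel-wise initial parameters (7)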
The successive iterates $x^{(k)}$ and parameter vectors $\boldsymbol{\lambda}^{(k)}$ are obtained by the update formulas reported in steps 3-5 of Algorithm 1. The iterations are stopped when the relative distance between two successive regularization parameter vectors is smaller than a fixed tolerance $\tau$.
Algorithm 1
Input: $b$, $A$, $\beta$, $\tau$. Output: $x^{(k)}$, $\boldsymbol{\lambda}^{(k)}$.
- 1: Set $k = 0$, $x^{(0)} = b$ and compute $\boldsymbol{\lambda}^{(0)}$ as in (7).
- 2: repeat
- 3: Compute $x^{(k+1)}$ by solving the constrained minimization subproblem with the current parameters $\boldsymbol{\lambda}^{(k)}$ (NP method).
- 4: Evaluate the residual norm and the pixel-wise penalty terms at $x^{(k+1)}$.
- 5: Update the regularization parameters $\boldsymbol{\lambda}^{(k+1)}$.
- 6: Set $k = k + 1$.
- 7: until $\|\boldsymbol{\lambda}^{(k)} - \boldsymbol{\lambda}^{(k-1)}\| < \tau \, \|\boldsymbol{\lambda}^{(k-1)}\|$
- 8: Return $x^{(k)}$ and $\boldsymbol{\lambda}^{(k)}$.
Algorithm 1 is well defined, and it is possible to prove its convergence. Let us assume for now that the maximum value of $(L x)_j^2$ over the neighborhood $\nu_i$ of the $i$-th pixel is attained exactly at its center, i.e., at the pixel with coordinates $(r_i, c_i)$. In this case, Algorithm 1 corresponds to UPenMM and, using Theorem 3.4 in [18], we can prove that the sequence $\{x^{(k)}\}$ converges to the solution $x^*$ of (6). In the general case, when the maximum is not attained at the central pixel, it is possible to preserve convergence through a correction obtained from a convex combination between the maximum value over $\nu_i$ and the value at the central pixel. This approach is presented as the Generalized Uniform Penalty method (GUPenMM) in [18].
However, such an algorithmic correction is not necessary in image restoration problems, where it is not essential to compute solutions with a high level of accuracy; indeed, the human eye cannot distinguish differences smaller than a few gray levels.
At each outer iteration, the constrained minimization subproblem (step 3 in Algorithm 1) is solved efficiently by a tailored version of the NP method, in which the Hessian matrix is approximated by a BCCB matrix that is easily invertible in the Fourier space.
Let us denote by $F(x)$ the function to be minimized at step 3 of Algorithm 1:
$$ F(x) = \frac{1}{2}\|A x - b\|^2 + \sum_{i=1}^{N} \lambda_i (L x)_i^2, \qquad (9) $$
and by $\nabla F(x)$ its gradient
$$ \nabla F(x) = A^T (A x - b) + 2 L^T \Lambda L x, \qquad (10) $$
where the iteration index $k$ has been omitted for easier notation. Moreover, let $\nabla_R F(x)$ denote the reduced gradient:
$$ [\nabla_R F(x)]_i = \begin{cases} [\nabla F(x)]_i, & i \notin \mathcal{I}(x), \\ 0, & i \in \mathcal{I}(x), \end{cases} $$
where $\mathcal{I}(x)$ is the set of indices [20]:
$$ \mathcal{I}(x) = \left\{ i \; : \; 0 \le x_i \le \varepsilon \ \text{and} \ [\nabla F(x)]_i > 0 \right\}, $$
with $x_i$ the $i$-th component of $x$ and $\varepsilon$ a small positive parameter.
The Hessian matrix $\nabla^2 F(x)$ has the form
$$ \nabla^2 F(x) = A^T A + 2 L^T \Lambda L, $$
where $\Lambda$ is the diagonal matrix with diagonal elements $\lambda_i$, $i = 1, \ldots, N$.
A general iteration of the proposed NP-like method has the form:
$$ x^{(\ell+1)} = \left[ x^{(\ell)} + \alpha_\ell \, d^{(\ell)} \right]^+, $$
where $d^{(\ell)}$ is the search direction, $\alpha_\ell$ is the step length and $[\,\cdot\,]^+$ denotes the projection onto the positive orthant.
At each iteration $\ell$, the computation of $d^{(\ell)}$ requires the solution of the linear system
$$ M_\lambda \, h^{(\ell)} = -\nabla_R F(x^{(\ell)}), \qquad (11) $$
where $M_\lambda$ is the following approximation to the Hessian $\nabla^2 F(x)$, obtained by replacing the diagonal matrix $\Lambda$ with the scalar mean value $\bar{\lambda}$ of its entries:
$$ M_\lambda = A^T A + 2 \bar{\lambda}\, L^T L, \qquad \bar{\lambda} = \frac{1}{N} \sum_{i=1}^{N} \lambda_i. $$
Under periodic boundary conditions, $M_\lambda$ is a BCCB matrix and system (11) can be efficiently solved in the Fourier space by using Fast Fourier Transforms. Hence, despite its simplicity, the BCCB approximation $M_\lambda$ is efficient, since it allows the linear system to be solved in $O(N \log N)$ operations, and effective, as shown by the numerical results. Finally, given the solution $h^{(\ell)}$ of (11), the search direction $d^{(\ell)}$ is obtained as
$$ d^{(\ell)}_i = \begin{cases} h^{(\ell)}_i, & i \notin \mathcal{I}(x^{(\ell)}), \\ -[\nabla F(x^{(\ell)})]_i, & i \in \mathcal{I}(x^{(\ell)}). \end{cases} \qquad (12) $$
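To make the Fourier-space solution of (11) concrete, the following MATLAB sketch diagonalizes a BCCB approximation of this kind using FFTs. It is only an illustration under the assumptions above (periodic boundary conditions, a scalar value lambdaBar replacing $\Lambda$); the names psf, lambdaBar and rhs are hypothetical.

% Illustrative Fourier-space solution of a BCCB system such as (11).
[nr, nc] = size(rhs);                              % rhs: right-hand side reshaped as an image
lapK = [0 1 0; 1 -4 1; 0 1 0];                     % discrete Laplacian stencil
eigA = psf2otf(psf,  [nr nc]);                     % eigenvalues (transfer function) of A
eigL = psf2otf(lapK, [nr nc]);                     % eigenvalues of L
eigM = abs(eigA).^2 + 2*lambdaBar*abs(eigL).^2;    % eigenvalues of M_lambda = A'A + 2*lambdaBar*L'L
h = real(ifft2( fft2(rhs) ./ eigM ));              % h = M_lambda \ rhs via two FFTs

In practice, a small positive constant can be added to eigM to avoid division by values that are numerically zero.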
The step length $\alpha_\ell$ is computed with the variant of the Armijo rule discussed in [20], as the first number of the sequence $\{\rho^m\}$, $m = 0, 1, 2, \ldots$, such that
$$ F(x^{(\ell)}) - F\!\left( \left[ x^{(\ell)} + \rho^m d^{(\ell)} \right]^+ \right) \ge -\sigma \, \rho^m \, \nabla F(x^{(\ell)})^T d^{(\ell)}, $$
where $\rho \in (0,1)$ and $\sigma \in (0, 1/2)$.
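A minimal sketch of this backtracking strategy along the projection arc is given below; the names Ffun (objective), g (gradient at the current iterate), d (search direction), sigma, rho and maxBT are illustrative, and the actual rule in [20] imposes a slightly more elaborate sufficient-decrease condition on the components in $\mathcal{I}(x^{(\ell)})$.

% Illustrative backtracking (Armijo-type) line search with projection onto x >= 0.
alpha = 1;                                   % initial step length
F0    = Ffun(x);
gtd   = g(:)' * d(:);                        % directional derivative (negative for a descent direction)
for m = 0:maxBT
    xNew = max(x + alpha*d, 0);              % projected trial point
    if F0 - Ffun(xNew) >= -sigma * alpha * gtd
        break;                               % sufficient decrease achieved
    end
    alpha = rho * alpha;                     % reduce the step length, e.g. rho = 0.5
end
x = xNew;                                    % accept the projected step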
We observe that the approximated Hessian $M_\lambda$ is constant over the inner iterations $\ell$ and is positive definite; hence it satisfies
$$ c_1 \|z\|^2 \le z^T M_\lambda z \le c_2 \|z\|^2, \qquad \forall z \in \mathbb{R}^N, $$
for some positive constants $c_1$ and $c_2$. Then, the results given in [19] for single-penalty image restoration under Poisson noise can be applied here to prove the convergence of the NP-like iterations to critical points.
The stopping criteria for the NP-like method are based on the relative distance between two successive iterates and on the relative projected gradient norm. In addition, a maximum number of NP iterations has been fixed.
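For concreteness, these stopping tests can be coded as in the following sketch, where tolX, tolG and maxNP are illustrative names for the tolerances and the maximum number of iterations, gradNew is the gradient at the new iterate and pg0 is the projected gradient at the starting point.

% Illustrative stopping tests for the NP-like iterations.
relStep = norm(xNew(:) - x(:)) / norm(x(:));          % relative change of the iterates
pg      = xNew - max(xNew - gradNew, 0);              % projected gradient at xNew
relPG   = norm(pg(:)) / norm(pg0(:));                 % relative to the initial projected gradient
stopNP  = (relStep < tolX) || (relPG < tolG) || (ell >= maxNP);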
3. Numerical Experiments
All the experiments are performed under Windows 10 and MATLAB R2021a running on a desktop computer (Intel(R) Core(TM) i5-8250U CPU @ 1.60 GHz). Quantitatively, we evaluate the quality of the image restoration by the relative error (RE), the improved signal-to-noise ratio (ISNR) and the mean structural similarity index (MSSIM). The MSSIM is defined by Wang et al. [21] and the ISNR is calculated as:
$$ \mathrm{ISNR} = 10 \log_{10} \frac{\|b - \tilde{x}\|^2}{\|x_r - \tilde{x}\|^2}, $$
where $x_r$ is the restored image, $\tilde{x}$ is the reference image and $b$ is the blurred, noisy image.
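These measures can be computed in MATLAB as in the sketch below, where Xr, Xref and Bobs are illustrative names for the restored, reference and observed images; the RE is assumed to be the standard relative error in the Euclidean norm, and the MSSIM is obtained through the ssim function of the Image Processing Toolbox.

% Illustrative computation of the quality measures.
RE    = norm(Xr(:) - Xref(:)) / norm(Xref(:));                           % relative error (assumed definition)
ISNR  = 10*log10( norm(Bobs(:) - Xref(:))^2 / norm(Xr(:) - Xref(:))^2 ); % improved signal-to-noise ratio
MSSIM = ssim(Xr, Xref);                                                  % mean structural similarity [21]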
Four reference images were used in the experiments: galaxy, mri, leopard and elaine, shown in Figure 1, Figure 2, Figure 3 and Figure 4. The first three images have the same size, while the elaine image has a different one. In order to define the test problems, each reference image was convolved with two PSFs, corresponding to a Gaussian blur with variance 2, generated by the psfGauss function from the MATLAB toolbox RestoreTools [1], and an out-of-focus blur with radius 5, obtained with the function fspecial from the MATLAB Image Processing Toolbox. The resulting blurred images were then corrupted by Gaussian noise at several different noise levels.
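The test problems can be generated along the lines of the following sketch. It is only illustrative: the file name and the noise level value are hypothetical, the blur is applied with periodic boundary conditions, and psfGauss is assumed to take the image size and the standard deviation of the Gaussian as arguments.

% Illustrative generation of a blurred and noisy test image.
Xref = im2double(imread('leopard.png'));     % reference image (hypothetical file name)
[nr, nc] = size(Xref);
PSF  = psfGauss([nr nc], sqrt(2));           % Gaussian PSF with variance 2 (RestoreTools)
% PSF = fspecial('disk', 5);                 % alternative: out-of-focus blur with radius 5
Bexact = imfilter(Xref, PSF, 'circular');    % blurred image, periodic boundary conditions
noise  = randn(nr, nc);
nl     = 0.01;                               % hypothetical noise level ||e|| / ||A*x||
e      = nl * norm(Bexact, 'fro') * noise / norm(noise, 'fro');
B      = Bexact + e;                         % observed (blurred and noisy) image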
We compare the proposed pixel-wise multi-penalty regularization model (MULTI) with Tikhonov (TIKH), Total Variation (TV) and Total Generalized Variation (TGV) regularization with nonnegativity constraints. The regularization parameter values for TIKH, TV and TGV were chosen heuristically by minimizing the relative error values. The Alternating Direction Method of Multipliers (ADMM) was used for the solution of the TV-based minimization problem, while for TIKH we used the Scaled Gradient Projection (SGP) method with Barzilai and Borwein rules for the step length selection. Regarding the TGV regularization, the RESPOND method [22] was used. We remark that RESPOND was originally proposed for the restoration of images corrupted by Poisson noise using Directional Total Generalized Variation regularization; it has been adapted here to deal with TGV-based restoration of images under Gaussian noise. The MATLAB implementation for Poisson noise is available on GitHub.
The tolerance $\tau$ in the outer loop of MULTI (step 7 of Algorithm 1) and the tolerance of the NP method were kept fixed in all the experiments; the maximum number of outer iterations was 20 and the maximum number of NP iterations was 1000.
The size $R$ of the neighborhood $\nu_i$ in (8) was the same for all tests except for galaxy, where a different neighborhood size was used.
In order to compare all the algorithms at their best performance, the values of the parameter $\beta$ in (6) used in each test are reported in Table 1, where we observe that the value of $\beta$ is proportional to the noise level. The parameter $\beta$ represents a threshold and, in general, should be small when compared with the nonzero values of $(L x)_i^2$. We note that, at the cost of adjusting a single parameter $\beta$, it is possible to achieve point-wise optimal regularization.
Table 2,
Table 3,
Table 4 and
Table 5 report the numerical results for all the test problems. The last column of the tables shows the values of the regularization parameter used for TIKH, TV and TGV while, for MULTI, it reports the norm of the regularization parameter vector $\boldsymbol{\lambda}$ computed by Algorithm 1. Column 7 shows the number of RESPOND, ADMM and SGP iterations for TGV, TV and TIKH, respectively. For the MULTI algorithm, Column 7 shows the number of outer iterations and, in parentheses, the number of NP iterations.
In
Table 2,
Table 3,
Table 4 and
Table 5, we observe that TGV always outperforms TIKH and TV in terms of accuracy, since it has greater MSSIM and ISNR values and smaller values of RE. Therefore, in
Figure 1,
Figure 2,
Figure 3 and
Figure 4 we show the images obtained by MULTI and TGV for the out-of-focus blur, as this is a very challenging case.
Moreover, from the images provided in
Figure 5, it can be observed that the MULTI method better preserves the local characteristics of the image, avoiding the flattening of smooth areas and optimally preserving the sharp contours. We observe that a smooth area such as the cheek is better represented by MULTI, avoiding the staircasing effect, and that areas with strong contours, such as the teeth and the eyes, are better depicted.
The regularization parameters computed by MULTI are represented in
Figure 6; the adherence of the regularization parameter values to the image content is clear, with larger values in areas where the image is flat and smaller values where there are pronounced gradients (edges). The parameter range automatically adjusts to the different test types.
Finally, we show in Figure 7 an example of the algorithm behaviour, reporting the history of the regularization parameter norm, the relative error and the residual norm for the leopard test with out-of-focus blur. The relative error flattens after a few iterations, and the same behaviour can be observed in all the other tests. Therefore, we used a large tolerance value $\tau$ in the outer loop of Algorithm 1.