1 Introduction

Images are ubiquitous and indispensable in modern communication. However, in many applications, such as microscopy imaging, remote sensing, and astronomical imaging, the observed images are often degraded. Image restoration addresses this problem because it allows the recovery of the lost information from the observed degraded image data [42]. In the past, many researchers (see [3, 33, 38] for a review and [1, 7, 14–16, 36, 37]) have worked on recovering clear images from degraded ones. The most common image degradation model (shown in Fig. 1) can be represented by the following system

$$\begin{aligned} g=h*f+n, \end{aligned}$$
(1)

where \(f( {x,y}):\Omega \subset R^2\rightarrow R\) is the original image, \(g\) is its observed degraded image, \(h\) is a blur kernel or point spread function (PSF), \(n\) is zero-mean Gaussian white noise with standard deviation \(\sigma \), and \(*\) denotes the 2-D convolution operator.
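To make the degradation model concrete, the following is a minimal NumPy sketch of Eq. (1). The normalized Gaussian PSF constructor and the parameter values are illustrative assumptions of this sketch, not specifications from the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size, std):
    """Normalized Gaussian blur kernel (an illustrative choice of h)."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    h = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * std ** 2))
    return h / h.sum()  # normalize so the blur preserves mean intensity

def degrade(f, h, sigma, seed=None):
    """g = h * f + n with zero-mean Gaussian noise of std sigma (Eq. 1)."""
    rng = np.random.default_rng(seed)
    g = fftconvolve(f, h, mode="same")  # 2-D convolution h * f
    return g + rng.normal(0.0, sigma, f.shape)  # additive noise n
```

For instance, `degrade(f, gaussian_psf(7, 1.5), 0.01)` produces the kind of blurred, noisy observation used in Sect. 4 (the noise level 0.01 is an assumed value).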

Fig. 1. Degradation process of the original image

If the blur kernel is exactly known a priori, recovering the original image is called a nonblind deconvolution problem; otherwise, it is called a blind deconvolution problem. Recently, Levin et al. [20] pointed out that the simultaneous estimation of both the image and the blur kernel is ill-posed, while estimating the blur kernel alone is better conditioned because the number of parameters to estimate is small relative to the number of image pixels measured. As a result, many approaches [8, 13, 18–20, 29] have been proposed to accurately estimate the blur kernel first, after which the image restoration problem turns into a nonblind deconvolution problem.

It is well known that the nonblind deconvolution problem is typically ill-posed; a classical way to overcome this ill-posedness is to add a regularization term, introduced in 1977 by Tikhonov and Arsenin [32]. To this day, a number of regularization-based image restoration methods [2, 5, 10, 11, 22–26, 28, 30, 32, 39, 40] have been proposed to solve this inverse problem. Using a variational framework, the nonblind deconvolution problem can be tackled by minimizing an objective function as follows:

$$\begin{aligned} \mathop {\min }\limits _f \left\{ {J(f)=\left\| {g-h*f} \right\| _2^2 +\lambda \sum \limits _{i\in \Omega } {\varphi ( {\left| {d_i f} \right| })}} \right\} , \end{aligned}$$
(2)

where the first term is the data fidelity term, which measures the fidelity between the observed image \(g\) and the original image \(f\). The second term is the regularization, which encodes a prior model of the original image \(f\), and \(\lambda >0\) is a parameter that controls the trade-off between the data fidelity and prior terms. \(\varphi :R^+\rightarrow R\) is a potential function (PF), and \(\left\{ {d_i } \right\} \) yields the first-order differences between each pixel and its 4 or 8 adjacent neighbors.
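As a sanity check, the objective in Eq. (2) can be evaluated directly. The sketch below is our illustration using 4-neighbor first-order differences; it accepts any potential function \(\varphi \) passed as `phi`, e.g. `phi = lambda t: t**0.9`.

```python
import numpy as np
from scipy.signal import fftconvolve

def objective(f, g, h, lam, phi):
    """J(f) of Eq. (2) with 4-neighbor first-order differences."""
    fidelity = np.sum((g - fftconvolve(f, h, mode="same")) ** 2)
    dx = np.diff(f, axis=1)  # horizontal differences d_i f
    dy = np.diff(f, axis=0)  # vertical differences d_i f
    prior = phi(np.abs(dx)).sum() + phi(np.abs(dy)).sum()
    return fidelity + lam * prior
```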

The most usual choice for \(\varphi \) is a convex regularization function; for example, the \(L^1\) norm allows discontinuities, which makes it an acceptable PF for images [28]. Nevertheless, nonconvex nonsmooth (NCNS) regularization offers much richer possibilities for restoring high-quality images with neat edges [23]. Moreover, Fu et al. [10] proposed an adaptive nonconvex total variation (ANCTV) regularization for image restoration, which can adaptively adjust the regularization strength according to the spatial properties of the image.

However, the adaptive nonconvex total variation regularization is based on the image gradient, which is highly sensitive to noise. Tian's previous work [31] proposed a good spatial information indicator, called the difference eigenvalue edge indicator, which can characterize fine image structures even in noisy circumstances. In this paper, an adaptive nonconvex nonsmooth (ANCNS) regularization based on this spatial information indicator is proposed for image restoration; meanwhile, an efficient numerical algorithm for solving the resulting minimization problem is introduced by applying the variable splitting and penalty techniques.

The rest of this paper is organized as follows. In Sect. 2, our ANCNS regularization algorithm is presented in detail. The optimization method is described in Sect. 3. In Sect. 4, some experimental results and discussion are presented. Finally, conclusions are drawn in Sect. 5. A preliminary version of this paper was presented at the CISP conference [41].

2 Adaptive Nonconvex Nonsmooth Image Restoration

In this section, our ANCNS image restoration algorithm is presented. Firstly, the nonconvex nonsmooth model is reviewed, and then the ANCNS model is introduced.

2.1 Nonconvex Nonsmooth Model

Since inverse problems are typically ill-posed, it is standard to use a regularization technique to make them well-posed [32, 39]. Tikhonov regularization (the \(L^2\) norm, i.e., \(\varphi ( t)=t^2\)) was the first introduced in the literature; however, it leads to oversmoothing of edges. To preserve image edges as much as possible, Rudin et al. [28] proposed to use the \(L^1\) norm of the gradient (i.e., \(\varphi ( t)=t\)) instead of the \(L^2\) norm, known as the ROF model or the total variation (TV) model. The TV model can preserve image edges while removing noise; however, it cannot deblur or sharpen edges. To restore the discontinuous features of an image, such as edges, lines, and other fine details, and to remove image noise, a backward diffusion in the local normal direction and a forward diffusion in the tangent direction [9] are needed. As a result, the nonconvex nonsmooth regularization was proposed as follows [6, 25]:

$$\begin{aligned} \varphi ( t)=t^p, \quad ({0<p<1}). \end{aligned}$$
(3)

2.2 Adaptive Nonconvex Nonsmooth Model

In order to adaptively adjust the regularization strength according to different regions and features of an image, Fu et al. [10] generalized the total variation regularization and proposed an adaptive nonconvex total variation regularization for image restoration. However, the ANCTV regularization is highly sensitive to noise because it is based on the image gradient. In this paper, an adaptive nonconvex nonsmooth regularization model that can obtain better restored images even in noisy circumstances is introduced. A critical problem is to select a good spatial information indicator that can characterize fine image structures in the presence of noise. To this end, we use the spatial information extractor called the difference eigenvalue from Tian et al. [31] to indicate the spatial information. The difference eigenvalue is based on the Hessian matrix of the image [4]. We use a Gaussian-filtered version of the Hessian matrix to improve robustness to noise as follows [36]:

$$\begin{aligned} J_\rho =\left[ {\begin{array}{ll} j_{11} &{} j_{12} \\ j_{21} &{} j_{22} \end{array}} \right] =\left[ {\begin{array}{ll} g_{xx} *G_\rho &{} g_{xy} *G_\rho \\ g_{yx} *G_\rho &{} g_{yy} *G_\rho \end{array}} \right] , \end{aligned}$$
(4)

where \(G_\rho \) denotes the Gaussian kernel with parameter \(\rho \) (the kernel size is \(5\times 5\) and \(\rho =0.8\) in this paper). The matrix \(J_\rho \) is positive semidefinite, and its two eigenvalues, denoted by \(\lambda _1 \) and \(\lambda _2 \), are given by:

$$\begin{aligned} \lambda _{1,2} =\frac{1}{2}\left[ {( {j_{11} +j_{22} })\pm \sqrt{( {j_{11} -j_{22} })^2+4j_{12}^2 } } \right] . \end{aligned}$$
(5)

Let \(\lambda _1 \) denote the larger eigenvalue and \(\lambda _2 \) the smaller one. Here, \(\lambda _1 \) and \(\lambda _2 \) correspond to the maximum and minimum local variation at a pixel, respectively. The modified difference eigenvalue edge indicator \(D(x,y)\) is defined as follows:

$$\begin{aligned} D( {x,y})=( {\lambda _1 -\lambda _2 })\lambda _1 w( {g( {x,y})}), \end{aligned}$$
(6)

where \(w( {g( {x,y})})\) is a weighting factor used to achieve a balance between detail enhancement and noise suppression; its value is given by

$$\begin{aligned} w( {g( {x,y})})=\frac{\delta ( {x,y})-\min ( \delta )}{\max ( \delta )-\min ( \delta )}, \end{aligned}$$
(7)

where \(\max (\delta )\) and \(\min ( \delta )\) are the maximum and minimum gray level variances of the image \(g\), respectively. For a given pixel with coordinates \(( {x,y})\), the gray level variance \(\delta ( {x,y})\) is calculated over its \(3\times 3\) neighborhood

$$\begin{aligned} \delta ( {x,y})=\frac{1}{9}\sum \limits _{i=-1}^1 {\sum \limits _{j=-1}^1 {\left( {g\left( {x+i,y+j}\right) -g\left( {x,y}\right) }\right) ^2} }. \end{aligned}$$
(8)
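Putting Eqs. (4)–(8) together, the indicator can be computed as in the following sketch. The central-difference derivatives and the truncation of the Gaussian to a \(5\times 5\) support are assumptions of this sketch; the parameter values follow the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def difference_eigenvalue(g, rho=0.8):
    """Difference eigenvalue edge indicator D(x, y) of Eqs. (4)-(8)."""
    # Second derivatives of g (central differences), then Gaussian smoothing (Eq. 4)
    gy, gx = np.gradient(g)              # axis 0 = y, axis 1 = x
    gyy, gyx = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    j11 = gaussian_filter(gxx, rho, truncate=2.5)  # ~5x5 support for rho = 0.8
    j12 = gaussian_filter(gxy, rho, truncate=2.5)
    j22 = gaussian_filter(gyy, rho, truncate=2.5)
    # Eigenvalues of J_rho (Eq. 5)
    root = np.sqrt((j11 - j22) ** 2 + 4.0 * j12 ** 2)
    lam1 = 0.5 * ((j11 + j22) + root)    # larger eigenvalue
    lam2 = 0.5 * ((j11 + j22) - root)    # smaller eigenvalue
    # Gray level variance about the center pixel in a 3x3 window (Eq. 8)
    m1 = uniform_filter(g, size=3)
    m2 = uniform_filter(g * g, size=3)
    delta = m2 - 2.0 * g * m1 + g * g
    # Weighting factor (Eq. 7) and the indicator itself (Eq. 6)
    w = (delta - delta.min()) / (delta.max() - delta.min() + 1e-12)
    return (lam1 - lam2) * lam1 * w
```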

With the spatial information extraction result \(D( {x,y})\) using the difference eigenvalue, the ANCNS regularization for image restoration can be introduced as follows:

$$\begin{aligned} \varphi ( t)=t^{p(x,y)},\quad p( {x,y})=\left\{ {\begin{array}{ll} \frac{1}{1+l( {D( {x,y})-T})},&{} D( {x,y})\ge T \\ 2,&{} D( {x,y})<T \end{array}} \right. \end{aligned}$$
(9)

where \(T\) is a threshold and \(l\) is the contrast factor.

From Eqs. (6) and (9), it is clearly seen that for pixels in smooth areas, since the \(D( {x,y})\) value is less than the threshold \(T\), \(\varphi ( t)\) reduces to \(t^2\), which means that the proposed adaptive regularization becomes the \(L^2\) regularization, and the noise in smooth regions is well suppressed. Conversely, for nonsmooth pixels, because the \(D( {x,y})\) value is greater than or equal to the threshold \(T\), \(\varphi ( t)\) becomes \(t^p\) with \(0<p\le 1\), which means that an NCNS regularization is enforced at these pixels, so edges and texture are well preserved.
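The resulting exponent map of Eq. (9) is a simple thresholding of \(D(x,y)\). The sketch below illustrates it; the default values of \(T\) and \(l\) mirror those used in Sect. 4 and are otherwise tunable.

```python
import numpy as np

def exponent_map(D, T=0.03, l=0.1):
    """Spatially varying exponent p(x, y) of Eq. (9)."""
    p = np.full_like(D, 2.0)                    # smooth regions: L2 regularization
    edge = D >= T
    p[edge] = 1.0 / (1.0 + l * (D[edge] - T))   # edges/texture: 0 < p <= 1
    return p
```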

According to the analysis above, the ANCNS regularization has the ability to adaptively adjust the regularization strength according to the spatial property of the image. Moreover, in contrast to the ANCTV model, because the difference eigenvalue edge indicator is more robust to noise than the image gradient when extracting spatial information, the ANCNS model can produce better restoration results. Tracing these image regularization methods, Fig. 2 shows how the power \(p\) of the energy function (2) has evolved.

Fig. 2. Change of power \(p\) and corresponding regularization methods

3 Optimization Procedure

The minimization of a nonconvex nonsmooth energy involves three major difficulties that drastically restrict the methods that can be envisaged [23]: owing to the nonconvexity of \(\varphi \), \(J( f)\) may exhibit a large number of local minima which are not global; \(J( f)\) is usually nonsmooth at the minimizers; and the usual gradient-based methods are inappropriate even for local minimization. Consequently, the practical interest of NCNS regularization used to be limited. In this section, an efficient algorithm is introduced to solve the resulting ANCNS minimization problem by applying the variable splitting and penalty techniques.

First, we approximate \(\varphi \) by a sequence \(\varphi _{\varepsilon _k } :R^+\rightarrow R^+\) (where we consider a sequence \(\varepsilon _0 =0<\varepsilon _1 <\cdots <\varepsilon _k <\cdots <\varepsilon _n =1\)) such that \(\varphi _0 \) is convex and \(\varphi _{\varepsilon _k } \) monotonically approaches \(\varphi \) as \(\varepsilon _k \) goes from 0 to 1, with \(\varphi _1 =\varphi \). Correspondingly, our minimization problem is approximated by a sequence of minimization problems as given below

$$\begin{aligned} \mathop {\min }\limits _f \left\{ {J_{\varepsilon _k } ( f)=\left\| {g-h*f} \right\| _2^2 +\lambda \sum \limits _{i\in \Omega } {\varphi _{\varepsilon _k } ( {\left| {d_i f} \right| })} } \right\} , \end{aligned}$$
(10)

where \(0\le \varepsilon _k \le 1\). Since \(\varphi _{\varepsilon _k } \) should be differentiable on \(R^+\), we replace \(\varphi ( t)=t^p\) by the similar function \(\varphi ( t)=( {t+\beta })^p\) and construct \(\varphi _{\varepsilon _k } ( t)=( {t+\beta })^{\varepsilon _k p+1-\varepsilon _k }\), where \(\beta \) is a small positive parameter.
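A short sketch of this continuation follows: the smoothed potentials and an evenly spaced \(\varepsilon \) schedule. The schedule itself and the value of \(\beta \) are assumptions; the paper only requires \(\varepsilon _0 =0\) and \(\varepsilon _n =1\).

```python
import numpy as np

def phi_eps(t, eps, p, beta=0.002):
    """Smoothed potential phi_{eps_k}(t) = (t + beta)^(eps*p + 1 - eps).

    eps = 0 gives the convex phi_0(t) = t + beta; eps = 1 gives (t + beta)^p.
    """
    return (t + beta) ** (eps * p + 1.0 - eps)

eps_schedule = np.linspace(0.0, 1.0, 6)   # eps_0 = 0 < ... < eps_n = 1
```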

We can further rewrite Eq. (10) as follows:

$$\begin{aligned} \mathop {\min }\limits _f \left\{ {J_{\varepsilon _k } ( f)=\left\| {g-h*f} \right\| _2^2 +\lambda \sum \limits _{i\in \Omega } {\psi _{\varepsilon _k } ( {\left| {d_i f} \right| })} +\lambda \beta ^{\varepsilon _k p-\varepsilon _k }\sum \limits _{i\in \Omega } {( {\left| {d_i f} \right| +\beta })} } \right\} , \end{aligned}$$
(11)

where \(\psi _{\varepsilon _k } ( t)=( {t+\beta })^{\varepsilon _k p+1-\varepsilon _k }-\beta ^{\varepsilon _k p-\varepsilon _k }( {t+\beta })\). Removing the constant term, Eq. (11) can be simplified as

$$\begin{aligned} \mathop {\min }\limits _f \left\{ {J_{\varepsilon _k } ( f)=\left\| {g-h*f} \right\| _2^2 +\lambda \sum \limits _{i\in \Omega } {\psi _{\varepsilon _k } ( {\left| {d_i f} \right| })} +\lambda \beta ^{\varepsilon _k p-\varepsilon _k }\left\| {\nabla f} \right\| _2 } \right\} . \end{aligned}$$
(12)

The second term \(\psi _{\varepsilon _k } ( t)\) is \(C^2\)-smooth and concave on \(R^+\), whereas the third term is convex and nonsmooth.

Then, following the idea of [17], an auxiliary variable \(u\) and a quadratic penalty term are introduced into \(J_{\varepsilon _k } ( f)\) as follows:

$$\begin{aligned} \mathop {\min }\limits _{( {f,u})} \left\{ {J_{\varepsilon _k } ( {f,u})=\left\| {g-h*f} \right\| _2^2 +\lambda \sum \limits _{i\in \Omega } {\psi _{\varepsilon _k } ( {\left| {d_i f} \right| })} +\lambda \beta ^{\varepsilon _k p-\varepsilon _k }\left\| {\nabla u} \right\| _2 +w\left\| {u-f} \right\| _2^2 } \right\} , \end{aligned}$$
(13)

where \(w>0\) is a positive parameter that controls the weight of the penalty term; it can be gradually increased during the iterations in order to force \(u\) to be close to \(f\). With the variable \(u\) fixed, \(J_{\varepsilon _k } ( {f,\cdot })\) is a twice differentiable function with respect to \(f\), so it can be minimized by gradient-based methods. With \(f\) fixed, minimizing \(J_{\varepsilon _k } ( {\cdot ,u})\) with respect to \(u\) amounts to a TV denoising problem, which can be solved efficiently by existing methods such as the Split Bregman algorithm [12].

The computational steps can be given as follows:

$$\begin{aligned} f^{( {j,k})}&= \arg \mathop {\min }\limits _f J_{\varepsilon _k } ( {f,u^{(j-1,k)}})\nonumber \\&= \arg \mathop {\min }\limits _f \left\{ {\left\| {g-h*f} \right\| _2^2 +\lambda \sum \limits _{i\in \Omega } {\psi _{\varepsilon _k } ( {\left| {d_i f} \right| })} +w\left\| {f-u^{(j-1,k)}} \right\| _2^2 } \right\} \end{aligned}$$
(14)
$$\begin{aligned} u^{( {j,k})}=\arg \mathop {\min }\limits _u J_{\varepsilon _k } ( {f^{(j,k)},u})=\arg \mathop {\min }\limits _u \left\{ {\lambda \beta ^{\varepsilon _k p-\varepsilon _k }\left\| {\nabla u} \right\| _2 +w\left\| {f^{(j,k)}-u} \right\| _2^2 } \right\} , \end{aligned}$$
(15)

where \(\psi _{\varepsilon _k } ( t)=( {t+\beta })^{\varepsilon _k p+1-\varepsilon _k }-\beta ^{\varepsilon _k p-\varepsilon _k }( {t+\beta })\). In this case, for each \(\varepsilon _k \), we initialize with \(u^{(0,k)}=u_{\varepsilon _{k-1} } \), where \(u_{\varepsilon _{k-1} } \) results from the minimization of \(J_{\varepsilon _{k-1} } \), and then we obtain a sequence of iterates

$$\begin{aligned} \begin{aligned}&f^{(1,k)},u^{(1,k)},f^{(2,k)},u^{(2,k)},\ldots ,f^{(j,k)},u^{(j,k)},\ldots \\&\,\hbox {for each}\;k=0,1,\ldots ,n. \\ \end{aligned} \end{aligned}$$
  1. Computation of \(f^{( {j,k})}\) according to (14). For \(\varepsilon _0 =0\) (i.e., \(k=0\)), the minimization problem amounts to minimizing the convex quadratic function

    $$\begin{aligned} J_0 ( {f,u^{(j-1,0)}})=\left\| {g-h*f} \right\| _2^2 +w\left\| {f-u^{(j-1,0)}} \right\| _2^2. \end{aligned}$$
    (16)

    It is equivalent to solving the linear system (an FFT-based sketch is given after this list)

    $$\begin{aligned} ( {h( {-x,-y})*h+wI})f^{( {j,0})}=h( {-x,-y})*g+wu^{(j-1,0)}. \end{aligned}$$
    (17)

    For \(\varepsilon _k >0\) (i.e., \(k>0\)), a quasi-Newton method [27] can be used to solve (14) as follows:

    $$\begin{aligned} f^{(j,k)}=f^{(j-1,k)}+\tau \Delta f^{(j-1,k)} \end{aligned}$$
    (18)
    $$\begin{aligned} 2( {h( {-x,-y})*h+wI})\Delta f^{( {j-1,k})}=-\nabla _f J_{\varepsilon _k } ( {f,u^{(j-1,k)}}), \end{aligned}$$
    (19)

    where \(h( {-x,-y})\) denotes the adjoint of \(h\), and \(\tau >0\) is the step size.

  2. Computation of \(u^{( {j,k})}\) according to (15). Minimization of the objective function (15) amounts to a TV denoising problem, which can be solved efficiently by the Split Bregman algorithm [12] as follows (a sketch is also given after this list):

    $$\begin{aligned} ( {2wI-\mu \Delta })u^{( {j,k,i+1})}&= 2wf^{(j,k)}+\mu \nabla _x^T ( {d_x^i -b_x^i })+\mu \nabla _y^T ( {d_y^i -b_y^i })\end{aligned}$$
    (20)
    $$\begin{aligned} d_x^{i+1}&= \max \left( {s^i-\frac{\lambda \beta ^{\varepsilon _k p-\varepsilon _k }}{\mu },0}\right) \frac{\nabla _x u^{( {j,k,i})}+b_x^i }{s^i}\end{aligned}$$
    (21)
    $$\begin{aligned} d_y^{i+1}&= \max \left( {s^i-\frac{\lambda \beta ^{\varepsilon _k p-\varepsilon _k }}{\mu },0}\right) \frac{\nabla _y u^{( {j,k,i})}+b_y^i }{s^i} \end{aligned}$$
    (22)
    $$\begin{aligned} s^i&= \sqrt{\left| {\nabla _x u^{( {j,k,i})}+b_x^i } \right| ^2+\left| {\nabla _y u^{( {j,k,i})}+b_y^i } \right| ^2}\end{aligned}$$
    (23)
    $$\begin{aligned} b_x^{i+1}&= b_x^i +\left( {\nabla _x u^{( {j,k,i+1})}-d_x^{i+1} }\right) \end{aligned}$$
    (24)
    $$\begin{aligned} b_y^{i+1}&= b_y^i +\left( {\nabla _y u^{( {j,k,i+1})}-d_y^{i+1} }\right) , \end{aligned}$$
    (25)

    where \(\mu \), like \(w\), is a positive penalty parameter.
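For reference, both subproblems admit compact implementations under periodic boundary conditions, which is an assumption of the sketches below (the paper does not specify the boundary handling). First, the quadratic \(f\)-update of Eq. (17) can be solved exactly in the Fourier domain, where convolution with \(h\) is diagonal:

```python
import numpy as np

def f_update_quadratic(g, u, h, w):
    """Solve (h(-x,-y)*h + wI) f = h(-x,-y)*g + w u  (Eq. 17) via FFT."""
    H = np.fft.fft2(h, s=g.shape)          # transfer function of the PSF
    # (the PSF is zero-padded here; for exact results it should be
    #  circularly centered at the origin)
    num = np.conj(H) * np.fft.fft2(g) + w * np.fft.fft2(u)
    den = np.abs(H) ** 2 + w               # spectrum of h(-x,-y)*h + wI
    return np.real(np.fft.ifft2(num / den))
```

Second, a sketch of the Split Bregman iteration of Eqs. (20)–(25) for the \(u\)-update; here `alpha` stands for the TV weight \(\lambda \beta ^{\varepsilon _k p-\varepsilon _k }\), and the inner iteration count is an assumption:

```python
import numpy as np

def dxf(u): return np.roll(u, -1, axis=1) - u   # forward difference in x
def dyf(u): return np.roll(u, -1, axis=0) - u   # forward difference in y
def dxT(v): return np.roll(v, 1, axis=1) - v    # adjoint of dxf
def dyT(v): return np.roll(v, 1, axis=0) - v    # adjoint of dyf

def tv_denoise_split_bregman(f, alpha, w, mu, n_iter=30):
    """Minimize alpha * ||grad u||_2 + w * ||u - f||_2^2 via Eqs. (20)-(25)."""
    u = f.copy()
    dx = np.zeros_like(f); dy = np.zeros_like(f)
    bx = np.zeros_like(f); by = np.zeros_like(f)
    # Fourier symbol of the periodic Laplacian, used to invert (2wI - mu*Laplacian)
    ky, kx = np.meshgrid(np.fft.fftfreq(f.shape[0]),
                         np.fft.fftfreq(f.shape[1]), indexing="ij")
    lap = 2.0 * np.cos(2 * np.pi * kx) + 2.0 * np.cos(2 * np.pi * ky) - 4.0
    for _ in range(n_iter):
        rhs = 2.0 * w * f + mu * (dxT(dx - bx) + dyT(dy - by))     # Eq. (20)
        u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / (2.0 * w - mu * lap)))
        ux, uy = dxf(u) + bx, dyf(u) + by
        s = np.sqrt(ux ** 2 + uy ** 2) + 1e-12                     # Eq. (23)
        shrink = np.maximum(s - alpha / mu, 0.0) / s               # Eqs. (21)-(22)
        dx, dy = shrink * ux, shrink * uy
        bx, by = ux - dx, uy - dy                                  # Eqs. (24)-(25)
    return u
```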


4 Experimental Results and Discussion

In this section, we present three simulated-data experiments under different noise conditions to illustrate the performance of the proposed algorithm; the original images are shown in Fig. 3. To assess the relative merits of the proposed methodology, we compare the proposed algorithm with the restoration results obtained using the \(L^2\) (also abbreviated L2) regularization, the NCNS regularization (\(\varphi ( t)=\left| t \right| ^{0.9}\)), the ANCTV regularization, and the spatially adaptive TV regularization (RLSATV) [36].

Fig. 3. The test images. a Circuit, b Lena, c Pepper

In this paper, we use the improved signal-to-noise ratio (ISNR) [30], the structural similarity (SSIM) [34, 35], and a promising, recently proposed image quality assessment index based on edge/gradient similarity, called GSM [21], to evaluate the restoration results. For color images, we compute the ISNR, SSIM, and GSM values between each clear channel and the corresponding restored channel, and then average them; the averages are denoted MISNR, MSSIM, and MGSM. The definitions of these evaluation indices are as follows:

$$\begin{aligned} \mathrm{ISNR}&= 10\log _{10} \left( {\frac{\left\| {g-f} \right\| ^2}{\left\| {\overline{f} -f} \right\| ^2}}\right) \end{aligned}$$
(26)
$$\begin{aligned} \mathrm{SSIM}&= \frac{\left( {2\mu _f \mu _{\overline{f} } +C_1 }\right) \left( {2\delta _{f\overline{f} } +C_2 }\right) }{\left( {\mu _f^2 +\mu _{\overline{f} }^2 +C_1 }\right) \left( {\delta _f^2 +\delta _{\overline{f} }^2 +C_2 }\right) } \end{aligned}$$
(27)
$$\begin{aligned} \mathrm{GSM}&= \frac{2f_x f_y +C_3 }{f_x^2 +f_y^2 +C_3 } \end{aligned}$$
(28)
$$\begin{aligned} \mathrm{MISNR}&= \frac{1}{M}\sum \limits _{i=1}^M {\mathrm{ISNR}_i } \end{aligned}$$
(29)
$$\begin{aligned} \mathrm{MSSIM}&= \frac{1}{M}\sum \limits _{i=1}^M {\mathrm{SSIM}_i } \end{aligned}$$
(30)
$$\begin{aligned} \mathrm{MGSM}&= \frac{1}{M}\sum \limits _{i=1}^M {\mathrm{GSM}_i }, \end{aligned}$$
(31)

where \(M\) is the number of image channels. \(f\) and \(\overline{f} \) represent the original image and the restored result, and \(\mu _f \) and \(\mu _{\overline{f} } \) represent their average gray values, respectively. \(\delta _f \) and \(\delta _{\overline{f} } \) represent the variances of the original clear image and the restored image, respectively. \(f_x \) and \(f_y \) are the gradient values for the central pixels of image blocks \(x\) and \(y\), respectively. \(\delta _{f\overline{f} } \) represents the covariance between the original clear image and the restored image. \(C_1 \), \(C_2 \), and \(C_3 \) are small constants, which prevent unstable results when either \(\mu _f^2 +\mu _{\overline{f} }^2 \) or \(\delta _f^2 +\delta _{\overline{f} }^2 \) is very close to zero.
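As an illustration, the ISNR of Eq. (26) is a one-liner, and Eqs. (29)–(31) simply average the per-channel values. The sketch below also includes a global (single-window) form of Eq. (27); the constants are illustrative, and standard SSIM implementations use local windows instead.

```python
import numpy as np

def isnr(f, g, f_hat):
    """ISNR (Eq. 26): improvement of the restored f_hat over the degraded g."""
    return 10.0 * np.log10(np.sum((g - f) ** 2) / np.sum((f_hat - f) ** 2))

def misnr(f, g, f_hat):
    """MISNR (Eq. 29): average ISNR over the channels of a color image."""
    return np.mean([isnr(f[..., c], g[..., c], f_hat[..., c])
                    for c in range(f.shape[-1])])

def ssim_global(f, f_hat, C1=1e-4, C2=9e-4):
    """Global form of Eq. (27); C1 and C2 are assumed stabilizing constants."""
    mu_f, mu_r = f.mean(), f_hat.mean()
    var_f, var_r = f.var(), f_hat.var()
    cov = np.mean((f - mu_f) * (f_hat - mu_r))
    return ((2 * mu_f * mu_r + C1) * (2 * cov + C2)) / \
           ((mu_f ** 2 + mu_r ** 2 + C1) * (var_f + var_r + C2))
```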

In each experiment, we strive to be impartial when collecting data. For each method tested, the regularization parameter and the other parameters of all prior models are adjusted until the best restoration result is obtained. Furthermore, to eliminate the bias created by different realizations of noise, we use the same degraded images for all methods. In each experiment, the best result is selected as the one with the highest ISNR value.

4.1 Experimental Results

First, we consider the two grayscale images Circuit and Lena. Figures 4a and 6a show the blurred and noisy images (Gaussian blur: \(7\times 7\) mask, \(\sigma =1.5\)), and Figs. 4b–f and 6b–f show the images restored with the \(L^2\) model, the RLSATV model, the NCNS model, the ANCTV model, and our ANCNS model, respectively. Detailed regions cropped from these figures are presented in Figs. 5 and 7, respectively. In our tests, we use the settings \(\beta =0.0021\), \(T=0.03\), \(\lambda =0.0008\), \(w=0.0087\), \(l=0.1\) for the Circuit image, and \(\beta =0.0024\), \(T=0.01\), \(\lambda =0.0002\), \(w=0.0102\), \(l=0.1\) for the Lena image. It is clear that the proposed ANCNS model gives a better restored result than the other four models: not only is the noise suppressed more thoroughly, but the edges and detailed information are also well preserved. For the \(L^2\) model, in contrast, the restored result is oversmoothed and some of the detailed information is lost. The results restored using the RLSATV model and the NCNS model are not as sharp as the image restored using the ANCNS model.

Fig. 4. Restored results of the Circuit image. a Blurred and noisy image, b image restored by L2, c image restored by RLSATV, d image restored by NCNS, e image restored by ANCTV, and f image restored by ANCNS

Fig. 5. Detailed regions cropped from Fig. 4. a Deblurred by L2, b deblurred by RLSATV, c deblurred by NCNS, d deblurred by ANCTV, and e deblurred by ANCNS

Fig. 6. Restored results of the Lena image. a The degraded image, b image restored by L2, c image restored by RLSATV, d image restored by NCNS, e image restored by ANCTV, and f image restored by ANCNS

Fig. 7. Detailed regions cropped from Fig. 6. a Deblurred by L2, b deblurred by RLSATV, c deblurred by NCNS, d deblurred by ANCTV, and e deblurred by ANCNS

We further demonstrate the ability of our ANCNS model on a color image; an example with the Pepper image is shown in Fig. 8. The image was degraded by a \(15\times 15\) Gaussian blur kernel with a standard deviation of 2, and then contaminated by Gaussian noise. We use \(\beta =0.002\), \(T=0.018\), \(\lambda =0.0008\), \(w=0.0102\), \(l=0.1\). It can be observed from Figs. 8b–f and 9a–e that the image edges in the result of the \(L^2\) model remain a little blurry, while the image edges produced by the RLSATV model and the NCNS model are not as sharp as those of the ANCNS model. In contrast, the ANCNS model performs better and produces a satisfactory result with sharp edges and smooth contours, and most image details and fine discontinuities are restored reasonably.

Fig. 8. Restored results of the Pepper image. a The degraded image, b image restored by L2, c image restored by RLSATV, d image restored by NCNS, e image restored by ANCTV, and f image restored by ANCNS

Fig. 9. Close-ups of selected sections of Fig. 8. a Deblurred by L2, b deblurred by RLSATV, c deblurred by NCNS, d deblurred by ANCTV, and e deblurred by ANCNS

The ANCNS model can automatically adjust the regularization strength according to the spatial property of the image; thus, the restored results are better than those achieved by the \(L^2\) model, as well as those derived from the NCNS model. In contrast with the RLSATV model, the ANCNS model uses a nonconvex nonsmooth regularization function, so it can provide a visually appealing output. Compared with the ANCTV regularization, because its spatial information indicator is more robust in the presence of noise, the ANCNS model yields more satisfactory results. The good performance of the proposed ANCNS model is also illustrated by the ISNR, SSIM, and GSM values shown in Table 1: the ANCNS regularization produces the highest ISNR value, together with the highest SSIM and GSM values, which illustrates that the proposed method produces a restoration closer to the original image in terms of both gray values and image structure.

Table 1 The ISNR, SSIM, and GSM values of the different methods in the experiments

4.2 Discussion

The threshold parameter \(T\), which distinguishes smooth areas from nonsmooth areas, has an important effect on the restored result. If it is too small, smooth areas will be identified as nonsmooth areas, and the noise will not be fully suppressed. Conversely, if it is too large, nonsmooth areas will be identified as smooth areas, and the edge information cannot be fully preserved. Figure 10 plots the change in the ISNR value as the parameter \(T\) varies from 0.01 to 0.04 in the first experiment. A satisfactory result is produced when \(T\) is about 0.03.

Fig. 10. Change in the ISNR value versus the threshold parameter \(T\) in the first experiment

5 Conclusion

In this paper, an ANCNS regularization model using a spatial information indicator has been proposed. The proposed model can automatically adjust the regularization strength according to the spatial characteristics of the image. Moreover, an efficient numerical algorithm was introduced to solve the ANCNS minimization problem by applying the variable splitting and penalty techniques. Comparative results on simulated images show that the ANCNS model produces a better restoration of image edges and fine details. Furthermore, because the spatial information indicator is robust even in noisy circumstances, the proposed ANCNS model is preferable in practical applications.