Abstract
Blind image deblurring is a classical ill-posed problem that usually requires constraints on the clean image, the blur kernel, and the noise to make it well-posed. Recently, a simple yet effective sparse norm \(l_e\), which sums two widely adopted sparse norms, i.e., \(l_0\) and \(l_1\), was proposed. By using \(l_e\) to regularize the gradients of the clean image, and \(l_2+\nabla l_e\) as the noise fitting function, an enhanced sparse model for blind image deblurring was established and achieved surprisingly attractive results. In this paper, inspired by the facts that the gradients of a natural image tend to obey a heavy-tailed distribution and that noise exhibits spatial randomness, we propose a more flexible model, called the extended sparse model, which takes the enhanced sparse model as a special case. Specifically, for the image gradients, we suggest an improved sparse norm \(l_{\mathcal {P}}\), which is developed from \(l_0\) and \(l_p(0<p\le 1)\). Furthermore, we constrain the second-order derivative of the noise to boost the percentage of high frequencies in the fidelity term, so that the recovery focuses more on the high frequencies erased in the blurry image. Based on the half-quadratic splitting method and a variant of the generalized iterated shrinkage algorithm (GISA), we provide an effective optimization scheme for the overall model. Extensive evaluations on benchmark datasets and real images indicate the superiority of the proposed method against state-of-the-art deblurring algorithms.
1 Introduction
Blind image deblurring has been widely investigated and has seen substantial progress in the past few decades. In general, the blurry image B can be formulated as:

\(B = I * k + n,\)
where I, k, and n represent the latent clean image, the blur kernel, and the additive noise, respectively. We use \(*\) to denote the convolution operator. The goal of blind image deblurring is to estimate I and k from the given B. This is a highly ill-posed problem: theoretically, there exist infinitely many pairs of I and k that produce the same B. To resolve this ambiguity, certain priors on the clean image, the blur kernel, and the noise should be imposed to constrain the space of feasible solutions. In the Bayesian inference framework, the most commonly-used estimator is the maximum a posteriori (MAP):

\((I, k) = \arg \max _{I,k} P(I,k|B) = \arg \max _{I,k} P(B|I,k)P(I)P(k),\)
where P(B|I, k) is the likelihood function, and P(I) and P(k) denote the statistical priors of I and k, respectively. By taking \(-\log P(I,k|B)\), the MAP estimator is equivalent to minimizing the following regularized model:

\(\min _{I,k}\ R_{1}(B-I*k) + \alpha R_{2}(I) + \beta R_{3}(k),\)
where \(R_{1}(\cdot )\) is the data fidelity term used to model the noise n, \(R_{2}(\cdot )\) and \(R_{3}(\cdot )\) regularize the clean image and the blur kernel, respectively, and \(\alpha \) and \(\beta \) are regularization parameters.
Existing MAP-based works usually assume the noise to follow a Gaussian or Laplacian distribution [1,2,3,4]. Consequently, the \(l_2\)- or \(l_1\)-norm is adopted to model the noise, i.e., \(||B-I*k||_{2}^{2}\) or \(||B-I*k||_{1}\). However, as demonstrated in previous studies [5,6,7], real noise is empirically more complex, and using only the above formulations as the data fidelity term is less compelling. Recently, by modifying the noise modeling technique from [8], Chen et al. [9] proposed a weighted sum of the dense norm \(l_2\) and the sparse norm \(l_e\) to model the unknown noise:

\(R_{1}(B-I*k) = ||B-I*k||_{2}^{2} + \omega _{1}||\nabla _{1}(B-I*k)||_{e},\)
where \( \omega _{1} \) is a weight parameter, \( \nabla _{1} = \{\partial _{x},\partial _{y}\} \), and \( ||\cdot ||_{e}=||\cdot ||_{0}+||\cdot ||_{1} \). Compared with more complicated noise modeling techniques, such as learning data fitting terms [5] or the mixture of Gaussians [6, 7], the noise modeling (4) is surprisingly simple yet very effective, as confirmed in [9]. Considering the spatial randomness of the noise [10, 11] in the blurry image, we further introduce the second-order derivative of the residue image into (4):

\(R_{1}(B-I*k) = ||B-I*k||_{2}^{2} + \omega _{1}||\nabla _{1}(B-I*k)||_{e} + \omega _{2}||\nabla _{2}(B-I*k)||_{e},\)
where \( \omega _{2} \) is a weight parameter, and \( \nabla _{2} = \{\partial _{xx},\partial _{xy},\partial _{yy}\} \). In Fig. 1, it can be seen that the proposed data fidelity term (5) is more effective than the other two terms, as the corresponding results contain fewer artifacts.
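To make the data term concrete, the following Python sketch evaluates (5) with simple finite differences. This is our illustration rather than the authors' code; the discrete derivative operators, the boundary handling, and the default weights are assumptions.

```python
import numpy as np

def le_norm(x):
    # Enhanced sparse norm from [9]: ||x||_e = ||x||_0 + ||x||_1.
    return np.count_nonzero(x) + np.sum(np.abs(x))

def data_fidelity(B, I_conv_k, w1=4e-3, w2=4e-4):
    # Sketch of the proposed data term (5): l2 on the residue D = B - I*k,
    # plus l_e on its first- and second-order finite differences.
    D = B - I_conv_k
    dx, dy = np.diff(D, axis=1), np.diff(D, axis=0)
    dxx, dyy = np.diff(D, n=2, axis=1), np.diff(D, n=2, axis=0)
    dxy = np.diff(np.diff(D, axis=0), axis=1)
    return (np.sum(D ** 2)
            + w1 * (le_norm(dx) + le_norm(dy))
            + w2 * (le_norm(dxx) + le_norm(dxy) + le_norm(dyy)))
```

For a zero residue the term vanishes, while spatially random residues inflate the derivative terms, which is exactly what the fidelity penalizes.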
For the regularization term on the latent clean image, it is generally believed that the gradients of sharp images follow heavy-tailed distributions [12,13,14], which can be well modeled by a hyper-Laplacian prior \( P(\nabla ^{1}I)\propto e^{-\alpha |\nabla ^{1}I|^p} \) with \( 0.5\le p \le 0.8\). As a result, the \(l_p\)-norm is adopted on the image gradients for image deblurring [13, 15,16,17,18]. Other works [1,2,3, 9, 19,20,21] have also illustrated the importance of pursuing sparsity of image gradients for the blur kernel estimation, and they mostly use the \(l_0\)-norm to achieve sparsity. In particular, the work [9] proposes an enhanced sparse norm \(l_e\) to promote sparsity. Motivated by the success of these previous works, we propose a composite \(l_{\mathcal {P}}\)-norm for image gradients by taking advantage of both \(l_0\) and \(l_p (0<p\le 1)\). The proposed \(l_{\mathcal {P}}\)-norm can be sparser than either single norm as well as the \(l_e\)-norm, and when \(p=1\), the \(l_{\mathcal {P}}\)-norm reduces to the \(l_e\)-norm. With a proper \(p\) value, we confirm that using the \(l_{\mathcal {P}}\)-norm to regularize the image gradients for the blur kernel estimation is more rational. As shown in Fig. 2, the proposed \(l_{\mathcal {P}}\)-norm (with \(p\) set to 0.6) performs best among different regularization terms.
Minimizing the overall regularized model is challenging since the proposed \(l_{\mathcal {P}}\)-norm is highly non-convex. To address this optimization problem, we use the half-quadratic splitting method [22] to split the model into several sub-problems, and modify the thresholding rule of the generalized iterated shrinkage algorithm (GISA) [23] to solve the related \(l_{\mathcal {P}}\)-minimization problem. In the end, we summarize the contributions of this work as follows:
-
We present a more flexible deblurring model based on the model in [9]. With particular parameter settings, our model reduces to that of [9].
-
We propose an improved sparse regularization \(l_{\mathcal {P}}\)-norm to regularize the image gradients and introduce a new data fidelity term.
-
We provide an effective optimization scheme for the proposed deblurring model.
2 Related works
Extensive studies on blind image deblurring have developed abundant technologies, such as selecting salient edges for kernel estimation [4, 10, 24,25,26,27,28,29], end-to-end deblurring based on deep learning [30,31,32,33], and imposing sparsity on the image gradients [11, 15,16,17,18].
We only focus on the last category, which is highly related to the proposed method. In [11], Shan et al. concatenated two piece-wise functions to fit the heavy-tailed distribution of image gradients and combined it with a local prior to suppress ringing artifacts. Kotera et al. [17] used the \(l_{0.3}\)-norm on image gradients, and Gan et al. [16] adopted the \(l_{0.4}\)-norm on both image gradients and the blur kernel. Moreover, Almeida et al. [15] used the \(l_{p}\)-norm on gradients and manually set \(p\) values as a non-increasing sequence (i.e., 0.8, \(\cdots \), 0.6, \(\cdots \), 0.4, \(\cdots \)) while iterating. In [18], Zuo et al. further developed an iteration-wise learning method to produce a series of increasing \(p(-1<p \le 0.2)\) values and assigned them to the corresponding iterations of deblurring. However, these methods do not perform well on specific image types beyond natural images.
Instead of exploring the heavy-tailed distribution of gradients to achieve sparsity, Krishnan et al. [34] used a normalized sparsity regularization \( \frac{l_1}{l_2} \), and Xu et al. [21] proposed the approximated \(l_{0}\)-norm regularization to fulfill the task. Recently, to further improve the deblurring performance, researchers have suggested some complementary priors for the \(l_{0}\)-norm of gradients, such as the dark channel prior [2], the extreme channels prior [35], the discriminative priors [1], and the local minimal intensity prior [36]. In [9], Chen et al. used an enhanced sparse regularization on the gradients and incorporated it with the noise modeling (4) for the blur kernel estimation. In this paper, we extend the noise modeling in [9] and suggest a more rational \(l_{{\mathcal {P}}}\)-norm for image gradients. We show that the proposed algorithm performs favorably against [9] as well as state-of-the-art deblurring methods.
3 Proposed deblurring model
3.1 Improved sparse regularization
In this subsection, we first introduce the proposed improved sparse regularization \(l_{\mathcal {P}}\)-norm, and then provide the solution of the \(l_{\mathcal {P}}\)-minimization problem.
For a signal \(X\), its \(l_{\mathcal {P}}\)-norm is given by:

\(||X||_{\mathcal {P}} = ||X||_{0} + ||X||_{p}^{p} = \sum _{i}\left( |x_{i}|^{0} + |x_{i}|^{p}\right) ,\)

where \(0<p\le 1\) and \(|x_{i}|^{0}\) is taken as 0 when \(x_{i}=0\).
Given a degraded signal \(Y\), and assuming its latent signal \(X\) is sparse, the \(l_{\mathcal {P}}\)-minimization problem can be expressed as:

\(\min _{X}\ ||X-Y||_{2}^{2} + \lambda ||X||_{\mathcal {P}},\)
where \(\lambda \) is the penalty parameter. Note that Eq. (7) is an element-wise minimization problem; thus, it can be decomposed into a set of independent sub-problems, one of which is:

\(\min _{x}\ (x-y)^{2} + \lambda \left( |x|^{0} + |x|^{p}\right) ,\)
where \(x\) and \(y\) denote the elements of \(X\) and \(Y\), respectively, at the same location. Without the term \(|x|^{p}\), Eq. (8) has a closed-form solution [22]. Similarly, without the term \(|x|^{0}\), Eq. (8) becomes an \(l_{p}\)-minimization problem that can be effectively solved by the generalized soft-thresholding (GST) operator [23].
In the following, we show that the solution of Eq. (8) can be obtained by modifying the thresholding rule of the GST operator. Let

\(E(x) = (x-y)^{2} + \lambda |x|^{p}\)

and

\(F(x) = (x-y)^{2} + \lambda \left( |x|^{0} + |x|^{p}\right) .\)
By fixing \(p=0.5\) and \(\lambda =2\), we plot the curves of \(E(x)\) and \(F(x)\) with five typical \(y\) values in Fig. 3. Without loss of generality, only the case of \(y>0\) is considered. The GST operator minimizes \(E(x)\) based on the observation that there exists a specific threshold \(\tau _{p}(\lambda )\) at which \(E(0)\) equals \(E(x_{p}^{*})\), as seen in Fig. 3c. If \(y<\tau _{p}(\lambda )\), \(x=0\) (e.g., Fig. 3a and b) is the global minimum; otherwise, a non-zero solution (e.g., Fig. 3d and e) is optimal. Thus, the GST operator mainly needs to calculate \(\tau _{p}(\lambda )\) and search for the non-zero solution \(S_{p}(y,\lambda )\), which satisfies \(E^{\prime }(S_{p}(y,\lambda ))=0\), i.e.,

\(2\left( S_{p}(y,\lambda )-y\right) + \lambda p\, S_{p}(y,\lambda )^{p-1} = 0.\)
More details and proofs can be found in [23] and its supplementary material. Since \(F(x)\) is the sum of \(E(x)\) and the positive constant \(\lambda \) when \(x\) is non-zero, \(S_{p}(y,\lambda )\) is also the non-zero solution of \(F(x)\).
From Fig. 3f–j, one can see that the GST strategy can also be used to minimize \(F(x)\) with a different and larger threshold. We use \(\tau _{{\mathcal {P}}}(\lambda )\) to denote this threshold, and rewrite \(F(x)\) as:

\(F(x) = y^{2}\) if \(x=0\), and \(F(x) = (x-y)^{2} + \lambda \left( 1 + |x|^{p}\right) \) if \(x \ne 0\).
The first-order derivative of \(F(x)\) in the range \((0,+\infty )\) is:

\(F^{\prime }(x) = 2(x-y) + \lambda p x^{p-1}.\)
We solve the following equation system, in which \(y\) is set to \(\tau _{{\mathcal {P}}}(\lambda )\), to obtain the threshold \(\tau _{{\mathcal {P}}}(\lambda )\) and its corresponding \(x_{{\mathcal {P}}}^{*}\):

\(F(x_{{\mathcal {P}}}^{*}) = F(0)\), i.e., \(\left( x_{{\mathcal {P}}}^{*}-\tau _{{\mathcal {P}}}(\lambda )\right) ^{2} + \lambda \left( 1 + (x_{{\mathcal {P}}}^{*})^{p}\right) = \tau _{{\mathcal {P}}}(\lambda )^{2},\)

\(F^{\prime }(x_{{\mathcal {P}}}^{*}) = 0\), i.e., \(2\left( x_{{\mathcal {P}}}^{*}-\tau _{{\mathcal {P}}}(\lambda )\right) + \lambda p (x_{{\mathcal {P}}}^{*})^{p-1} = 0.\)
From Eq. (15), we can obtain that \(\tau _{{\mathcal {P}}}(\lambda )\) equals \(x_{{\mathcal {P}}}^{*}+\frac{1}{2}\lambda p(x_{{\mathcal {P}}}^{*})^{p-1}\); substituting this into Eq. (14) yields:

\(x_{{\mathcal {P}}}^{*} = \sqrt{\lambda \left( 1 + (1-p)(x_{{\mathcal {P}}}^{*})^{p}\right) }.\)

We use the fixed-point iteration method to estimate \(x_{{\mathcal {P}}}^{*}\). After obtaining \(x_{{\mathcal {P}}}^{*}\), the threshold \(\tau _{{\mathcal {P}}}(\lambda )\) is:

\(\tau _{{\mathcal {P}}}(\lambda ) = x_{{\mathcal {P}}}^{*} + \frac{1}{2}\lambda p (x_{{\mathcal {P}}}^{*})^{p-1}.\)
Considering all the cases of \(y\), the solution of Eq. (8) is given by:

\(T_{{\mathcal {P}}}(y,\lambda ) = 0\) if \(|y| \le \tau _{{\mathcal {P}}}(\lambda )\), and \(T_{{\mathcal {P}}}(y,\lambda ) = \text {sgn}(y)\, S_{p}(|y|,\lambda )\) otherwise,
where \(\text {sgn}(y)\) denotes the sign of \(y\). Algorithm 1 shows the corresponding pseudo code, and we empirically use \(J=5\) in all experiments. As mentioned before, we further show that the solution of the \(l_{{\mathcal {P}}}\)-minimization problem with \(p=1\) is exactly the solution of the \(l_e\)-minimization problem [9]. When \(p=1\), Algorithm 1 converges at \(t=1\), with \(x_{{\mathcal {P}}}^{*}=\sqrt{\lambda }\) and \(\tau _{{\mathcal {P}}}(\lambda )=\sqrt{\lambda }+\frac{1}{2}\lambda \); then \(T_{{\mathcal {P}}}(y,\lambda )\) becomes:

\(T_{{\mathcal {P}}}(y,\lambda ) = 0\) if \(|y| \le \sqrt{\lambda }+\frac{1}{2}\lambda \), and \(T_{{\mathcal {P}}}(y,\lambda ) = \text {sgn}(y)\left( |y|-\frac{1}{2}\lambda \right) \) otherwise.
This solution was given in the supplementary material of [9].
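A compact NumPy sketch of the thresholding described above (our illustration of the logic of Algorithm 1, not the authors' MATLAB implementation; the function name and inner-iteration count are assumptions): it estimates \(x_{{\mathcal {P}}}^{*}\) by fixed-point iteration, forms \(\tau _{{\mathcal {P}}}(\lambda )\), and applies GST-style inner iterations to the entries above the threshold.

```python
import numpy as np

def lp_threshold(y, lam, p, J=5):
    # Solve min_x (x - y)^2 + lam * (|x|^0 + |x|^p) element-wise.
    # Fixed-point iteration for the non-zero stationary point x*:
    x_star = np.sqrt(lam)
    for _ in range(J):
        x_star = np.sqrt(lam * (1.0 + (1.0 - p) * x_star ** p))
    # Threshold tau_P(lam) = x* + (lam * p / 2) * x*^(p-1):
    tau = x_star + 0.5 * lam * p * x_star ** (p - 1.0)
    y = np.asarray(y, dtype=float)
    x = np.zeros_like(y)
    mask = np.abs(y) > tau
    # GST-style inner iterations for the surviving (non-zero) solutions:
    # x <- |y| - (lam * p / 2) * x^(p-1)
    ym = np.abs(y[mask])
    xm = ym.copy()
    for _ in range(J):
        xm = ym - 0.5 * lam * p * xm ** (p - 1.0)
    x[mask] = np.sign(y[mask]) * xm
    return x
```

For \(p=1\) the iteration is exact after one step: the threshold is \(\sqrt{\lambda }+\lambda /2\) and surviving entries shrink by \(\lambda /2\), matching the \(l_e\) solution of [9].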
3.2 Deblurring model and optimization
This section presents the proposed deblurring model and the corresponding optimization scheme. We use the \(l_{2}\)-norm to regularize the blur kernel \(k\), as many state-of-the-art methods [1, 2, 9, 19, 36, 37] do. Our proposed deblurring model is given by:

\(\min _{I,k}\ ||B-I*k||_{2}^{2} + \omega _{1}||\nabla _{1}(B-I*k)||_{e} + \omega _{2}||\nabla _{2}(B-I*k)||_{e} + \alpha ||\nabla _{1}I||_{\mathcal {P}} + \beta ||k||_{2}^{2}.\)
Since it is intractable to obtain the solution of (20) directly, we perform alternating minimization with respect to the latent clean image \(I\):

\(\min _{I}\ ||B-I*k||_{2}^{2} + \omega _{1}||\nabla _{1}(B-I*k)||_{e} + \omega _{2}||\nabla _{2}(B-I*k)||_{e} + \alpha ||\nabla _{1}I||_{\mathcal {P}},\)
and the blur kernel \(k\):

\(\min _{k}\ ||B-I*k||_{2}^{2} + \omega _{1}||\nabla _{1}(B-I*k)||_{e} + \omega _{2}||\nabla _{2}(B-I*k)||_{e} + \beta ||k||_{2}^{2}.\)
3.2.1 Estimating the latent clean image \(I\)
To estimate the latent clean image \(I\), the half-quadratic splitting method is adopted by introducing auxiliary variables \(f_{1}\), \(f_{2}\), and \(g\) with respect to \(\nabla ^{1}(B-I*k)\), \(\nabla ^{2}(B-I*k)\), and \(\nabla ^{1}I\), respectively. Thus, (21) can be rewritten as:

\(\min _{I,f_{1},f_{2},g}\ ||B-I*k||_{2}^{2} + \omega _{1}||f_{1}||_{e} + \omega _{2}||f_{2}||_{e} + \alpha ||g||_{\mathcal {P}} + \delta _{1}||\nabla ^{1}(B-I*k)-f_{1}||_{2}^{2} + \delta _{2}||\nabla ^{2}(B-I*k)-f_{2}||_{2}^{2} + \delta _{3}||\nabla ^{1}I-g||_{2}^{2},\)
where \(\delta _{1}\), \(\delta _{2}\), and \(\delta _{3}\) are penalty parameters. This optimization problem can be solved by alternately minimizing over \(f_1\), \(f_2\), \(g\), and \(I\).
Given \(I\), we obtain \(f_1\) and \(f_2\) by:
and
Note that these two are \(l_e\)-minimization problems [9] and have closed-form solutions based on (19):
and
We use \(D\) to represent the residue image \(B-I*k\) for the convenience of display.
The sub-problem referring to \(g\) is given by:

\(\min _{g}\ \alpha ||g||_{\mathcal {P}} + \delta _{3}||\nabla ^{1}I-g||_{2}^{2}.\)
This is an \(l_{{\mathcal {P}}}\)-minimization problem, and the solution can be obtained via Algorithm 1.
For fixed \(f_1\), \(f_2\), and \(g\), estimating \(I\) becomes a quadratic problem:
By using fast Fourier transform (FFT), we can obtain the closed-form solution of (29) in the following:
where \({\mathcal {F}}(\cdot )\), \({\mathcal {F}}^{-1}(\cdot )\), and \( \overline{{\mathcal {F}}(\cdot )} \) denote the FFT, inverse FFT, and the complex conjugate operator after FFT, respectively.
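The FFT update can be sketched as follows. For brevity, this simplified version keeps only the \(l_2\) data term and the single auxiliary variable \(g\) (with penalty \(\delta _3\)); `psf2otf`, the difference filters, and all names are our assumptions, not the paper's code.

```python
import numpy as np

def psf2otf(psf, shape):
    # Zero-pad the PSF to the image size and circularly shift its center to
    # the origin, then take the FFT (MATLAB psf2otf-style).
    otf = np.zeros(shape)
    otf[:psf.shape[0], :psf.shape[1]] = psf
    for ax, s in enumerate(psf.shape):
        otf = np.roll(otf, -(s // 2), axis=ax)
    return np.fft.fft2(otf)

def update_image(B, k, g, delta3):
    # Closed-form FFT solve of min_I ||B - I*k||^2 + delta3*||grad(I) - g||^2,
    # a simplified instance of the Eq. (30)-style update.
    Fk = psf2otf(k, B.shape)
    Fdx = psf2otf(np.array([[1.0, -1.0]]), B.shape)
    Fdy = psf2otf(np.array([[1.0], [-1.0]]), B.shape)
    gx, gy = g
    num = (np.conj(Fk) * np.fft.fft2(B)
           + delta3 * (np.conj(Fdx) * np.fft.fft2(gx)
                       + np.conj(Fdy) * np.fft.fft2(gy)))
    den = np.abs(Fk) ** 2 + delta3 * (np.abs(Fdx) ** 2 + np.abs(Fdy) ** 2)
    return np.real(np.fft.ifft2(num / den))
```

As a sanity check, with an identity kernel and \(g\) set to the circular gradients of \(B\), the update returns \(B\) itself.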
3.2.2 Estimating the blur kernel \(k\)
Following the valuable experience of previous works [10, 38], we adopt an improved version of (22) by solving for \(k\) in the gradient domain, i.e.,
We also use the half-quadratic splitting method to introduce auxiliary variables \(h_{1}\) and \(h_{2}\) with respect to \(\nabla ^{1}(B-I*k)\) and \(\nabla ^{2}(B-I*k)\), then the above formulation can be rewritten as:
where \(\varphi _{1}\) and \(\varphi _{2}\) are penalty parameters. We solve (32) by alternately updating \(h_1\), \(h_2\), and \(k\) via the following formulations:
and
The solutions of (33) and (34) are similar to those of (26) and (27), respectively, and (35) can be efficiently solved by FFT.
After obtaining \(k\), its negative elements are set to 0, and \(k\) is then normalized. As in state-of-the-art deblurring methods, the blur kernel estimation is performed in a coarse-to-fine manner with an image pyramid [10], as shown in Fig. 4. The main steps for one pyramid layer are listed in Algorithm 2.
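The kernel post-processing just described can be sketched as below (our illustration; unit-sum normalization is the usual convention for blur kernels and is an assumption here):

```python
import numpy as np

def project_kernel(k):
    # Clip negative elements of the estimated kernel to zero, then
    # normalize so the kernel sums to one.
    k = np.maximum(k, 0.0)
    s = k.sum()
    return k / s if s > 0 else k
```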
4 Experimental results
As in previous algorithms [1, 2, 9, 35], the proposed method only estimates the blur kernel from the blurry input and then adopts one of the existing non-blind deblurring methods [13, 14, 20, 39,40,41] to obtain the final clean image. In all experiments, we empirically set \(\omega _{1}=4e-3\), \(\omega _{2}=4e-4\), \(\alpha =4e-3\), and \(\beta =2\). Since \(p\) is an important parameter in the proposed model, the deblurring effects with different \(p\) values (ranging from 0.1 to 1 with a step of 0.1) are evaluated on three widely-used benchmark datasets [20, 24, 42]. We report two results of the proposed method on these datasets: Ours-FP, which uses a fixed \(p\) value for the entire dataset, and Ours-GFP, which uses a fixed \(p\) value for each group of blurry images sharing the same ground truth. The results of other methods are provided by the authors or generated by their code with default settings. Our method is implemented in MATLAB, and all experiments are run on a computer with an Intel Core i7-8700 CPU and 16 GB RAM. For a fixed blur kernel size of 21\(\times \)21, it takes about 10, 35, and 150 s to process a blurry image of size 200\(\times \)200, 400\(\times \)400, and 800\(\times \)800, respectively. As the image and blur kernel sizes increase, the running time increases accordingly.
4.1 Sun et al.’s dataset
We first evaluate the proposed method on the dataset of [24], which contains 640 blurry images generated from 80 natural images and 8 blur kernels [38]. For fair comparisons, we use the provided codes of the methods [1, 9, 36] to estimate blur kernels and the same non-blind deblurring method [41] to obtain the deblurring results; the results of the methods [10, 24, 27, 28] are taken from [24]. We measure the quantitative performance of all methods with the Error Ratio defined by Levin et al. [38]:

\(r = \frac{||I_{{\hat{k}}}-I||_{2}^{2}}{||I_{k}-I||_{2}^{2}},\)
where \(I\) denotes the ground-truth clean image, and \(I_{k}\) and \(I_{{\hat{k}}}\) are the restored images generated with the ground-truth blur kernel \(k\) and the estimated kernel \({\hat{k}}\), respectively. A smaller \(r\) indicates a better result. Figure 5 reports the Success Percent of all methods at different Error Ratios, where the Success Percent refers to the fraction of images that can be restored within a given \(r\). From this figure, one can see that the proposed method consistently outperforms these state-of-the-art methods. We further report the mean PSNR, mean SSIM, mean Error Ratio, and the Success Percent with \(r \le 5\) in Table 1. The proposed method is superior to all competing methods in all measures. Figure 6 shows three qualitative examples of challenging cases. Compared with other algorithms, the proposed method achieves more robust results with fewer artifacts.
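The Error Ratio amounts to a ratio of squared restoration errors; a minimal sketch (our code, with illustrative names):

```python
import numpy as np

def error_ratio(I_gt, I_est_kernel, I_gt_kernel):
    # Error Ratio (Levin et al.): SSD of the restoration with the estimated
    # kernel over SSD of the restoration with the ground-truth kernel.
    num = np.sum((I_est_kernel - I_gt) ** 2)
    den = np.sum((I_gt_kernel - I_gt) ** 2)
    return num / den
```

A value of \(r=1\) means the estimated kernel restores as well as the ground-truth kernel; the Success Percent counts images with \(r\) below a chosen bound.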
4.2 Lai et al.’s dataset
Then the proposed method is tested on a challenging dataset [42], which consists of 100 blurry images made from 25 high-quality clean images (covering 5 categories: Man-made, Natural, People, Saturated, and Text) and 4 large-scale blur kernels. We compare the proposed method with seven other algorithms [1, 9, 10, 18, 20, 34, 36]. For a fair comparison, the same blur kernel sizes and the same non-blind deblurring method [39] are adopted for all methods. Table 2 compares the methods in terms of average PSNR. It can be seen that our method outperforms the other methods in all categories. As shown in Figs. 7 and 8, images restored by the proposed method contain sharper edges and are visually more satisfying than those of other methods.
4.3 Pan et al.’s dataset
To further illustrate the superiority of the proposed method, we evaluate it on the text dataset of Pan et al. [20] and compare it with state-of-the-art methods [1, 9, 10, 18, 20, 34, 36]. This text dataset contains 120 blurry images generated from 15 ground-truth document images and 8 blur kernels from [38]. We adopt the same non-blind deblurring method [20] to generate the final clean images for a fair comparison. Table 3 shows the average PSNR and SSIM of each method. The proposed method performs favorably against all these methods. As shown in Fig. 9, our result has less residual blur and fewer ringing artifacts than those of other methods.
4.4 Real-world blurry images
Real blurry images usually involve more complicated blur, which makes them challenging for most deblurring methods. Therefore, besides synthetic datasets, we evaluate the performance of the proposed method on real blurry images gathered by [42] and compare it with state-of-the-art methods [9, 18, 34, 36]. Figures 10 and 11 show two examples, i.e., “postcard” and “boat2,” respectively; it can be seen that the deblurring results of the proposed method in general have better visual quality. Note that the same non-blind deblurring method [39] is applied to restore the final results. As shown in Fig. 10, the methods [18, 34, 36] fail to generate valuable results; in contrast, the method of [9] generates a generally satisfactory result, and ours is even better. Combined with Fig. 11, the proposed method tends to recover clearer and more robust results with better visual quality.
5 Analysis and discussion
5.1 Effectiveness of the proposed model
In this section, we conduct an ablation study on datasets [20, 42] to show the effectiveness of the proposed model (20). Considering different data fidelity terms (4) and (5), and sparse regularizations such as \(l_{0}\), \(l_{1}\), \(l_{p}\), \(l_{e}\) and \(l_{{\mathcal {P}}}\) on image gradients, we develop a total of 10 combinations as listed in Table 4 and show quantitative comparisons in Table 5. We use the same parameters on different models for fair comparisons, i.e., \(\omega _{1}=4e-3\), \(\omega _{2}=4e-4\), \(\alpha =4e-3\), \(\beta =2\) and \(p=0.6\).
From the viewpoint of the data fidelity term, (5) is not always superior to (4); for example, Model 5 performs better than Model 6. In most cases, however, (5) contributes to improving the effectiveness of the models. For the regularizations on image gradients, we can see that enforcing stronger sparsity constraints achieves better performance. In the end, the proposed model achieves the best results among the different combinations. This success mainly stems from two aspects: constraining the second-order derivative of the noise makes the fidelity focus more on the high frequencies erased in the blurry image, and constraining the \(l_{{\mathcal {P}}}\)-norm of image gradients helps to recover a sharper latent clean image, favoring the estimation of the blur kernel.
5.2 Convergence property analysis
The proposed model is highly non-convex due to the involvement of the \(l_{0}\)- and \(l_{p}\)-norms, so one may wonder whether our overall optimization scheme is convergent. Unfortunately, a theoretical proof of the convergence property is intractable. We instead examine the convergence of the proposed algorithm empirically on the dataset [20], which contains 120 synthetic blurry images. We record the average values of the energy function (21) and the kernel similarity [25] over iterations at the finest image scale, as shown in Fig. 12. It can be seen that the proposed algorithm converges within 30 iterations, which demonstrates the effectiveness of our optimization algorithm.
5.3 Relations with other deblurring methods
As discussed in Sect. 3.1, one can obtain the solution of the \(l_{e}\)-minimization problem [9] through the proposed Algorithm 1 by setting \(p=1\). Therefore, our method with \(p=1\) reduces to the deblurring method of Chen et al. [9]. Previous methods [15,16,17,18] usually assume \(l_{p}\) sparsity on image gradients, and a natural question is how to set the \(p\) value during the iterations. For this problem, the methods [16, 17] use a fixed \(p\) value throughout the process, and [15] empirically sets a series of decreasing \(p\) values. Moreover, the method [18] automatically learns \(p\) values via an iteration-wise learning method. Although our method achieves competitive results on commonly-used datasets [20, 24, 42] and real blurry images, we can only set a fixed \(p\) value by experience rather than theoretical guidance.
5.4 Analysis of some key parameters
The proposed method has four key parameters in addition to \(p\), i.e., \(\omega _1\), \(\omega _2\), \(\alpha \), and \(\beta \). To analyze the effect of these parameters on the results, we carry out experiments on the dataset from [20] by varying one of them while keeping the others fixed. Average PSNR is adopted as the evaluation metric. As shown in Fig. 13, the proposed method is not very sensitive to changes in these parameters within reasonable ranges. We empirically select the parameters that yield better performance as the defaults.
5.5 Limitations of the proposed method
As mentioned earlier, there are practical limitations to our method, specifically related to the selection of the parameter \(p\). In our experiments in Sect. 4, \(p\) ranged from 0.1 to 1 with a step of 0.1, and we chose the best \(p\) value based on the experimental results. For example, Fig. 14 shows the performance of different \(p\) values on the datasets [20, 24, 42], for which we choose \(p\) as 0.1, 0.6, and 0.6, respectively. In fact, there is a relationship between the choice of \(p\) and the semantic content of the blurred image. In most deblurring methods [12,13,14], the gradients of natural clean images are considered to obey a heavy-tailed distribution (commonly modeled by a hyper-Laplacian prior \( P(\nabla ^{1}I)\propto e^{-\alpha |\nabla ^{1}I|^p} \) with \( 0.5\le p \le 0.8\)), which is used as a regularization in the restoration process to help recover clean and accurate images. Therefore, there should exist a \(p\) that best recovers a given blurry image, and our future work can focus on how to derive \(p\) from the given blurred image. According to the abundant deblurring results in our work, we believe that setting \(p\) to 0.6 achieves satisfactory results in most cases.
6 Conclusion
In this paper, we have proposed an improved sparse regularization, namely the \(l_{{\mathcal {P}}}\)-norm, and a flexible model for blind image deblurring. The \(l_{{\mathcal {P}}}\)-norm is a combination of \(l_{0}\) and \(l_{p}(0<p \le 1)\), and our model can be seen as a generalization of the enhanced sparse model. To solve the \(l_{{\mathcal {P}}}\)-minimization problem, we present an effective method based on the generalized iterated shrinkage algorithm and further combine it with the half-quadratic splitting method to develop an optimization scheme for the proposed model. Experimental results show that the proposed method performs favorably against state-of-the-art methods on commonly-used synthetic datasets and real blurry cases. The proposed model has a limitation in selecting the \(p\) value, but in practical applications, setting \(p\) to 0.6 works well.
Data availability and materials
The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
References
Li, L., Lai, W.-S., Yang, M.-H.: Blind image deblurring via deep discriminative priors. Int. J. Comput. Vis. 127, 1025–1043 (2019). https://doi.org/10.1007/s11263-018-01146-0
Pan, J., Sun, D., Pfister, H., Yang, M.-H.: Deblurring images via dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 40(10), 2315–2328 (2018). https://doi.org/10.1109/TPAMI.2017.2753804
Ren, W., Cao, X., Pan, J., Guo, X., Zuo, W., Yang, M.-H.: Image deblurring via enhanced low-rank prior. IEEE Trans. Image Process. 25(7), 3426–3437 (2016). https://doi.org/10.1109/TIP.2016.2571062
Xu, L., Jia, J.: Two-phase kernel estimation for robust motion deblurring. In: Daniilidis, K., Maragos, P., Paragios, N. (eds.) Computer Vision - ECCV 2010, pp. 157–170. Springer, Berlin, Heidelberg (2010)
Dong, J., Pan, J., Sun, D., Su, Z., Yang, M.-H.: Learning data terms for non-blind deblurring. In: Computer Vision – ECCV 2018, pp. 777–792. Springer, Cham (2018)
Meng, D., Torre, F.: Robust matrix factorization with unknown noise. In: 2013 IEEE International Conference on Computer Vision, pp. 1337–1344 (2013). https://doi.org/10.1109/ICCV.2013.169
Pan, J., Dong, J., Tai, Y.-W., Su, Z., Yang, M.-H.: Learning discriminative data fitting functions for blind image deblurring. In: 2017 IEEE International Conference on Computer Vision (ICCV), pp. 1077–1085 (2017). https://doi.org/10.1109/ICCV.2017.122
Gong, Z., Shen, Z., Toh, K.-C.: Image restoration with mixed or unknown noises. Multiscale Model. Simul. 12(2), 458–487 (2014). https://doi.org/10.1137/130904533
Chen, L., Fang, F., Lei, S., Li, F., Zhang, G.: Enhanced sparse model for blind deblurring. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M. (eds.) Computer Vision - ECCV 2020, pp. 631–646. Springer, Cham (2020)
Cho, S., Lee, S.: Fast motion deblurring. ACM Trans. Graph. 28(5), 1–8 (2009). https://doi.org/10.1145/1618452.1618491
Shan, Q., Jia, J., Agarwala, A.: High-quality motion deblurring from a single image. ACM Trans. Graph. 27(3), 1–10 (2008). https://doi.org/10.1145/1360612.1360672
Fergus, R., Singh, B., Hertzmann, A., Roweis, S.T., Freeman, W.T.: Removing camera shake from a single photograph. In: ACM SIGGRAPH 2006 Papers. SIGGRAPH ’06, pp. 787–794. Association for Computing Machinery, New York, NY, USA (2006). https://doi.org/10.1145/1179352.1141956
Krishnan, D., Fergus, R.: Fast image deconvolution using hyper-Laplacian priors. In: Advances in Neural Information Processing Systems 22, pp. 1033–1041 (2009)
Levin, A., Fergus, R., Durand, F., Freeman, W.T.: Image and depth from a conventional camera with a coded aperture. In: ACM SIGGRAPH 2007 Papers. SIGGRAPH ’07, p. 70. Association for Computing Machinery, New York, NY, USA (2007). https://doi.org/10.1145/1275808.1276464
Almeida, M.S.C., Almeida, L.B.: Blind and semi-blind deblurring of natural images. IEEE Trans. Image Process. 19(1), 36–52 (2010). https://doi.org/10.1109/TIP.2009.2031231
Gan, W., Zhou, Y., He, L.: Bi-Lp-norm sparsity pursuiting regularization for blind motion deblurring. In: Hirose, A., Ozawa, S., Doya, K., Ikeda, K., Lee, M., Liu, D. (eds.) Neural Inf. Process., pp. 723–730. Springer, Cham (2016)
Kotera, J., Šroubek, F., Milanfar, P.: Blind deconvolution using alternating maximum a posteriori estimation with heavy-tailed priors. In: Wilson, R., Hancock, E., Bors, A., Smith, W. (eds.) Comput. Anal. Images Patterns, pp. 59–66. Springer, Berlin, Heidelberg (2013)
Funding
The research leading to these results received funding from the PhD Research Startup Foundation of Anhui Jianzhu University under Grant Agreement No. 2022QDZ05.
Contributions
XG and JL wrote the main manuscript text; DH prepared the figures and tables; JT wrote Sect. 5. All authors reviewed the manuscript.
Ethics declarations
Conflict of interest
The authors have no competing interests to declare that are relevant to the content of this article.
Ethics approval
Not applicable.
Cite this article
Ge, X., Liu, J., Hu, D. et al. An extended sparse model for blind image deblurring. SIViP 18, 1863–1877 (2024). https://doi.org/10.1007/s11760-023-02888-2