Abstract
Speckle noise is one of the major challenges affecting Synthetic Aperture Radar (SAR) images owing to its multiplicative nature. To deal with this issue, a new fast and effective despeckling algorithm is proposed, which is based on a variational model including data fidelity and regularization terms. The G0 distribution is used to define the data fidelity term, whereas the regularization term combines a weighted second-order total variation, Overlapping Group Sparsity (OGS), and a box constraint. Moreover, a new fast and efficient diffusion function is proposed to alleviate over-smoothing and to speed up the despeckling process. The obtained results show that the proposed solution achieves a maximum Equivalent Number of Looks (ENL\(\simeq \)189) on real SAR images, and the lowest CPU time consumption compared with state-of-the-art speckle removal methods.
1 Introduction
Imaging by Synthetic Aperture Radar (SAR) is of great interest thanks to its independence of weather conditions and its day-and-night operating capability [24]. SAR delivers raw data in complex form that are commonly processed by focusing algorithms to obtain high-resolution images. However, speckle noise remains a major challenge that limits the exploitation of SAR images in various computer vision tasks, such as image classification, object recognition, edge detection and feature extraction [3, 12, 18, 19]. Speckle is a multiplicative noise resulting from the coherent nature of radar waves, which complicates the estimation of parameters such as the reflectivity and the interferometric and polarimetric properties of the imaged ground targets [29]. Many approaches have been reported to reduce speckle effects. In the early 1980s, Lee [20], Frost et al. [11] and Kuan et al. [17] proposed despeckling methods whose performance depends on the size of the used mask. Another approach, the Non-Local Means (NLM) of Buades et al. [4], is based on patch similarity. It is more efficient than the previous filters in terms of preserving edges and features, and many authors have therefore introduced the NLM concept into their algorithms [25, 27, 34]. In the same context, the Probabilistic Patch-Based filter (PPB) [7] and the Patch-Ordering-based method (POTDF) [35] have been developed. However, the main drawbacks of NLM-based algorithms are the strong smoothing and the residual noise at the edges of homogeneous and flat regions, together with a high computational cost. To alleviate the latter, look-up tables are used in the Fast Adaptive Nonlocal SAR despeckling algorithm (FANS) [6] to compute the distances between patches.
The variational approach has also been employed for despeckling; it is based on the minimization of an energy functional formulated as two separate terms, namely data fidelity and regularization [34]. The data fidelity term describes the statistical parameters of the observed image, whereas the regularization term quantifies the smoothness of the final solution. The statistical parameters related to the data fidelity are generally estimated with rules such as the maximum likelihood estimator (MLE) or the maximum a posteriori (MAP) [36]. A few years ago, another method was proposed to formulate the data fidelity of SAR image intensity using the Gamma distribution [2]. In this context, various speckle distributions have been proposed to model the radar cross section (RCS) of the imaged SAR scene [15]. It has been shown in [5, 25] that the G0 distribution is one of the distributions that best fit SAR data over a variety of textures, both homogeneous and heterogeneous. Indeed, the inverse Gamma distribution is used to model the RCS, and the speckle noise is assumed to follow the Gamma distribution, which leads to the G0 distribution for the whole imaged scene.
In the literature, the first-order Total Variation (TV) was introduced by Rudin et al. [30] as a regularization term in order to guarantee the preservation of edge information. However, it is characterized by an undesirable staircase effect. To overcome this drawback, a hybrid model was proposed in [22] that combines first- and second-order TV for regularization. The obtained results are satisfactory in terms of removing the staircase effect and false edges, but only for local structures and textures. For globally smoothing staircase artefacts, another regularization term known as Overlapping Group Sparsity (OGS) was adopted in [21].
Recently, a SAR image despeckling algorithm was proposed in [14], adopting the I-divergence model for the data fidelity term under the assumption that the speckle noise follows the Gamma distribution. In addition, a truncated non-smooth method abbreviated TRTVpIdiv has been developed using the non-convex lp-norm [14]. After solving the optimization problem with the Alternating Direction Method of Multipliers (ADMM), the despeckling algorithm was successfully applied to real SAR images, but blurry effects and over-smoothing problems remain. To solve these two problems, the total variation term should be weighted by an efficient diffusion function. In this context, many functions can be used, such as the well-known Perona–Malik (PM) function [28] and the Sigmoid-based diffusion function proposed by Tebini et al. [32].
In the present paper, a new fast method based on a variational model is developed, adopting the G0 distribution to fit the SAR data as closely as possible. The proposed regularization term includes a weighted second-order TV, an OGS term and a box constraint. Moreover, a novel fast and efficient diffusion function is proposed for weighting the second-order TV term. The performance of the developed despeckling model is evaluated and compared with different existing methods, using both synthetic and real SAR images.
The present paper is organized as follows. In Section 2, the mathematical background used throughout the paper is provided. In Section 3, a brief description of related models is presented. The developed variational despeckling model is then given in Section 4, whereas the obtained experimental results are reported in Section 5.
2 Mathematical background
2.1 Alternating Direction Method of Multipliers (ADMM)
The ADMM algorithm is an efficient and powerful method [21] that is widely used to solve separable constrained optimization problems of the following form:
$$ \underset{x_{1}\in \chi_{1},\,x_{2}\in \chi_{2}}{\min }\ f\left( x_{1}\right) +g\left( x_{2}\right) \quad \text{s.t.}\quad Ax_{1}+Bx_{2}=b $$(1)
where \(f\left (.\right )\) and \(g\left (.\right ):\chi _{i(i=1,2)}\rightarrow \mathbb {R}\) are closed convex functions, \( A,B\in \mathbb {R}^{l\times K_{i}}\) stand for linear transforms, and \( b\in \mathbb {R}^{l}\) is the input vector. The augmented Lagrangian function of (1) is:
$$ \mathscr{L}\left( x_{1},x_{2};\lambda \right) =f\left( x_{1}\right) +g\left( x_{2}\right) -\lambda^{T}\left( Ax_{1}+Bx_{2}-b\right) +\frac{\mu }{2}\left\Vert Ax_{1}+Bx_{2}-b\right\Vert_{2}^{2} $$(2)
where \(\lambda \in \mathbb {R}^{l}\) is the Lagrange multiplier and μ is a positive penalty parameter. The ADMM is based on finding a saddle point of \({\mathscr{L}}\) by alternately minimizing \({\mathscr{L}}\) with respect to x1, x2 and λ. A powerful algorithm for solving (1) in the framework of ADMM is given in Algorithm 1.
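As an illustration of the alternating scheme, the following sketch applies ADMM to a toy instance of (1) with \(f\) quadratic, \(g\) the l1-norm, \(A=I\), \(B=-I\) and \(b=0\), whose solution is the soft-thresholding of the input; the problem instance, penalty value and iteration count are chosen for illustration only:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1_prox(c, tau, mu=1.0, iters=200):
    """Scaled-form ADMM for min 0.5||x - c||^2 + tau*||x||_1,
    split as f(x1) + g(x2) subject to x1 - x2 = 0."""
    x1 = np.zeros_like(c)
    x2 = np.zeros_like(c)
    u = np.zeros_like(c)          # scaled multiplier lambda / mu
    for _ in range(iters):
        x1 = (c + mu * (x2 - u)) / (1.0 + mu)  # x1-minimization step
        x2 = soft(x1 + u, tau / mu)            # x2-minimization step
        u = u + x1 - x2                        # multiplier update
    return x2

print(admm_l1_prox(np.array([3.0, -0.5, 2.0]), tau=1.0))  # converges to soft(c, tau)
```

The iterates converge to \(\operatorname{soft}(c,\tau)\), which gives a quick sanity check of the saddle-point scheme described above.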
2.2 First and second order TV
Consider an image \( u\in \mathbb {R}^{M\times N}\). The discretized image domain can be expressed as \(V=\left \{ \left (i,j\right ) \mid i,j\in \mathbb {Z},1\leq i\leq M,1\leq j\leq N\right \} \), where \(\mathbb {Z}\) denotes the set of positive integers. The isotropic first-order TV regularizer can be written in discrete form as:
$$ TV\left( u\right) =\underset{V}{\sum }\sqrt{\left( D_{x}^{+}u\right)^{2}+\left( D_{y}^{+}u\right)^{2}} $$(3)
and the second-order TV can be written as:
$$ TV^{2}\left( u\right) =\underset{V}{\sum }\sqrt{\left( D_{x}^{-}D_{x}^{+}u\right)^{2}+\left( D_{x}^{+}D_{y}^{+}u\right)^{2}+\left( D_{y}^{+}D_{x}^{+}u\right)^{2}+\left( D_{y}^{-}D_{y}^{+}u\right)^{2}} $$(4)
where \(D_{x}^{+}\) and \(D_{y}^{+}\) are the forward difference operators with periodic boundary conditions while \(D_{x}^{-}\) and \(D_{y}^{-}\) correspond to the backward difference operators with periodic boundary conditions.
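A minimal sketch of these discrete operators, assuming NumPy with periodic boundaries implemented via `np.roll` (the exact grouping of the mixed second-order terms varies across papers, so the four-term isotropic form used here is an assumption):

```python
import numpy as np

def tv1(u):
    """Isotropic first-order TV with periodic boundary conditions."""
    dx = np.roll(u, -1, axis=1) - u   # forward difference D_x^+
    dy = np.roll(u, -1, axis=0) - u   # forward difference D_y^+
    return np.sum(np.sqrt(dx**2 + dy**2))

def tv2(u):
    """Second-order TV built from forward/backward differences."""
    dxx = np.roll(u, -1, axis=1) - 2*u + np.roll(u, 1, axis=1)   # D_x^- D_x^+
    dyy = np.roll(u, -1, axis=0) - 2*u + np.roll(u, 1, axis=0)   # D_y^- D_y^+
    dxy = (np.roll(np.roll(u, -1, axis=1), -1, axis=0)
           - np.roll(u, -1, axis=1) - np.roll(u, -1, axis=0) + u)  # D_y^+ D_x^+
    return np.sum(np.sqrt(dxx**2 + dyy**2 + 2 * dxy**2))

u0 = np.ones((8, 8))
print(tv1(u0), tv2(u0))  # both vanish on a constant image
```

Both regularizers vanish on a constant image and grow with the amount of (first- or second-order) variation, which matches their role of penalizing oscillations.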
2.3 Box constraint
A characteristic function \(\Im_{C}\) is defined for a set C, with \(C=\left [0, 255\right ] \) or \(\left [ 0, 1\right ]\):
$$ \Im_{C}\left( u\right) =\begin{cases} 0, & u\in C\\ +\infty, & u\notin C \end{cases} $$(5)
The function \(\Im_{C}\) is used to constrain the pixels of an image to the interval C.
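In practice, adding this characteristic function to the energy amounts to projecting the iterates onto the box, which is a simple clip operation; a minimal sketch:

```python
import numpy as np

def project_box(u, lo=0.0, hi=255.0):
    """Projection onto the box C = [lo, hi]: the proximal operator
    of the characteristic function of C."""
    return np.clip(u, lo, hi)

print(project_box(np.array([-4.0, 12.0, 300.0])))  # [  0.  12. 255.]
```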
2.4 Overlapping group sparsity
For an image u, a K square-point group is defined as:
$$ \breve{u}_{i,j,K}=\left[ u_{i-K_{1}+s,\,j-K_{1}+t}\right]_{0\leq s,t\leq K-1} $$(6)
where \(\breve {u}_{i,j,K}\in \mathbb {R}^{K\times K}\), \(K_{1}=\left [ \frac {K-1}{2}\right ]\), \(K_{2}=\left [ \frac {K}{2}\right ]\), and \(\left [ x\right ] \) denotes the largest integer less than or equal to x. The vector resulting from stacking the K columns of the matrix \(\breve {u}_{i,j,K}\) is denoted by \(u_{\left (i,j\right ) ,K}\). The OGS regularizer of u in the two-dimensional space is defined by:
$$ \varphi_{OGS}\left( u\right) =\underset{V}{\sum }\left\Vert u_{\left( i,j\right) ,K}\right\Vert_{2} $$(7)
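A direct (unoptimized) sketch of this functional, assuming periodic padding for the groups that overlap the image border (the paper does not state its boundary handling, so the `wrap` mode is an assumption):

```python
import numpy as np

def ogs_norm(u, K=3):
    """Overlapping group sparsity functional: for every pixel, take the
    l2 norm of its K x K neighbourhood (periodic padding) and sum."""
    K1, K2 = (K - 1) // 2, K // 2
    M, N = u.shape
    up = np.pad(u, ((K1, K2), (K1, K2)), mode='wrap')  # assumed boundary handling
    total = 0.0
    for i in range(M):
        for j in range(N):
            total += np.linalg.norm(up[i:i + K, j:j + K])
    return total

u = np.zeros((4, 4))
u[1, 1] = 1.0
print(ogs_norm(u, K=3))  # the nonzero pixel belongs to 9 groups, each of norm 1
```

Because every pixel is counted once per group containing it, an isolated unit pixel contributes K² to the sum, which is exactly the overlap penalty that discourages isolated spikes.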
3 Related works
The multiplicative model for speckle noise is given by:
$$ b=un $$(8)
where u is the unknown noise-free two-dimensional image (M × N), b is the noisy observed image, and n is the multiplicative noise; u and n are assumed to be uncorrelated. The speckle intensity follows a Gamma distribution with the Probability Density Function (PDF) [34]:
$$ p\left( n\right) =\frac{L^{L}}{\Gamma \left( L\right) }n^{L-1}e^{-Ln},\quad n\geq 0 $$(9)
where L is the equivalent number of looks (ENL) and \({\Gamma } \left (.\right )\) is the Gamma function. The amplitude PDF is derived by taking the square root of the intensity variable.
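The unit-mean L-look intensity speckle of (9) is a Gamma variate with shape L and scale 1/L; a sketch that simulates the multiplicative model b = un on a constant patch:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_speckle(u, L=3):
    """Multiplicative L-look speckle: n ~ Gamma(shape=L, scale=1/L),
    so E[n] = 1 and Var[n] = 1/L; the observation is b = u * n."""
    n = rng.gamma(shape=L, scale=1.0 / L, size=u.shape)
    return u * n

u = np.full((512, 512), 100.0)   # constant (homogeneous) scene
b = add_speckle(u, L=3)
print(b.mean())                  # close to 100 (unit-mean noise)
print(b.mean()**2 / b.var())     # empirical ENL, close to L = 3
```

On a homogeneous patch the ratio mean²/variance recovers the number of looks, which is the basis of the ENL metric used later in Section 5.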
3.1 The G0 distribution
The G0 distribution is based on the assumption that the amplitude backscatter follows the reciprocal of the square root of the Gamma distribution, while the speckle noise follows a square-root Gamma law [10]. The PDF of the reciprocal of the square root of the Gamma distribution is given by:
$$ p\left( x\right) =\frac{2}{\gamma^{\alpha }\Gamma \left( -\alpha \right) }x^{2\alpha -1}e^{-\gamma /x^{2}},\quad x>0,\ \alpha <0,\ \gamma >0 $$(10)
The PDF of the backscatter b can be derived from (8), (9) and (10) as follows:
$$ f\left( b\right) =\frac{2L^{L}\Gamma \left( L-\alpha \right) b^{2L-1}}{\gamma^{\alpha }\Gamma \left( L\right) \Gamma \left( -\alpha \right) \left( \gamma +Lb^{2}\right)^{L-\alpha }} $$(11)
α and γ are parameters related to the roughness of the imaged scene and the scale of the distribution, respectively. These unknown parameters are estimated according to the Mellin transform [1].
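Following this construction, G0-distributed samples can be drawn as the product of a reciprocal-Gamma RCS and a unit-mean Gamma speckle; the sketch below assumes the intensity-product formulation with roughness α < 0 and scale γ (parameter values are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_g0(alpha, gamma_, L, size):
    """Sample the G0 product model: the RCS x follows an inverse
    (reciprocal) Gamma law with shape -alpha and scale gamma_, and the
    speckle n is unit-mean L-look Gamma; the observation is x * n."""
    x = gamma_ / rng.gamma(shape=-alpha, scale=1.0, size=size)  # inverse Gamma RCS
    n = rng.gamma(shape=L, scale=1.0 / L, size=size)            # L-look speckle
    return x * n

b = sample_g0(alpha=-4.0, gamma_=3.0, L=3, size=100_000)
print(b.mean())  # close to gamma_/(-alpha - 1) = 1, since the speckle has unit mean
```

For α = −4 and γ = 3, the inverse-Gamma RCS has mean γ/(−α − 1) = 1, so the product mean is 1; heavier tails appear as α approaches −1, modelling increasingly heterogeneous scenes.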
3.2 Related variational model
Since the speckle model is multiplicative, it has the form of (8), where b is the observed backscatter image, u the noise-free (true) image, and n the speckle noise. Assuming that u and n are independent, a convex model was studied in [31], motivated by the non-convexity of the model of Aubert and Aujol (AA) [2]. The energy functional given in [31] is obtained after a logarithmic transformation of the form \(z=\log \left (u \right )\). As a result, the energy functional has the form:
with \(E_{fid}=\beta \underset {V}{\sum }\left (be^{-z}+z\right )\) and \(E_{reg}=\underset {V}{\sum }\left \vert \nabla z\right \vert \), where β is a positive regularization parameter and V is the discretized image domain. Many existing variational models construct the fidelity term by assuming a constant terrain RCS, combined with different regularization terms. Under these assumptions, the despeckling results remain unsatisfactory.
4 Proposed variational model
In this section, the developed despeckling model is presented, including the data fidelity term and the regularization one, which is itself a combination of a weighted second order total variation, OGS regularizer, and constraint box terms.
4.1 Data fidelity term
By combining (8) with the amplitude speckle PDF derived from (9), the resulting conditional PDF is:
$$ f\left( b\vert u\right) =\frac{2L^{L}}{\Gamma \left( L\right) }\frac{b^{2L-1}}{u^{2L}}e^{-Lb^{2}/u^{2}} $$(13)
The prior PDF of the backscatter u has the form of (10), and by the MAP rule, \(f\left (u\vert b\right )\) is maximized. Using (13) and (10) in the framework of the MAP rule, the obtained relation is:
$$ f\left( u\vert b\right) \propto f\left( b\vert u\right) f\left( u\right) $$(14)
where ∝ denotes “proportional to”. In (14), all quantities are positive; hence, maximizing the posterior is equivalent to minimizing its negative log-function:
$$ -\log f\left( u\vert b\right) \propto \left( 2L-2\alpha +1\right) \log u+\left( \gamma +Lb^{2}\right) u^{-2} $$(15)
Thus, the data fidelity term can be expressed as:
$$ E_{fid}\left( u\right) =\beta \underset{V}{\sum }\left[ \left( 2L-2\alpha +1\right) \log u+\left( \gamma +Lb^{2}\right) u^{-2}\right] $$(16)
4.2 Proposed regularization term
The proposed regularization term is as follows:
$$ E_{reg}\left( u\right) =\rho_{1}\underset{V}{\sum }S\left( \nabla u\right) \left\vert \nabla^{2}u\right\vert +\rho_{2}\varphi_{OGS}\left( \nabla u\right) +\Im_{C}\left( u\right) $$(17)
where \(S\left (\nabla u\right )\) is the diffusion function used to weight the second-order TV term, and ρ1 and ρ2 are regularization parameters.
4.3 Proposed weighting diffusion function
The diffusion function has been widely used to describe different physical phenomena, e.g., molecular and atomic diffusion in various media [26]. A diffusion function should fulfil some mathematical conditions: it must be positive, nonlinear and monotonically decreasing.
For image denoising, the values of \(S\left (\left \vert \nabla u\right \vert \right )\) tend to 0 near edges, where the image gradient takes very high values. The filtering process is thus restrained, which is suitable since it preserves boundaries and consequently the image structures. High values of S correspond to a low image gradient, which allows the smoothing of homogeneous regions.
The idea behind the proposed weighting diffusion function comes from probability theory, in which CDFs vary continuously in the interval [0, 1], with a variation similar in nature to that of a diffusion function. The Lomax distribution, commonly used in reliability and life-testing problems, is adopted [16]. For a random variable X with scale a and shape b, the Cumulative Distribution Function (CDF) of the Lomax distribution is given by:
$$ F\left( x\right) =1-\left( 1+\frac{x}{a}\right)^{-b},\quad x\geq 0 $$(18)
The probability density corresponding to \( F\left (x\right ) \) is:
$$ f\left( x\right) =\frac{b}{a}\left( 1+\frac{x}{a}\right)^{-b-1} $$(19)
Considering the variable transformation X = Y eY and replacing in (18), the new obtained CDF is:
$$ F\left( x\right) =1-\left( 1+\frac{xe^{x}}{a}\right)^{-b} $$(20)
The survival function of a random variable X has the form:
$$ S\left( x\right) =1-F\left( x\right) $$(21)
By replacing the new CDF in (21), the new survival function is:
$$ S\left( x\right) =\left( 1+\frac{xe^{x}}{a}\right)^{-b} $$(22)
The obtained function \(S\left (x\right )\) is positive, nonlinear and monotonically decreasing; thus, it satisfies all the requirements of a diffusion function. The flux function is defined as:
$$ F_{Flux}\left( x\right) =xS\left( x\right) $$(23)
The behaviour of \(S\left (x\right )\) can be tuned by varying the parameters a and b according to the statistical parameters of the image being denoised. For a fixed b, \(S\left (x\right )\) is faster for large values of a, starting from the value 1 at low gradient levels, as shown in Fig. 1a. This is the case in homogeneous regions, where diffusion is permitted. As the gradient values increase, \(S\left (x\right )\) decreases, thus inhibiting the diffusion and preserving the image edges. For a fixed a, the smoothness of \(S\left (x\right )\) is controlled by the parameter b: high values of b make \(S\left (x\right )\) smoother, whereas lower ones make it sharper (see Fig. 1c). As with \(S\left (x\right )\), a and b impact \(F_{Flux}\left (x\right )\) in a similar way, as depicted in Fig. 1b and d. The values of a and b should be selected to ensure a good compromise between the diffusion speed and the preservation of the image edges.
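Under the Lomax construction described above, with the substitution X = Y eY and the survival S = 1 − F, the weighting function takes the closed form S(x) = (1 + x eˣ/a)⁻ᵇ; the sketch below treats this closed form as an assumption and uses the a = 8, b = 2 setting reported in the experiments:

```python
import numpy as np

def S(x, a=8.0, b=2.0):
    """Assumed diffusion function: survival of the Lomax CDF after the
    substitution X = Y*exp(Y), i.e. S(x) = (1 + x*exp(x)/a)**(-b)."""
    return (1.0 + x * np.exp(x) / a) ** (-b)

def flux(x, a=8.0, b=2.0):
    """Flux function F_Flux(x) = x * S(x)."""
    return x * S(x, a, b)

x = np.linspace(0.0, 5.0, 200)
s = S(x)
# positive, starts at 1, and monotonically decreasing over the range
print(s[0], bool(np.all(s > 0)), bool(np.all(np.diff(s) < 0)))
```

The factor eˣ inside the survival function makes S(x) decay much faster than a plain Lomax tail, which is consistent with the rapid diffusion inhibition at edges claimed for the proposed model.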
Figure 2 displays the proposed diffusion and flux functions, compared with the two well-known models of PM [28] and Tebini et al. [32]. For the same parameters a = 1 and b = 2, the results show on the one hand that the first curve to reach its minimum corresponds to our proposed model. On the other hand, the flux function of our model reaches its maximum first and decreases to converge rapidly compared with the two other models. These findings reveal that the proposed function is faster when used for image denoising, preserves edges efficiently, and allows a good diffusion capability within homogeneous regions.
To assess the convergence speed of the diffusion function, the υ parameter is analyzed over successive iteration steps, as reported elsewhere [32]. The obtained results, presented in Fig. 3, clearly indicate that although the Tebini et al. model reaches its minimum earlier, our model converges to zero faster. Thus, the proposed function leads to deeper filtering while maintaining edges by rapidly inhibiting the diffusion. Consequently, it does not only preserve edges but may even enhance them. Moreover, it permits diffusion inside homogeneous areas, which ensures good denoising performance. By contrast, the two other models can produce blurred edges since they require more time during the diffusion. For further details on mathematical convergence proofs, the reader is referred to [32, 33].
In Fig. 4, it can be seen that the edges obtained by the proposed diffusion function are clearer and more continuous, while the results obtained by the PM and Tebini et al. functions suffer from weak and discontinuous edges.
4.4 SAR despeckling algorithm
By combining the data fidelity term in (16) and the regularization term in (17), the energy functional of the proposed model is as follows:
$$ E\left( u\right) =E_{fid}\left( u\right) +E_{reg}\left( u\right) $$(24)
Due to the data fidelity term, our model lacks global convexity. To solve this problem, the variable u is replaced by \(z=\log \left (u\right ) \). After substitution in (24), the energy functional of the proposed model is:
$$ \underset{z}{\min }\ \beta \underset{V}{\sum }\left[ \left( 2L-2\alpha +1\right) z+\left( \gamma +Lb^{2}\right) e^{-2z}\right] +\rho_{1}\underset{V}{\sum }S\left( \nabla z\right) \left\vert \nabla^{2}z\right\vert +\rho_{2}\varphi_{OGS}\left( \nabla z\right) +\Im_{C}\left( z\right) $$(25)
(25) can be rewritten using the auxiliary variables w, Ω, Σ and Θ as the constrained problem:
$$ \underset{z,w,{\Omega},{\Sigma},{\Theta}}{\min }\ \beta \underset{V}{\sum }\left[ \left( 2L-2\alpha +1\right) w+\left( \gamma +Lb^{2}\right) e^{-2w}\right] +\rho_{1}\underset{V}{\sum }S\left( \nabla z\right) \left\vert {\Omega}\right\vert +\rho_{2}\varphi_{OGS}\left( {\Sigma}\right) +\Im_{C}\left( {\Theta}\right) $$(26)
s.t. w = z, Ω = ∇2z, Σ = ∇z, Θ = z. The augmented Lagrangian of (26) is defined by (27) as follows:
$$ \begin{array}{@{}rcl@{}} \mathscr{L}\left( z,w,{\Omega},{\Sigma},{\Theta};\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4}\right) &=&\beta \underset{V}{\sum }\left[ \left( 2L-2\alpha +1\right) w+\left( \gamma +Lb^{2}\right) e^{-2w}\right] +\rho_{1}\underset{V}{\sum }S\left( \nabla z\right) \left\vert {\Omega}\right\vert\\ &&+\rho_{2}\varphi_{OGS}\left( {\Sigma}\right) +\Im_{C}\left( {\Theta}\right) +\frac{\mu }{2}\left\Vert w-z\right\Vert_{2}^{2}-{\lambda_{1}^{T}}\left( w-z\right)\\ &&+\frac{\mu }{2}\left\Vert {\Omega}-\nabla^{2}z\right\Vert_{2}^{2}-{\lambda_{2}^{T}}\left( {\Omega}-\nabla^{2}z\right) +\frac{\mu }{2}\left\Vert {\Sigma}-\nabla z\right\Vert_{2}^{2}\\ &&-{\lambda_{3}^{T}}\left( {\Sigma}-\nabla z\right) +\frac{\mu }{2}\left\Vert {\Theta}-z\right\Vert_{2}^{2}-{\lambda_{4}^{T}}\left( {\Theta}-z\right) \end{array} $$(27)
where \(\lambda _{i}\left (i=1,\ldots,4\right ) \) are the Lagrange multipliers, and μ is a positive parameter penalizing the quadratic L2-norm terms. The solution of the optimization problem is obtained by separately solving the following simpler sub-problems:
1. z sub-problem: fixing \(w^{k}\), \({\Omega}^{k}\), \({\Sigma}^{k}\) and \({\Theta}^{k}\), (27) can be transformed into:
$$ \begin{array}{@{}rcl@{}} z^{k+1} &=&\underset{z}{\arg \min }\mathscr{L} \left( z,w^{k},{\Omega} ^{k},{\Sigma}^{k},{\Theta}^{k};{\lambda_{1}^{k}},{\lambda_{2}^{k}},\lambda_{3}^{k},{\lambda_{4}^{k}}\right)\\ z^{k+1} &=&\underset{z}{\arg \min }\frac{\mu }{2}\left\Vert w^{k}-z\right\Vert_{2}^{2}-{\lambda_{1}^{T}}\left( w^{k}-z\right)\\ && +\frac{\mu }{2}\left\Vert {\Omega}^{k}-\nabla^{2}z\right\Vert_{2}^{2}-\lambda_{2}^{T}\left( {\Omega}^{k}-\nabla^{2}z\right) +\frac{\mu }{2}\left\Vert {\Sigma}^{k}-\nabla z\right\Vert_{2}^{2}\\ &&-{\lambda_{3}^{T}}\left( {\Sigma}^{k}-\nabla z\right) +\frac{\mu }{2}\left\Vert {\Theta}^{k}-z\right\Vert _{2}^{2}-{\lambda_{4}^{T}}\left( {\Theta}^{k}-z\right) \end{array} $$(29)With respect to z, the corresponding normal equation is:
$$ \begin{array}{@{}rcl@{}} z\left( 2\mu I+\mu \left( \nabla^{2}\right)^{T}\nabla^{2}+\mu \nabla^{T}\nabla \right) &=&\mu w^{k}-\lambda_{1}+\mu \left( \nabla^{2}\right)^{T}{\Omega}^{k}\\ &&-\lambda_{2}\left( \nabla^{2}\right)^{T}+\mu \nabla^{T}{\Sigma}^{k}\\ && -\lambda_{3}\nabla^{T}+\mu {\Theta}^{k}-\lambda_{4} \end{array} $$(30)Here, the fast Fourier transform is chosen to diagonalize under periodic boundary conditions the two terms ∇T∇ and \(\left (\nabla ^{2}\right )^{T}\nabla ^{2}\), which are first and second order difference operators. Thus, z can be written as:
$$ z=\mathcal{F}^{-1}\left( \frac{\mathcal{F} \left[ \mu \left( w^{k} - \frac{\lambda_{1}}{\mu }+{\Theta}^{k} - \frac{\lambda_{4}}{\mu }\right) +\mu \left( \nabla^{2}\right)^{T}\left( {\Omega}^{k}-\frac{\lambda_{2}}{\mu }\right)+\mu \nabla^{T}\left( {\Sigma}^{k}-\frac{\lambda_{3}}{\mu }\right) \right] }{2\mu +\mu \left[ \left\vert \mathcal{F} \left[ \nabla^{2}\right] \right\vert^{2}+\left\vert \mathcal{F} \left[ \nabla \right] \right\vert^{2}\right] }\right) $$(31)
2. w sub-problem: fixing \(z^{k+1}\), \({\Omega}^{k}\), \({\Sigma}^{k}\) and \({\Theta}^{k}\) to calculate \(w^{k+1}\), (27) can be transformed into:
$$ \begin{array}{@{}rcl@{}} w^{k+1} &=&\underset{w}{\arg \min }\beta \underset{V}{\sum }\left[ \left( 2L-2\alpha +1\right) w+\left( \gamma +Lb^{2}\right) e^{-2w}\right]\\ && +\frac{\mu }{2}\left\Vert w-z^{k+1}\right\Vert_{2}^{2}-{\lambda_{1}^{T}}\left( w-z^{k+1}\right)\\ &=&\underset{w}{\arg \min }\beta \underset{V}{\sum }\left[ \left( 2L-2\alpha +1\right) w+\left( \gamma +Lb^{2}\right) e^{-2w}\right]\\ && +\frac{\mu }{2}\left\Vert w-\left( z^{k+1}-\frac{{\lambda_{1}^{T}}}{\mu }\right) \right\Vert_{2}^{2} \end{array} $$(32)Since the objective function of w is strictly convex, this equation has a unique solution obtained by using Newton’s method through solving the following non-linear equation:
$$ \beta \left( \gamma +Lb^{2}\right) e^{-2w}-\frac{\mu }{2}\left[ w-\left( z^{k+1}-\frac{\lambda_{1}}{\mu }\right) \right] -\beta \left( L-\alpha +\frac{1}{2}\right) =0 $$(33)The numerical solution is obtained after a few iterations.
3. Ω sub-problem: fixing \(z^{k+1}\), \(w^{k+1}\), \({\Sigma}^{k}\) and \({\Theta}^{k}\) to calculate \({\Omega}^{k+1}\). The solution of the Ω sub-problem is given by the shrinkage operator, as follows:
$$ {\Omega}^{k+1} = \max \left( \left\vert \nabla^{2}z^{k+1}+\frac{\lambda_{2}^{k}}{\mu }\right\vert -\frac{\rho_{1} S\left( \nabla z^{k+1}\right) }{\mu } ,0\right) . \frac{\nabla^{2}z^{k+1}+\frac{{\lambda_{2}^{k}}}{\mu }}{\left\vert \nabla^{2}z^{k+1}+\frac{{\lambda_{2}^{k}}}{\mu }\right\vert } $$(34)
4. Σ sub-problem: fixing \(z^{k+1}\), \(w^{k+1}\), \({\Omega}^{k+1}\) and \({\Theta}^{k}\) to calculate \({\Sigma}^{k+1}\); this is the overlapping group sparsity problem:
$$ {\Sigma}^{k+1}= \underset{\Sigma }{\arg \min }\frac{\mu }{2}\left\Vert {\Sigma} -\left( \nabla z^{k+1}+\frac{{\lambda_{3}^{T}}}{\mu }\right) \right\Vert_{2}^{2} + \rho_{2} \varphi_{OGS}\left( {\Sigma} \right) $$(35)The Σ equation can be solved by the iterative Majorization-Minimization (MM) algorithm. This approach was adopted in [8] to solve the optimization problem of denoising images under Cauchy noise; the MM algorithm has also been used to solve the numerical problem of deblurring Poisson-noisy images using the OGS regularizer [23].
5. Θ sub-problem: given by:
$$ {\Theta}^{k+1}=\underset{\Theta }{\arg \min }\frac{\mu }{2}\left\Vert {\Theta} -\left( z^{k+1}+\frac{{\lambda_{4}^{T}}}{\mu }\right) \right\Vert_{2}^{2}+\Im_{C}\left( {\Theta} \right) $$(36)The Θ subproblem can be solved by a simple projection on the set C.
Finally, the Lagrange multipliers are updated by the following equations:
Finally, the proposed despeckling model is summarized in Algorithm 2.
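As an illustration of the inner updates, the shrinkage operator appearing in the Ω step (34) can be sketched as an elementwise soft-threshold (the 0/0 case is resolved to 0 by convention, an assumption the paper leaves implicit):

```python
import numpy as np

def shrink(v, t):
    """Soft-shrinkage operator of (34): max(|v| - t, 0) * v / |v|,
    with the convention 0/0 = 0. Here v plays the role of
    grad^2 z + lambda_2/mu and t of rho_1 * S(grad z) / mu."""
    mag = np.abs(v)
    direction = np.where(mag > 0, v / np.maximum(mag, 1e-12), 0.0)
    return np.maximum(mag - t, 0.0) * direction

print(shrink(np.array([3.0, -0.2, -2.0]), 0.5))  # thresholds each entry by 0.5
```

Entries with magnitude below the threshold are set exactly to zero, which is what sparsifies the weighted second-order differences at each ADMM iteration.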
5 Experimental results
To evaluate the efficiency and performance of the developed despeckling model, various images are used: standard 8-bit grey-level images, two simulated SAR images, and four scenes of real SAR images. The Cameraman and Peppers images, of size 256 × 256 and 512 × 512, respectively, are used as standard images. As synthetic SAR images, the Napoli and Nimes aerial images, of size 512 × 512 and 400 × 400, respectively, are selected.
The real SAR images used in our experiments are downloaded from the open image database of Sandia National Laboratories, available at http://www.sandia.gov/radar/sar-data.html. The proposed algorithm is compared overall with the following recent methods, using their default parameters:
All experiments are performed on a personal computer with an Intel(R) Core(TM) i5 processor and 4 GB of RAM, under the Windows 7 operating system.
5.1 Parameters setting
To obtain accurate results and the highest possible metric values, it is important to select appropriate simulation parameters. The parameters β, ρ1 and ρ2 are tuned manually to control the balance between the data fidelity and the regularization terms. According to the results obtained when changing the group size K of the OGS term, a high value of K leads to over-smoothed results and higher computational complexity. The optimal value obtained after varying the group size over several experiments is K = 3. In addition, the parameters a and b of the diffusion function are set to 8 and 2, respectively, for all experiments. Finally, to control the ADMM algorithm, the convergence condition stated in [13] is used.
5.2 Performance evaluation metrics
In the general noise case, good noise suppression and good preservation of the image structure can be evaluated quantitatively with metrics such as the Signal-to-Noise Ratio (SNR) and the Mean Square Error (MSE). However, such metrics cannot be applied to speckled real SAR images due to the lack of noise-free data. For this reason, the standard grey-level images and the synthetic SAR images cited above are used as noise-free data. Speckle noise is added synthetically with an equivalent number of looks (ENL = 3), consistent with practical applications [9]. The obtained noisy images exhibit the characteristics of multiplicative noise following a Rayleigh distribution. Three metrics are then used to evaluate the despeckling performance [29]:
The Peak Signal to Noise Ratio (PSNR) is given by:
$$ PSNR=10\log_{10}\left( \frac{u_{max}^{2}}{\frac{1}{MN}\underset{V}{\sum }\left( u-\hat{u}\right)^{2}}\right) $$
The Structural Similarity Index (SSIM) is given by:
$$ SSIM=\frac{\left( 2\mu_{u}\mu_{\hat{u}}+C_{1}\right) \left( 2\sigma_{\hat{u}u}+C_{2}\right) }{\left( \mu_{u}^{2}+\mu_{\hat{u}}^{2}+C_{1}\right) \left( \sigma_{u}^{2}+\sigma_{\hat{u}}^{2}+C_{2}\right) } $$
The Square Root of Mean Square Error (RMSE) is given by:
$$ RMSE=\sqrt{\frac{1}{MN}\underset{V}{\sum }\left( u-\hat{u}\right)^{2}} $$
where u represents the noise-free image of size M × N, \(\hat {u}\) is the recovered image after despeckling, and \(u_{max}\) is the maximum grey level of the original image. μu and \(\mu _{\hat {u}}\) are the mean values of the original and recovered images, respectively, and σu and \(\sigma _{\hat {u}}\) their standard deviations. \(\sigma _{\hat {u}u}\) denotes the covariance between u and \(\hat {u}\). The constants C1 and C2 are added, with small values, to avoid computational instability.
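A minimal sketch of these metrics; note that the SSIM below is computed globally over the whole image as a simplification, whereas the standard metric averages the same expression over local windows:

```python
import numpy as np

def rmse(u, u_hat):
    """Square root of the mean square error."""
    return np.sqrt(np.mean((u - u_hat) ** 2))

def psnr(u, u_hat):
    """Peak signal-to-noise ratio in dB, with u.max() as the image peak."""
    return 10.0 * np.log10(u.max() ** 2 / np.mean((u - u_hat) ** 2))

def ssim_global(u, u_hat, C1=(0.01 * 255) ** 2, C2=(0.03 * 255) ** 2):
    """Global (single-window) SSIM: an illustrative simplification of
    the usual locally windowed SSIM."""
    mu1, mu2 = u.mean(), u_hat.mean()
    s1, s2 = u.var(), u_hat.var()
    s12 = np.mean((u - mu1) * (u_hat - mu2))   # covariance
    return ((2 * mu1 * mu2 + C1) * (2 * s12 + C2)) / \
           ((mu1**2 + mu2**2 + C1) * (s1 + s2 + C2))

u = np.full((8, 8), 100.0)
print(rmse(u, u + 1.0))   # 1.0
print(psnr(u, u + 1.0))   # 10*log10(100^2 / 1) = 40 dB
```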
The despeckling quality of an algorithm can also be assessed through the ENL, which for SAR intensity data is computed as follows:
$$ ENL=\frac{\mu_{u_{s}}^{2}}{\sigma_{u_{s}}^{2}} $$
where \(\mu _{u_{s}}\) and \(\sigma _{u_{s}}\) denote, respectively, the mean and the standard deviation of the selected homogeneous area. In the case of SAR amplitude data, the ENL can be expressed as follows:
$$ ENL=\left( \frac{4}{\pi }-1\right) \frac{\mu_{u_{s}}^{2}}{\sigma_{u_{s}}^{2}} $$
High values of ENL indicate a strong ability of the algorithm to suppress the speckle noise in the homogeneous regions, without introducing distortions.
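The intensity-domain ENL is simply mean²/variance over a homogeneous patch; the sketch below verifies it on a simulated 3-look region (the amplitude-domain correction factor 4/π − 1 is stated here as an assumption for Rayleigh-type amplitude statistics):

```python
import numpy as np

def enl_intensity(region):
    """ENL of a homogeneous intensity region: mean^2 / variance."""
    return region.mean() ** 2 / region.var()

def enl_amplitude(region):
    """ENL for amplitude data, with an assumed (4/pi - 1) correction
    for Rayleigh-type amplitude statistics."""
    return (4.0 / np.pi - 1.0) * region.mean() ** 2 / region.var()

rng = np.random.default_rng(2)
# simulated 3-look homogeneous intensity patch (unit-mean Gamma speckle)
region = 50.0 * rng.gamma(shape=3, scale=1.0 / 3, size=(200, 200))
print(enl_intensity(region))  # close to the true number of looks, 3
```

Computed on a despeckled homogeneous patch, the same estimator yields the large ENL values (e.g. ENL ≈ 189) reported for the proposed method.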
To measure the sharpness and structure preservation of the despeckled images, the ENL ratio (ENLRt) is also calculated, comparing a noisy image with the corresponding denoised one. The values are calculated in small zones containing various structures and objects. The closer ENLRt is to the ENL value of the noisy image, the better the algorithm preserves structures and the sharper its results.
5.3 Experimental results on synthetic images
Figures 5–8 report the results obtained with the different despeckling methods on two standard grey-level images (Cameraman and Peppers) and two synthetic SAR images (Napoli and Nimes). The speckle noise added to each image corresponds to ENL = 3, as shown in Figs. 5a–8a. As can be seen in Fig. 5b, c, the SAR-BM3D and FANS methods give good results, with some false edges around the face and the camera; some artefacts in the background are also noticed for SAR-BM3D. The PPB-nonit method displays clearer edges in Fig. 5e, but the smooth parts of the background are partially removed. This result is improved by the PPB-it method, as shown in Fig. 5f, with some visible false edges. Despite its edge preservation, the staircase effect and small artefacts clearly appear with the TRTVpIdiv method in the background of Fig. 5g. The POTDF method gives the worst result in Fig. 5d, with strongly blurred image content. Our algorithm overcomes these drawbacks: the speckle is reduced and the edges are well maintained. As stated before, the regularization parameters are selected to keep the best trade-off between noise reduction and fine-detail preservation. As appears in Fig. 5h, the right hand of the cameraman is well restored compared with the other methods.
Figure 6 presents the Peppers image despeckled by the different algorithms. As for the previous image, the staircase effect appears in the TRTVpIdiv result (Fig. 6g), and a blurred image is obtained with the POTDF algorithm (Fig. 6d). The PPB-nonit and the proposed methods efficiently reduce the speckle noise compared with the rest of the methods, without affecting the image structure, as shown in Fig. 6.
The two synthetic SAR images, Napoli and Nimes, are used as noise-free SAR data. Figures 7 and 8 present the despeckling results for all the tested methods. A similarity can be observed between the FANS, SAR-BM3D and PPB-it results in Fig. 7b, c, f, with a small enhancement for PPB-it. The speckle noise is well removed by PPB-nonit (Fig. 7e), but at the cost of strongly smoothing some textures and objects. According to the results presented in Fig. 7g, speckle noise is still present, with a staircase effect, in the TRTVpIdiv result.
Meanwhile, an enhanced and smoothed result is obtained with the proposed method in Fig. 7h, with a notable difference appearing in the bottom right corner of the other despeckled images, except those obtained by the PPB-it and PPB-nonit methods. The results presented in Fig. 8 confirm the efficiency of the proposed method.
Tables 1, 2 and 3 summarize the quantitative values obtained for PSNR, SSIM and RMSE. For all the used images, our proposed method has the highest PSNR and SSIM values, followed by FANS, PPB-nonit and SAR-BM3D, whereas POTDF shows the lowest performance. This reflects the ability of our algorithm to remove speckle noise with good preservation of edges and textures. For all experiments, the minimum RMSE value is obtained with our proposed algorithm, which proves the strong similarity between the despeckled images and the original ones.
Finally, all results listed in Tables 1, 2 and 3 demonstrate the effectiveness of our algorithm, in terms of PSNR, SSIM and RMSE.
5.4 Experimental results on real SAR images
To prove the ability and efficiency of our algorithm in dealing with speckle noise, it is necessary to test it on real SAR images. Four scenes (denoted SAR1–SAR4) with an equivalent number of looks (ENL = 3) are chosen from the Sandia National Laboratories official site. Four parts of different sizes are selected from each scene, and for each part two areas (homogeneous and non-homogeneous) are defined. These areas contain some objects marked with red and yellow windows, respectively. Visual inspection of Figs. 9–12 shows that all the methods reject the speckle noise; however, the side effects differ from one method to another.
For the image in Fig. 9d, the despeckled result produced by the POTDF algorithm is strongly blurred. In Fig. 9g, a staircase effect still exists in the homogeneous regions with the TRTVpIdiv method, whereas the proposed method and the remaining ones efficiently remove the speckle noise.
Figure 10d shows a blurred result obtained by the POTDF method, and remaining speckle is visible with the TRTVpIdiv algorithm (Fig. 10g). The results in Fig. 10b, c obtained by the FANS and SAR-BM3D methods appear sharp and satisfactory, but remaining speckle can be observed in the top left corner. An enhanced result is obtained by the PPB-it method (Fig. 10f). According to Fig. 10h, the proposed method gives the best performance in terms of speckle reduction and smoothness, without generating artefacts or staircase effects. Moreover, the results displayed in Fig. 11 differ from one method to another: smeared features for PPB-nonit (Fig. 11e), a strongly smoothed image for PPB-it (Fig. 11f), and blurry results for POTDF (Fig. 11d). Enhanced results are obtained by SAR-BM3D and FANS (Fig. 11b, c); for the latter, some over-smoothing of homogeneous parts is also noticed, which can be explained by the use of look-up tables. The despeckled image obtained with our algorithm (Fig. 11h) presents well-smoothed homogeneous parts and good edge preservation. The results presented in Fig. 12h confirm the robustness of our algorithm in homogeneous areas containing man-made structures: the fine details and bright scatterers are well preserved, and the shadow of the airplane is well enhanced, which may improve object recognition and feature extraction.
From Table 4, it can be seen that our algorithm gives the highest ENL values for SAR1 and SAR4, and its ENLRt values are close to the ENL of the original image (ENL = 3). For SAR2 and SAR3, PPB-nonit has the highest ENL. These results indicate that our algorithm reduces the speckle noise efficiently in the homogeneous regions while preserving edges. The POTDF results, however, are blurred. The FANS and SAR-BM3D methods exhibit similar results, as previously confirmed by visual inspection. Overall, the proposed method adaptively reduces speckle noise according to the local heterogeneity; the staircase effects in the homogeneous regions are effectively avoided, while fine details, textures and repetitive structures are well preserved.
5.5 CPU time consumption
Table 5 summarizes the CPU time of the proposed method alongside the aforementioned methods. For all experiments listed in Table 1, our algorithm is the fastest, followed by FANS and PPB-nonit. Thanks to the proposed fast diffusion function, which strongly accelerates the despeckling process, our algorithm ensures rapid convergence to the solution.
6 Conclusion
In this paper, a new combination of a weighted second-order TV, an OGS regularizer and a box constraint is proposed for SAR image despeckling. A new fast and efficient diffusion function is introduced to mitigate over-smoothing and the staircase effect, and to speed up the despeckling process. The proposed method is experimentally compared with state-of-the-art approaches on both synthetic and real images. The results show that our method adaptively reduces speckle and provides superior performance in terms of SSIM, PSNR, RMSE, ENL, and CPU time.
Besides the speckle reduction application, we expect to extend our work to handle the wider class of SAR data problems, such as image reconstruction, change detection, interferometry and polarimetry.
Data Availability
All real SAR images used in our experiments are available online in the open source SAR database of Sandia National Laboratories, and can be found at http://www.sandia.gov/radar/sar-data.html
References
Achim A, Kuruoglu EE, Zerubia J (2006) SAR image filtering based on the heavy-tailed Rayleigh model. IEEE Trans Image Process 15(9):2686–2693
Aubert G, Aujol JF (2008) A variational approach to removing multiplicative noise. SIAM J Appl Math 68(4):925–946
Bansal M, Kumar M, Kumar M et al (2021) An efficient technique for object recognition using Shi-Tomasi corner detection algorithm. Soft Comput 25(6):4423–4432
Buades A, Coll B, Morel JM (2005) A non-local algorithm for image denoising. In: 2005 IEEE Computer society conference on computer vision and pattern recognition (CVPR’05), IEEE, pp 60–65
Cassetti J, Delgadino D, Rey A et al (2021) SAR image classification using non-parametric estimators of Shannon entropy. In: 2021 2nd China International SAR symposium (CISS). IEEE, pp 1–5
Cozzolino D, Parrilli S, Scarpa G et al (2013) Fast adaptive nonlocal SAR despeckling. IEEE Geosci Remote Sens Lett 11(2):524–528
Deledalle CA, Denis L, Tupin F (2009) Iterative weighted maximum likelihood denoising with probabilistic patch-based weights. IEEE Trans Image Process 18(12):2661–2672
Ding M, Huang TZ, Wang S et al (2019) Total variation with overlapping group sparsity for deblurring images under cauchy noise. Appl Math Comput 341:128–147
Feng W, Lei H, Gao Y (2014) Speckle reduction via higher order total variation approach. IEEE Trans Image Process 23(4):1831–1843
Frery AC, Muller HJ, Yanasse CdCF et al (1997) A model for extremely heterogeneous clutter. IEEE Trans Geosci Remote Sens 35(3):648–659
Frost VS, Stiles JA, Holtzman JC et al (1980) Radar image preprocessing. In: LARS symposia, p 350
Garg D, Garg NK, Kumar M (2018) Underwater image enhancement using blending of CLAHE and percentile methodologies. Multimed Tools Appl 77(20):26545–26561
Glowinski R, Le Tallec P (1989) Augmented Lagrangian and operator-splitting methods in nonlinear mechanics. SIAM
Guo M, Han C, Wang W et al (2021) A novel truncated nonconvex nonsmooth variational method for SAR image despeckling. Remote Sens Lett 12(2):122–131
Karakuş O, Kuruoğlu EE, Altınkaya MA (2018) Generalized Bayesian model selection for speckle on remote sensing images. IEEE Trans Image Process 28(4):1748–1758
Kilany N (2016) Weighted Lomax distribution. SpringerPlus 5(1):1–18
Kuan DT, Sawchuk AA, Strand TC et al (1985) Adaptive noise smoothing filter for images with signal-dependent noise. IEEE Trans Pattern Anal Mach Intell 2:165–177
Kumar M, Chhabra P, Garg NK (2018) An efficient content based image retrieval system using BayesNet and k-NN. Multimed Tools Appl 77(16):21557–21570
Kumar M, Kumar M et al (2021) XGBoost: 2D-object recognition using shape descriptors and extreme gradient boosting classifier. In: Computational methods and data engineering. Springer, pp 207–222
Lee JS (1980) Digital image enhancement and noise filtering by use of local statistics. IEEE Trans Pattern Anal Mach Intell 2:165–168
Liu J, Huang TZ, Liu G et al (2016) Total variation with overlapping group sparsity for speckle noise reduction. Neurocomputing 216:502–513
Liu P (2020) Hybrid higher-order total variation model for multiplicative noise removal. IET Image Process 14(5):862–873
Lv XG, Jiang L, Liu J (2016) Deblurring Poisson noisy images by total variation with overlapping group sparsity. Appl Math Comput 289:132–148
Massonnet D, Souyris JC (2008) Imaging with synthetic aperture radar. EPFL Press
Nie X, Huang X, Feng W (2017) A new nonlocal TV-based variational model for SAR image despeckling based on the G0 distribution. Digital Signal Process 68:44–56
Oliveira FA, Ferreira RM, Lapas LC et al (2019) Anomalous diffusion: a basic mechanism for the evolution of inhomogeneous systems. Front Phys 7:18
Parrilli S, Poderico M, Angelino CV et al (2011) A nonlocal SAR image denoising algorithm based on LLMMSE wavelet shrinkage. IEEE Trans Geosci Remote Sens 50(2):606–616
Perona P, Malik J (1990) Scale-space and edge detection using anisotropic diffusion. IEEE Trans Pattern Anal Mach Intell 12(7):629–639
Ponmani E, Saravanan P (2021) Image denoising and despeckling methods for SAR images to improve image enhancement performance: a survey. Multimed Tools Appl 80(17):26547–26569
Rudin LI, Osher S, Fatemi E (1992) Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena 60(1-4):259–268
Shi J, Osher S (2008) A nonlinear inverse scale space method for a convex multiplicative noise model. SIAM J Imaging Sci 1(3):294–321
Tebini S, Mbarki Z, Seddik H et al (2016) Rapid and efficient image restoration technique based on new adaptive anisotropic diffusion function. Digital Signal Process 48:201–215
Tebini S, Seddik H, Braiek EB (2016) An advanced and adaptive mathematical function for an efficient anisotropic image filtering. Comput Math Appl 72(5):1369–1385
Wang R, He N, Wang Y et al (2020) Adaptively weighted nonlocal means and TV minimization for speckle reduction in SAR images. Multimed Tools Appl 79(11):7633–7647
Xu B, Cui Y, Li Z et al (2014) Patch ordering-based SAR image despeckling via transform-domain filtering. IEEE J Selected Topics Appl Earth Observ Remote Sens 8(4):1682–1695
Zhou Z, Lam EY, Lee C (2019) Nonlocal means filtering based speckle removal utilizing the maximum a posteriori estimation and the total variation image prior. IEEE Access 7:99231–99243
Acknowledgements
The authors would like to thank the Sandia National Laboratories group for sharing the real SAR images used in our experiments.
Ethics declarations
Conflict of Interests
This statement certifies that all authors have seen and approved the manuscript being submitted. We warrant that the article is the authors' original work, that it has not received prior publication, and that it is not under consideration for publication elsewhere, in whole or in part. On behalf of all co-authors, the corresponding author bears full responsibility for the submission.
Cite this article
Nabil, G., Azzedine, B. & Mustapha, B. Fast and efficient variational method based on G0 distribution for SAR image despeckling. Multimed Tools Appl 82, 5899–5922 (2023). https://doi.org/10.1007/s11042-022-13472-0