Abstract
Variational models involving Euler’s elastica energy have been widely used in many fields of digital image processing, such as image inpainting and additive Gaussian noise removal. In this paper, motivated by the signal-dependent nature of multiplicative noise, the Euler’s elastica functional is modified to suit the multiplicative denoising problem, and a novel multiplicative noise removal model based on adaptive Euler’s elastica is proposed. Furthermore, we develop two fast numerical algorithms to solve this high-order nonlinear model: for the evolution form of the Euler–Lagrange equation, a semi-implicit iterative scheme is designed and the additive operator splitting algorithm is used to speed up the calculation; extending the augmented Lagrangian algorithm that has been successfully applied in recent years, we obtain a restricted proximal augmented Lagrangian method. Numerical experiments show the effectiveness of the two algorithms and the significant advantages of our model over the standard total variation denoising model in alleviating the staircase effect and restoring tiny geometrical structures, especially line-like features.
1 Introduction
Multiplicative noise removal is one of the most fundamental tasks in the study of coherent imaging systems [1,2,3,4]. It is crucial to find an effective method that preserves the details of digital images as much as possible while smoothing out the oscillations. Denote an observed image by \(f=f(x)\), \(x\in { \Omega }\), where \({ \Omega }\) is a bounded open subset of \(\mathbb {R}^{2}\) with a Lipschitz boundary. The degradation process of a coherent imaging system is usually modeled mathematically as
where u is the unknown ideal image, \(\circ \) refers to componentwise multiplication, and \(\eta \) is noise obeying a certain probability distribution. In real-world applications, for example, the Gamma distribution arises in synthetic aperture radar (SAR) images [3], while the Rayleigh distribution arises in ultrasound imaging [4]. In this paper, we focus on dealing with multiplicative noise in L-look SAR images [5]. It is well known that this noise obeys a Gamma distribution
where \(\chi _{\{\eta \ge 0\}}\) is the indicator function of the set \(\{\eta \ge 0\}\), the mean value of \(\eta \) is 1, and the variance of noise \(\eta \) is 1/L.
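As a concrete illustration, the degradation (1) with Gamma speckle can be simulated with Python's standard library; the function name and the list-of-lists image format are our own conventions, not from the paper.

```python
import random

def speckle(u, L, seed=0):
    """Multiply each pixel of image u (a list of rows) by Gamma noise with
    shape L and scale 1/L, so the noise has mean 1 and variance 1/L,
    mimicking an L-look SAR acquisition as in degradation model (1)."""
    rng = random.Random(seed)
    return [[px * rng.gammavariate(L, 1.0 / L) for px in row] for row in u]
```

Sampling a large constant image and checking the empirical mean and variance recovers the stated moments (mean 1, variance 1/L).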
Various techniques such as TV regularization, non-local filtering, sparse representation and the tight-frame approach have been applied to deal with multiplicative Gamma noise; we refer the reader to [6,7,8,9,10,11,12,13,14,15]. Among variational methods, one of the landmarks is the AA model [14]. Its authors derived a new fidelity term from the maximum a posteriori estimation under the assumption that the multiplicative noise obeys the Gamma distribution. The goal is to minimize the following energy functional
where \(\int _{ \Omega }|Du|\) stands for the total variation (TV) of u. However, the model lacks global convexity, since it is strictly convex only for \(u\in (0,2f)\). Nevertheless, it spurred the development of a series of globally convex models based on TV regularization, which perform well for piecewise constant images. For instance, Huang, Ng and Wen [7] used the transformation \(u=\exp {z}\) in (3) and considered TV regularization for z. Zhao, Wang and Ng [10] proposed rewriting the degradation equation (1) as \(f\circ w=u\), where \(w(x)=\frac{1}{\eta (x)}\), and derived the following model for multiplicative noise removal:
where \(\mu \) can be set to the mean value of w. The model is jointly convex in (u, w). On the other hand, TV regularization is known to produce undesired artifacts, such as the staircase and blocky effects [16]. To mitigate this inherent drawback, researchers began to study replacing TV with higher-order differential operators [17, 18]. In addition, Chan and Strong [19] introduced the adaptive total variation and proposed the following model:
where the weight function g(x) controls the regularization level of different regions. Most existing weight functions, however, are designed according to \(|\nabla u|\), i.e., the structural information of the image. After the signal dependence of multiplicative noise was identified in [20], several authors began to explore the use of a gray level indicator in the weight function.
Another approach to multiplicative noise removal is diffusion based. In essence, anisotropic diffusion iteratively modifies the observed image through a partial differential equation (PDE) to suppress noise. This kind of method is inspired by the work of [21], where a nonlinear diffusion equation (the PM model) was proposed for images corrupted by additive Gaussian noise. Later, Zhou et al. [11] proposed a doubly degenerate (DD) diffusion model to address the multiplicative noise removal problem
Here, the diffusion coefficient is given as follows
where \(\alpha >0,\) \(0<\beta <1\), \(M=\sup _{x\in { \Omega }}u(x,t)\), \(\frac{2|u|^{\alpha }}{M^{\alpha }+|u|^{\alpha }}\) is a gray level indicator, and \(\frac{1}{\left( 1+|\nabla u|^{2}\right) ^{\frac{1-\beta }{2}}}\) serves as an edge detector. In [22], a fractional-order nonlinear diffusion model was proposed, which handles multiplicative noise well and protects texture. Because these equations are highly nonlinear, the CFL condition forces the time step to be small, so the algorithm requires many iterations. Although the DD model uses fast explicit diffusion (FED) to accelerate the calculation, the explicit nature of the diffusion still places a severe limitation on the time step.
Recently, Euler’s elastica regularization has received extensive attention because of its good geometric interpretation. Many functional models based on Euler’s elastica have been proposed for image inpainting [23, 24], segmentation [25, 26], restoration [27], deconvolution [28] and other tasks. Since they integrate curvature information involving second derivatives, the corresponding Euler–Lagrange equations are usually fourth-order. Calculating their evolution equations directly is very time-consuming, which raises the nontrivial issue of developing effective and efficient algorithms to solve them. The work of [29] approximates the Euler’s elastica based image inpainting model by an elliptic system consisting of a weighted TV equation and a linear equation. Furthermore, several fast numerical algorithms based on operator splitting techniques have been widely used to solve diverse models arising from image processing [30, 31]. They convert the original problem into a few simple, easily solved subproblems and also allow for a broader class of regularizers. However, parameter tuning greatly affects the convergence rate [32], and convergence guarantees for non-convex problems are limited. Nevertheless, such algorithms remain attractive for nonlinear and non-convex problems [26, 33, 34], because their numerical implementation is genuinely effective.
In this article, our main contributions are twofold. Firstly, since the Euler’s elastica functional is not suitable for multiplicative noise, we propose a novel multiplicative denoising model based on adaptive Euler’s elastica regularization by introducing the gray level indicator [20, 35]. Secondly, two effective and efficient numerical algorithms are proposed. On one hand, an efficient semi-implicit discrete scheme for the gradient descent flow is designed, and an additive operator splitting algorithm is employed to accelerate the calculation. On the other hand, a restricted proximal augmented Lagrangian algorithm is proposed, which transforms the original complicated functional minimization problem into an iterative update of a series of simple subproblems. Numerical experiments illustrate that the new model restores geometric features of the image well.
The paper is organized as follows. In Sect. 2, the working mechanism of Euler’s elastica energy in image processing is reviewed, and then a model based on adaptive Euler’s elastica is developed for images contaminated by multiplicative Gamma noise. In Sect. 3, the model is implemented by the gradient descent method, and a semi-implicit discrete scheme with the AOS algorithm is designed to solve the evolution equation. Additionally, a restricted proximal augmented Lagrangian algorithm is presented in Sect. 4. Experimental results are discussed in Sect. 5. Section 6 concludes the paper by highlighting the significance of the proposed model.
2 Adaptive Euler’s Elastica Regularization Combined with Convex Fidelity Terms for Multiplicative Noise Removal
In this section, we review the related concepts and geometric meanings of Euler’s elastica energy. Then, benefiting from the geometric characteristics of adaptive Euler’s elastica energy, a new multiplicative Gamma noise removal model is described, which can maintain some features of the image.
2.1 The Brief Review on Variational Geometry
In image analysis, an image can be regarded as a collection of curves that must be dealt with simultaneously. Let u(x) denote an image on a domain \({ \Omega }\); then the image is a stack of curves
To weigh the quality of each individual curve \(\gamma _{\lambda }\), the first step is to establish some suitable measure \(E[\gamma _{\lambda }]\).
The first natural geometric measure is the length L. In addition, the curvature k plays a more explicit role, since humans can easily perceive convex or concave features of shapes and boundaries. Generally, the second-order measure for a smooth curve \(\gamma _{\lambda }\) is given by
where f(k) is some suitable function of curvature. In order to effectively penalize local singularities, such as kinks or corners, a special but very useful case is Euler’s elastica measure
with some positive constant coefficients a and b. The ratio a/b manifests the relative importance of the total length versus total squared curvature.
Thus, for image denoising, the minimization criterion becomes
Using the celebrated co-area formula, this can be lifted to be
When \(b=0\), the functional degenerates into the total variation of u. This reveals that the geometric essence of TV measure is the sum of the perimeters of image level set curves.
It is worth noting that the functional \(E_{el}[u]\) consists of two parts, TV and the curvature term \(I:=\int _{ \Omega }\left( \nabla \cdot \frac{\nabla u}{|\nabla u|}\right) ^{2}|\nabla u|\mathrm{d}x\), where I is a higher-order term with stronger regularity. Generally, higher-order derivatives play a more important role. For example, Poincar\(\acute{\mathrm{e}}\)’s inequality and embedding theorem tell us that lower derivatives can be controlled by higher derivatives. Therefore, if the adaptive strategy [19, 20] is adopted to make the model suitable for the multiplicative denoising problem, a weight function can be designed to replace the constant a in (4) to correct the basic regularization process caused by TV. In Sect. 2.3, the effects of this modification will be experimentally demonstrated.
2.2 Proposed Model
In this subsection, we aim to establish a multiplicative noise removal model based on adaptive Euler’s elastica. In particular, by analyzing the characteristics of multiplicative noise, the gray level indicator \(\alpha (u)\) is introduced, namely
where p is a positive constant, \(G_{\sigma }\) represents a Gaussian kernel whose width parameter is \(\sigma \), and \(M=\max G_{\sigma }u\).
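A minimal sketch of this indicator, assuming the form \(\alpha(u)=(G_{\sigma }u/M)^{p}\) with \(M=\max G_{\sigma }u\) as described above (the separable replicate-boundary Gaussian filter and all function names are our own illustrative choices):

```python
import math

def gaussian_kernel(sigma, radius=None):
    """Normalized 1D Gaussian kernel; radius defaults to ~3*sigma."""
    radius = radius or max(1, int(3 * sigma))
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def smooth(img, sigma):
    """Separable Gaussian convolution with replicate boundary (illustrative)."""
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    def conv1d(line):
        n = len(line)
        return [sum(k[j + r] * line[min(max(i + j, 0), n - 1)]
                    for j in range(-r, r + 1)) for i in range(n)]
    rows = [conv1d(row) for row in img]
    cols = [conv1d(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

def gray_level_indicator(f, sigma=1.0, p=2.0):
    """alpha(f) = (G_sigma f / M)^p with M = max G_sigma f (assumed form)."""
    g = smooth(f, sigma)
    M = max(max(row) for row in g)
    return [[(v / M) ** p for v in row] for row in g]
```

By construction the indicator lies in [0,1], is small at low gray levels and close to 1 at high gray levels, matching the adaptive behavior described in Sect. 2.2.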
In fact, according to the degradation model (1), the expectation and variance of the noisy image can be obtained through simple calculations as
It is not difficult to find that the variance of noisy image depends on expectation, which reveals the signal-dependent nature of multiplicative noise pollution levels. As illustrated in Fig. 1a, the greater the original signal strength, the greater the influence of noise. Therefore, we expect the regularization level to be higher where the original image pixel value is higher.
However, since image denoising is a typical inverse problem, the original pixel value can only be approximated from the pre-processed noisy image. Thus, in the gray level indicator (5), the image is first preprocessed with a small-scale Gaussian filter, and the gray level is then compressed to [0,1]. The idea of the exponential parameter p comes from gamma correction [36] and is used to avoid compressing pixel information at intermediate gray levels. Since the noise pollution level is positively correlated with the gray level, the gray level indicator can be used as a weight function in the regularization term to adjust the degree of regularization adaptively. Specifically, the gray level indicator \(\alpha (u)\) becomes very small at low gray levels, which helps preserve image information there. In high gray level regions, \(\alpha (u)\) is close to 1, so that outliers are eliminated. As shown in Fig. 1b, \(\alpha \) coincides with the pollution level at each pixel of the noisy image.
Therefore, combined with the fidelity term in [20], we propose the following multiplicative denoising functional model based on the adaptive Euler’s elastica
where the positive constants b and \(\eta \) balance the importance of regularity and fidelity. In particular, here \(\alpha =\alpha (f)\) is considered a known coefficient function.
2.3 The Role of Regularization Terms
In fact, changing the coefficient pair \((\alpha , b)\) in model (6) causes the model to degenerate. Specifically, model (6) covers four types of regularization terms: (1) adaptive Euler’s elastica regularization: \(\alpha =\alpha (f), b\ne 0\); (2) standard Euler’s elastica regularization: \(\alpha \) is a positive constant, \(b \ne 0\); (3) adaptive total variation: \(\alpha =\alpha (f),b=0\); (4) TV (AA model): \(\alpha \) is a positive constant, \(b=0\). Next we test these four cases to illustrate the role of the regularization term.
In Fig. 2, four kinds of denoising results are shown by taking "Color.png" as the test image. The parameters and the peak signal to noise ratio (PSNR) are recorded in Table 1. By comparison, the classical AA model, which adopts total variation regularization, has a lower PSNR value and an obvious staircase effect in the recovery result (Fig. 2c). PSNR values have been significantly improved under Euler’s elastica framework or when the adaptive total variation technique is applied, but there are still some dissonant factors in the restored image. For example, the "fish scale" phenomenon is shown in Fig. 2d, and some halos still appear in Fig. 2e. Naturally, this means that their level set curves are not very smooth and appear jagged in many places. On the whole, when adaptive total variation is combined with curvature constraint, a more ideal result (Fig. 2f) can be achieved. Thus, the proposed adaptive Euler’s elastica regularization is meaningful.
In conclusion, the adaptive Euler’s elastica regularization consists of two parts: the adaptive total variation and the curvature-driven term. They control the length and contour of the level set curves respectively. Therefore, it has a stronger ability to deal with details, especially the gradient information.
3 The Additive Operator Splitting (AOS) Algorithm
3.1 The Gradient Descent Flow
The basis of the calculation in this section is the variational method and gradient descent flow. That is, the solution of the original minimization functional model is transformed into the calculation of the gradient descent flow corresponding to the first-order variation of the functional.
The first variation of Euler’s elastica energy (4) was derived in [23]; see Lemma 1 below for details.
Lemma 1
Let \(\phi \in C^{1}(\mathbb {R},[0,\infty ))\) be a given function and
Then the first variation is given by
where the flux field \(\mathbf {V}\) is
Here \(\mathbf {n}\) is the ascending normal field, and \(\mathbf {t}\) is the tangent field. Namely,
Therefore, the Euler–Lagrange equation corresponding to the minimization problem (6) is
Then, the gradient descent method is adopted to obtain the following steepest descent marching
The natural boundary conditions along \(\partial { \Omega }\) are
3.2 Explicit Scheme
From now on, we present the numerical scheme for the evolution equation (7). Let (i, j) denote the digital pixel locations, and choose a small time step \(\tau \); time is then discretized as \(n\tau \), \(n=0,1,\ldots \). Thus \(u^{n}_{i,j}\) denotes the value of u at pixel (i, j) at time \(n\tau \). For convenience, the right-hand side of equation (7) is denoted as H(u).
If the explicit scheme is used directly to discretize the equation, we have
Then the update criterion is
The time step \(\tau \) must be very small to ensure the strict stability of the algorithm, so more iterations are needed to achieve convergence. In Table 2, we use Cameraman and Hourglass as test images. We can see that the explicit scheme is very inefficient because it is strictly limited by a very small time step. At the same time, the computational cost depends on the image size and the noise level.
Therefore, we propose a new and efficient numerical PDE scheme in the next subsection.
3.3 Semi-implicit Scheme with AOS
In order to improve the efficiency of computation, a semi-implicit scheme is designed to reduce the limitation of time step on the efficiency of the algorithm. Note that the right term of the equation in (7) actually consists of three parts. For simplicity, the combination of the cross partial derivative and the nonlinear source term is denoted as F(u). The coefficient of the diffusion term and F(u) are fixed to the value at \(n\tau \). So the following semi-implicit scheme is obtained
For brevity, set \(G(u^{n})=\frac{\alpha +b(k^{n})^{2}}{|\nabla u^{n}|}\nabla u^{n+1}\). The constructed scheme (10) is exactly suited to the additive operator splitting (AOS) algorithm proposed in [37], a fast algorithm for solving nonlinear diffusion equations. The core of the algorithm is to divide the original iteration (10) into two: the two-dimensional diffusion process is split into one-dimensional diffusions along the rows and columns of \(u^{n}\), yielding two intermediate results \(U_{1}^{n+1}\) and \(U_{2}^{n+1}\). Their average then gives the next iterate, \(u^{n+1}=\frac{U^{n+1}_{1}+U^{n+1}_{2}}{2}\). Therefore, the problem is transformed into solving the following algebraic equations
where \(G_{x}^{n}\) and \(G_{y}^{n}\) are two one-dimensional diffusion operators. Take the solution of equation (13) as an example. Let the image size be \(M \times N\) and update the jth column of u sequentially: traverse j from 1 to N, using the Thomas algorithm to solve the equations
for the jth column of \(u^{n}\). Due to the boundary condition (8), let \(u_{0,j}^{n}=u_{1,j}^{n}\) and \(u_{N+1,j}^{n}=u_{N,j}^{n}\) to extend the image formally. Thus, the corresponding column diffusion operator \(G^{n}_{y,j}\) is obtained
where \(G^{n}_{y,j}\) is an \(N\times N\) tridiagonal matrix. Similarly, (12) can also be solved. Specifically, the process of solving (10) by AOS algorithm is written in Algorithm 1.
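The inner kernel of each AOS half-step is a tridiagonal solve. A minimal pure-Python Thomas algorithm (our own sketch, with the usual sub-/main-/super-diagonal naming a, b, c) looks like:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system T x = d, where a is the sub-diagonal
    (a[0] unused), b the main diagonal, and c the super-diagonal
    (c[-1] unused), via forward elimination and back substitution."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]          # eliminated pivot
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

In the AOS iteration, this O(N) solver would be applied once per row and once per column of the image, so each time step remains linear in the number of pixels.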
It is worth noting that the AOS algorithm is absolutely stable for any time step \(\tau \). This is stated in the following simple theorem; see [37] for details.
Theorem 1
For any time step \(\tau \), the semi-implicit scheme (12)–(13) is absolutely stable.
Proof
\(G_{x,i}^{n}\) and \(G_{y,j}^{n}\) are diagonally dominant matrices whose diagonal elements are all negative. For any \(\lambda _{1}>0\), \(\lambda _{1} I-G_{y,j}^{n}\) is strictly diagonally dominant, so \((\lambda _{1} I-G_{y,j}^{n})x=0\) has no non-zero solutions. Hence \(\lambda _{1}\) is not an eigenvalue of \(G_{y,j}^{n}\), and the eigenvalues of \(G_{y,j}^{n}\) are nonpositive. If \(\lambda _{1}\le 0\) is an eigenvalue of \(G_{y,j}^{n}\), then the corresponding eigenvalue \((1-2\tau \lambda _{1})^{-1}\) of \((I-2\tau G_{y,j}^{n})^{-1}\) satisfies \((1-2\tau \lambda _{1})^{-1}\in (0,1]\). Similarly, if \(\lambda _{2}\le 0\) is an eigenvalue of \(G_{x,i}^{n}\), then the eigenvalue \((1-2\tau \lambda _{2})^{-1}\) of \((I-2\tau G_{x,i}^{n})^{-1}\) satisfies \((1-2\tau \lambda _{2})^{-1}\in (0,1]\). Thus, the semi-implicit scheme is stable for any \(\tau >0\). \(\square \)
4 The Restricted Proximal Augmented Lagrangian Method
By introducing auxiliary variables, a proximal augmented Lagrangian method is used to relax model (6), and the alternating direction method of multipliers (ADMM) is used to solve for the minimizer iteratively. It is worth noting that a simple truncation strategy from [34] is adopted to ensure the consistency between the model and the augmented-Lagrangian-based algorithm. That is, when the coefficient b of the curvature term equals 0, the original problem reduces to an adaptive total variation model, so the corresponding numerical algorithm should directly solve the adaptive total variation model from the very beginning.
4.1 Conventional Augmented Lagrangian Functional
Now, we introduce four variables \(\mathbf{p} , \mathbf{n} , h, v\), and cast the functional (6) into a constrained minimization problem
where the virtue of v is to split the nonlinear term and the higher-order term. To avoid the degenerate case of \(\mathbf{p} =\mathbf{0} \), the constraint \(\mathbf{n} =\frac{\mathbf{p} }{|\mathbf{p} |}\) is amended to its equivalent form \(|\mathbf{p} |=\mathbf{n} \cdot \mathbf{p} , |\mathbf{n} |\le 1\), as suggested in [33, 38]. The crucial constraint \(|\mathbf{n} |\le 1\) prevents \(|\mathbf{n} |\) from being unbounded when \(\mathbf{p} =\mathbf{0} \). To impose the constraint \(|\mathbf{n} |\le 1\) a.e. in \({ \Omega }\), we define a characteristic function
So far, the Lagrangian functional can be shown as
Here, \(\lambda _{1},\lambda _{3},\lambda _{4}\in \mathbb {R}\) and \(\varvec{\lambda _{2}}\in \mathbb {R}^{2}\) are Lagrangian multipliers, and \(r_{i}\) \((i=1,2,3,4)\) are positive weighting parameters. The aim of the augmented Lagrangian method is to find a saddle point of (15), which approximates the solution of the original problem (6). To this end, the alternating direction method of multipliers is used: given the previous iterate \((u^{k},\varvec{p}^{k},\varvec{n}^{k},h^{k},v^{k};\lambda _{1}^{k},\varvec{\lambda _{2}}^{k},\lambda _{3}^{k},\lambda _{4}^{k})\), we solve the subproblems for \(u,\varvec{p},\varvec{n},h,v\) in turn and then update the Lagrangian multipliers.
However, in order to make the iteration results more stable and obtain the consistency between the algorithm and the model, the subproblems of u and \(\mathbf{p} \) are modified.
4.2 Proximal Operators and Truncation Techniques
On one hand, for the u-subproblem, we add a quadratic constraint, so that the updated \(u^{k+1}\) does not deviate too far from the result \(u^{k}\) of the last iteration. So, the u-subproblem is
Here S is a self-adjoint positive semidefinite operator; for simplicity, we consider the proximal term in the \(L^{2}\) setting. In our experiments, adding this quadratic term helps us select appropriate parameters to obtain a convergent solution. In addition, the existence and uniqueness of the solution to the u-subproblem is supported by the theory of proximal operators. We provide a brief overview below; see [32, 39, 40] for details.
For a convex function g, the proximal operator of g with respect to a semi-definite matrix S at y is defined by
where \(\mu >0\). If g is strictly convex and coercive, then its proximal operator exists and is unique [41]. In particular, when the matrix S is exactly the identity matrix I, the above proximal operator reduces to
and the associated optimal value specifies its Moreau envelope
The Moreau envelope is a continuously differentiable function, even when g is not, and its gradient [39] is given by
For instance, when g is the \(l_{1}\) norm, \(g(z)=\Vert z\Vert _{1}=\sum |z_{i}|\), then the proximal operator is determined by soft thresholding
the associated Moreau envelope is the Huber function
and its gradient is the saturation function \(\mathrm{sign}(y_{i})\cdot \min (|y_{i}|/\mu ,1)\).
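These scalar formulas can be written out directly. The following is an illustrative sketch of the \(l_{1}\) proximal operator and its Moreau envelope (function names are ours):

```python
def soft_threshold(y, mu):
    """Componentwise prox of mu*||.||_1: shrink each entry toward zero by mu,
    i.e. prox(y_i) = sign(y_i) * max(|y_i| - mu, 0)."""
    return [(abs(v) - mu) * (1.0 if v > 0 else -1.0) if abs(v) > mu else 0.0
            for v in y]

def huber(y, mu):
    """Moreau envelope of the l1 norm: quadratic y_i^2/(2*mu) near zero,
    linear |y_i| - mu/2 beyond the threshold mu."""
    return sum(v * v / (2 * mu) if abs(v) <= mu else abs(v) - mu / 2 for v in y)
```

Evaluating both at a few points confirms the textbook values, e.g. the envelope of \(|y|\) at \(y=3\) with \(\mu =1\) is \(3-1/2=2.5\).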
On the other hand, for the \(\varvec{p}\)-subproblem, the cutting-off strategy proposed in [34] is used to remove the coupling with the variable \(\varvec{n}\), in accordance with the dependence order of the variables. Therefore, the \(\varvec{p}\)-subproblem reduces to
Because \(\varvec{n}\) is defined as \(\varvec{p}/|\varvec{p}|\) to represent a normal field, it is reasonable to assume that \(\varvec{p}\) does not depend on \(\varvec{n}\). Moreover, this strategy is crucial: when \(b=0\), a closed iterative loop is formed by the subproblems for \(\varvec{p}^{k+1}\) and \(u^{k+1}\), which exactly matches the augmented Lagrangian method for solving the adaptive total variation model.
At this point, we write the pseudo-code of the alternating iterative algorithm in Algorithm 2. From now on, we will name our method as RPALM (restricted proximal augmented Lagrangian method) for brevity.
4.3 The Solutions of Minimization Subproblems
In this section, we show how to solve the subproblems in each iteration of Algorithm 2. For ease of description, we first state some notation in the discrete setting.
Let \({ \Omega }=[1,N]\times [1,M]\) be a set of \(N\times M\) points in \(\mathbb {R}^{2}\), denote the Euclidean space \(\mathbb {R}^{N\times M}\) as V and \(Q=V\times V\). For \(u\in V\) and \(\mathbf{p} \in Q\), given \((i,j)\in [1,N]\times [1,M]\), then \(u(i,j)\in \mathbb {R}\) and \(\mathbf{p} (i,j)=(p_{1}(i,j),p_{2}(i,j))\in \mathbb {R}^{2}\). The standard Euclidean inner products can be defined as
Using periodic boundary condition, the discrete forward and backward difference operators for \(u\in V\) are defined as follows
Furthermore, the discrete gradient operator \(\nabla :V\rightarrow Q\) can be defined as follows
and the negative adjoint, or divergence operator, \(\text {div}:Q\rightarrow V\) of \(\nabla \) is given by
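Under the periodic boundary convention above, these discrete operators can be sketched as follows (the list-of-lists layout is our own choice):

```python
def grad(u):
    """Forward differences with periodic boundary: returns (u_x, u_y)."""
    N, M = len(u), len(u[0])
    ux = [[u[(i + 1) % N][j] - u[i][j] for j in range(M)] for i in range(N)]
    uy = [[u[i][(j + 1) % M] - u[i][j] for j in range(M)] for i in range(N)]
    return ux, uy

def div(p1, p2):
    """Backward-difference divergence, the negative adjoint of grad, so that
    <grad u, p> = -<u, div p> under the periodic inner products."""
    N, M = len(p1), len(p1[0])
    return [[p1[i][j] - p1[(i - 1) % N][j] + p2[i][j] - p2[i][(j - 1) % M]
             for j in range(M)] for i in range(N)]
```

The defining adjoint identity \(\langle \nabla u,\mathbf{p}\rangle =-\langle u,\text{div}\,\mathbf{p}\rangle \) can be checked numerically on small arrays.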
4.3.1 Solution for v-Subproblem
The objective problem (16)
is a convex nonlinear functional with respect to v, so Newton's method with second-order convergence can be used. The first and second variations with respect to v are
thus
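Pointwise, each pixel then reduces to a scalar root-finding problem. A generic Newton iteration is sketched below; F and dF stand in for the first- and second-variation expressions above, which we do not restate here.

```python
def newton_scalar(F, dF, v0, tol=1e-12, max_iter=50):
    """Solve F(v) = 0 by Newton's method, assuming dF is the derivative of F
    and the starting point v0 lies in the basin of attraction."""
    v = v0
    for _ in range(max_iter):
        step = F(v) / dF(v)
        v -= step
        if abs(step) < tol:
            return v
    return v
```

For instance, applied to \(F(v)=v^{2}-2\) from \(v_{0}=1\), it converges to \(\sqrt{2}\) in a handful of iterations, illustrating the quadratic convergence that motivates its use for the v-subproblem.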
4.3.2 Solution for u-Subproblem
For the \(u\)-subproblem (17)
whose optimality condition yields a linear equation
which can be efficiently solved by fast Fourier techniques. The updated \(u^{k+1}\) can be expressed explicitly as
where \(\mathcal {F}\) and \(\mathcal {F}^{-1}\) mean fast Fourier transform (FFT) and its inverse transform respectively.
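The key point is that, with periodic boundary conditions, constant-coefficient operators are diagonal in Fourier space. A toy 1D version of this technique, solving a Helmholtz-type equation \((aI-\Delta )u=g\), is sketched below; the pure-Python DFT is for illustration only and is far slower than a real FFT, and the concrete operator differs from the full equation above.

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (O(n^2), illustration only)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * m * k / n) for k in range(n))
            for m in range(n)]

def idft(X):
    """Inverse DFT with the 1/n normalization."""
    n = len(X)
    return [sum(X[m] * cmath.exp(2j * cmath.pi * m * k / n) for m in range(n)) / n
            for k in range(n)]

def solve_periodic(g, a):
    """Solve (a*I - Lap) u = g with periodic BC: the discrete Laplacian has
    Fourier symbol 2*cos(2*pi*m/n) - 2, so the system is solved by one
    pointwise division in frequency space."""
    n = len(g)
    G = dft(g)
    U = [G[m] / (a - (2.0 * math.cos(2.0 * math.pi * m / n) - 2.0))
         for m in range(n)]
    return [v.real for v in idft(U)]
```

Applying the operator to a known u, then solving, recovers u up to round-off, which is exactly how the FFT-based u-update works in each RPALM iteration.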
4.3.3 Solution for \(\mathbf{p} \)-Subproblem
It is easy to see that the \(\varvec{p}\)-subproblem (18) can be recast as
We use the soft thresholding operator defined in [42] to obtain the closed-form formula for the minimizer
where
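For a 2-vector field, the closed-form minimizer is the generalized (vectorial) shrinkage; a pointwise illustrative version (names ours, applied here to a generic quadratic-plus-norm objective of the shape above) is:

```python
import math

def vector_shrink(q1, q2, mu):
    """Generalized soft thresholding for a 2-vector q:
    argmin_p mu*|p| + 0.5*|p - q|^2 = max(|q| - mu, 0) * q/|q|,
    with the convention that the result is 0 when |q| <= mu."""
    norm = math.hypot(q1, q2)
    if norm <= mu:
        return (0.0, 0.0)
    s = 1.0 - mu / norm
    return (s * q1, s * q2)
```

Vectors shorter than the threshold are annihilated, which is what produces exactly sparse gradients in TV-type subproblems.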
4.3.4 Solution for \(\mathbf{n} \)-Subproblem
For the \(\mathbf{n} \)-subproblem, we actually solve
and then project the minimizer onto the unit ball. Here, we approximate \(\mathbf{n} \) in the second term by \(\mathbf{n} ^{k}\), and the quadratic term \(\int _{ \Omega }\frac{r_{4}}{2}(h^{k}-\text {div}{} \mathbf{n} )^{2}\mathrm{d}x\) is linearized following [43, 44], i.e.
where \(\delta >0\) is a constant. Hence, the corresponding Euler–Lagrange equation can be simplified as
which yields a closed-form solution \(\mathbf{n} ^{k+1}\). Alternatively, the Euler–Lagrange equation of (24) can be solved directly, as it is just a system of linear coupled PDEs. Nevertheless, due to the singularity of the operator \(\nabla (\text {div})\), the FFT technique cannot be applied directly; to overcome this difficulty, a quadratic penalty term \(\frac{\mu }{2}\int _{ \Omega }|\mathbf{n} -\mathbf{n} ^{k}|^{2}\mathrm{d}x\) should be added to (24). Finally, we project \(\mathbf{n} ^{k+1}\) onto the unit circle, which gives
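The final projection onto the unit ball is the simple radial map, sketched pointwise below (function name ours):

```python
import math

def project_unit_ball(n1, n2):
    """Project a 2-vector onto the closed unit disk: leave it unchanged if
    |n| <= 1, otherwise rescale it to unit length."""
    norm = math.hypot(n1, n2)
    return (n1, n2) if norm <= 1.0 else (n1 / norm, n2 / norm)
```

This enforces the constraint \(|\mathbf{n} |\le 1\) exactly at every pixel after each \(\mathbf{n} \)-update.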
4.3.5 Solution for h-Subproblem
As for the h-subproblem
we can easily derive the following update formula from its optimality condition
5 Results and Discussion
In this paper, we tested several classical images, such as Pepper and Cone (see Fig. 4). Quantitatively, the performance of the algorithms is evaluated using two quality metrics, the Peak Signal to Noise Ratio (PSNR) [45] and the Structural SIMilarity (SSIM) [46]. PSNR is defined as
where u and \(\hat{u}\) denote the original image and the recovered image respectively, M and N are the size of image. While SSIM is defined as
Here \(\mu _{u},\mu _{\hat{u}}\) are the mean values of images u and \(\hat{u}\), \(\sigma _{u},\sigma _{\hat{u}}\) represent the variances of \(u,\hat{u}\), and \(\sigma _{u\hat{u}}\) is the covariance of u and \(\hat{u}\). Note that larger PSNR and SSIM values indicate a better restoration. For each experiment, the parameters are selected to obtain the maximum PSNR and SSIM. For convenience, the two algorithms for our model are denoted AOS and RPALM. All experiments with both algorithms are performed under Ubuntu 16.04 and Python 3.6 on a desktop computer with an Intel Core (TM) i7-9700K CPU at 3.60GHz and 8GB of RAM.
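For reference, PSNR on 8-bit images can be computed as below; SSIM requires windowed local statistics and is omitted, and the list-of-lists image format is our own convention.

```python
import math

def psnr(u, u_hat, peak=255.0):
    """PSNR = 10*log10(peak^2 / MSE) between the original image u and the
    restored image u_hat, both given as lists of rows of equal size."""
    mse = sum((a - b) ** 2 for ra, rb in zip(u, u_hat) for a, b in zip(ra, rb))
    mse /= (len(u) * len(u[0]))
    return 10.0 * math.log10(peak ** 2 / mse)
```

For two images differing by a constant offset of 16 gray levels, the MSE is 256 and the PSNR is about 24.05 dB.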
In our model, \(\alpha \) and b are the coefficients of the adaptive Euler’s elastica energy. The ratio \(\alpha /b\) represents the relative importance of the total length versus total squared curvature. For \(b=0\), the minimization problem (6) degenerates into an adaptive total variation multiplicative denoising model. The balance parameter \(\eta \) is the coefficient of the fidelity term. In the case of denoising, this parameter generally does not need to be too large to obtain a cleaner restored image.
5.1 Discussion of Parameters in AOS Algorithm
In the AOS algorithm, in addition to the model parameters \(\sigma ,p,b,\eta \), the time step \(\tau \) is also introduced. Theorem 1 establishes the absolute stability of the semi-implicit scheme. Although a larger \(\tau \) yields a faster convergence rate, we still need to choose \(\tau \) appropriately to balance accuracy and efficiency. Figure 5 shows results for different time steps \(\tau \) with the remaining parameters fixed. We find that Fig. 5b and c look visually blurry, while d and e have good visual quality. However, the number of iterations at \(\tau =0.001\) was about eight times that at \(\tau =0.01\). Therefore, \(\tau =0.01\) is an appropriate choice in this group of experiments.
5.2 Discussion of Parameters in RPALM Algorithm
In the RPALM algorithm, in addition to the inherent parameters of the model, due to constraints and linearization techniques, six parameters \(\tau , r_{1}, r_{2}, r_{3}, r_{4}\) and \(\delta \) are also introduced. These optimal parameters usually need to be tuned manually, which is quite a troublesome task. However, it is worth noting that in the process of using the operator splitting technique to transform the original model into a constrained optimization problem, the most critical constraint is \(\mathbf{p} =\nabla u\). Moreover, the penalty term \(\frac{\tau }{2}\Vert u-u^{k}\Vert ^{2}_{S}\) in the u-subproblem also plays an important role in enhancing the robustness of the algorithm. Therefore, according to the above analysis and some experimental tests, we divide all the parameters into two categories. The first category contains high-sensitive parameters which are \(\tau \) and \(r_{2}\), their values may directly affect the convergence of the algorithm. The second category contains low-sensitive parameters which are \(r_{1}, r_{3}, r_{4}, \delta \), the change of PSNR caused by their perturbation is small.
Moreover, the experiment on Pillsetc in Fig. 6 shows that, with appropriate parameters, after sufficiently many iterations of the RPALM algorithm the auxiliary variable \(\mathbf{p} \) is close to \(\nabla u\) and v is close to u, which verifies that the constraint (14) is satisfied and that the RPALM algorithm is well founded.
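A check of this kind can be sketched as follows, using relative residuals of the splitting constraints. The forward-difference gradient with a replicate boundary is an assumption for illustration; the paper's exact discretization may differ:

```python
import numpy as np

def forward_grad(u):
    """Forward-difference gradient with replicate (Neumann) boundary."""
    gx = np.vstack([u[1:, :] - u[:-1, :], np.zeros((1, u.shape[1]))])
    gy = np.hstack([u[:, 1:] - u[:, :-1], np.zeros((u.shape[0], 1))])
    return gx, gy

def constraint_residuals(u, v, px, py):
    """Relative residuals for the constraints p = grad(u) and v = u."""
    gx, gy = forward_grad(u)
    r_p = np.sqrt(np.sum((px - gx)**2 + (py - gy)**2)) \
        / (np.sqrt(np.sum(gx**2 + gy**2)) + 1e-12)
    r_v = np.linalg.norm(v - u) / (np.linalg.norm(u) + 1e-12)
    return r_p, r_v
```

Plotting `r_p` and `r_v` against the iteration count gives curves like those in Fig. 6: both residuals should decay toward zero when the penalty parameters are chosen appropriately.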
5.3 Efficiency of Solving the Higher-Order Model
In Sect. 3.2, we have seen that the traditional gradient descent method for solving the second-order functional minimization model is very time-consuming, so it is necessary to develop more efficient numerical algorithms. Addressing this challenge, this paper proposes two algorithms, AOS and RPALM, in Sects. 3.3 and 4 respectively. Here, we use the same test images as in Fig. 3, the noisy Cameraman (\(L=4\)) and the noisy Hourglass (\(L=10\)), to show the significant improvement in the computational efficiency of AOS and RPALM. The parameter selection and performance indicators (including PSNR, number of iterations and running CPU-time) are listed in Table 3. Note that in Table 2, the traditional gradient descent method is strictly limited by the time step and takes tens of thousands of iterations to process an image, while the algorithms in this paper achieve the optimal restoration within a few hundred iterations at most, improving the computational efficiency by nearly a factor of 300.
It is noteworthy that the AOS algorithm converges for any time step \(\tau \); if a larger \(\tau \) is set, the noise is removed faster, but at the cost of calculation accuracy. In addition, in Fig. 7e, f, k, l, we plot the curves of numerical energy (i.e. the value of the functional in (6)) against the number of iterations. One can easily see that the two algorithms proposed for the multiplicative noise removal model based on adaptive Euler’s elastica are both convergent. Furthermore, the RPALM algorithm drives the functional energy below a certain threshold quickly, after which the energy decreases very slowly.
5.4 Denoising Performance
In this subsection, we provide numerous numerical results to illustrate the image restoration capability of our method. The results are compared with those obtained by the following state-of-the-art algorithms:
- The AA model based on TV regularization proposed by Aubert and Aujol [14];
- The HNZ model proposed by Huang et al. [7], derived from a logarithmic transformation and TV regularization;
- The ZWN model proposed by Zhao et al. [10], which involves the TV regularization term, a term based on the variance of the inverse of the noise, and the fidelity term;
- The DD model based on adaptive diffusion proposed by Zhou et al. [11];
- The FD model based on a tight frame proposed by Wang et al. [12];
- The MuLoG+BM3D algorithm proposed by Deledalle et al. [13], which integrates a multichannel logarithm with block matching and 3-D filtering.
Among them, the AA model is solved by the gradient descent method; the HNZ, ZWN and FD models are solved by alternating minimization; and the DD model is solved by the fast explicit diffusion algorithm. In the experiments, the parameters of each method are selected to obtain the maximum PSNR. For the MuLoG+BM3D algorithm, the free parameters are set as suggested in [13]. For the AOS and RPALM algorithms developed in this paper, the parameter selections are tabulated in Tables 4 and 5, respectively.
Next, we report the numerical experiments on multiplicative noise removal for the original test images in Fig. 4e–i. All test images are corrupted by multiplicative noise with \(L\in \left\{ 1,4,10\right\} \), and the corresponding results are depicted in Figs. 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21 and 22. In particular, for the ZWN and FD models, only the cases with noise levels \(L=4\) and \(L=10\) are recorded, because the expectation of the inverse of the noise is bounded only when \(L>1\).
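Simulating the degradation described in the introduction is straightforward: the noise \(\eta \) is drawn from a Gamma distribution with shape L and scale 1/L, so that it has unit mean and variance 1/L, and is multiplied into the clean image componentwise. A minimal sketch:

```python
import numpy as np

def add_speckle(u, L, seed=0):
    """Corrupt image u with L-look multiplicative Gamma noise (unit mean)."""
    rng = np.random.default_rng(seed)
    # Gamma(shape=L, scale=1/L): mean 1, variance 1/L -- smaller L means heavier noise
    eta = rng.gamma(shape=L, scale=1.0 / L, size=u.shape)
    return u * eta
```

For \(L=1\) the distribution is exponential with a heavy right tail, which produces the unusually bright outliers visible in the noisy images; increasing L concentrates \(\eta \) around 1.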
For a quantitative comparison of performance, we list the PSNR and SSIM values of the restored results in Table 6, where the best results are shown in bold face. The table shows that both the AOS and RPALM algorithms are competitive with the other algorithms.
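For reference, the PSNR values reported here follow the standard definition, sketched below for 8-bit images (the peak value 255 is an assumption about the dynamic range; SSIM [47] involves local means, variances and covariances, and a ready implementation is available as `structural_similarity` in scikit-image):

```python
import numpy as np

def psnr(u, ref, peak=255.0):
    """Peak signal-to-noise ratio in dB between a restored image and a reference."""
    mse = np.mean((u.astype(float) - ref.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

Higher PSNR indicates a smaller pixelwise error, while SSIM complements it by measuring how well local structure is preserved.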
As illustrated in Figs. 8, 9 and 10, when the image is multiplied by noise that obeys a Gamma distribution with a heavy right tail, many unusually bright outliers appear in the noisy image. In the denoised results of the AA, HNZ, ZWN and FD algorithms, many more white spots remain than in those of the other algorithms. Moreover, the contrast of the details for these four reference algorithms is noticeably reduced due to oversmoothing during noise removal; observe the wrinkled areas in the brain computed tomography images. The DD model and our model adopt an adaptive smoothing strategy that significantly alleviates these problems; they maintain the image contrast better than the methods based on standard TV regularization. As a non-local method, the MuLoG+BM3D algorithm performs well in contrast preservation and noise removal.
As shown in Figs. 11, 12 and 13, a significant staircase effect can be seen in the results of the TV-based methods. Besides, the original Cone image does not appear to contain two similar image blocks, which degrades the performance of the MuLoG+BM3D method. Intuitively, Fig. 11i–m demonstrates that our model can better fit the “inclined plane”.
In addition, we present more tests in Figs. 14, 15, 16, 17, 18, 19, 20, 21 and 22. All the algorithms obtain reliable results. Although the MuLoG+BM3D algorithm recovers fine structures very well, artifacts are inevitably introduced by the stacking of image blocks in the BM3D method [47]; see Figs. 14f and 17f. Furthermore, the restored images obtained by our model have smoother backgrounds than the results of the DD model; see Fig. 17e, g and h. This is because the curvature-driven regularization term maintains rounded curves and provides stronger regularity in uniform regions than the lower-order regularization term. In Figs. 17, 18, 19, 20, 21 and 22, whether it is the gray contrast between the chess pieces in the image ColoredChips or the pothole information in the lower right of the image Moon, our algorithms protect it better.
Comparing the denoised results with the test images, we can clearly see that our method reduces artifacts, preserves tiny geometrical structures, and suppresses the noise successfully. Also, as presented in Table 6, the proposed model delivers competitive results in terms of PSNR value and structure preservation.
6 Conclusion
In this paper, we study the restoration of images polluted by multiplicative Gamma noise. Based on the gray level indicator, an adaptive Euler’s elastica regularization model is proposed to deal with the problem. The model is solved in the spatial domain by two algorithms: a fast scheme based on the numerical solution of a PDE, and an optimization method. Both are effective and efficient.
Experiments conducted on both synthetic and real images indicate the effectiveness and efficiency of the proposed method. Visual inspection shows that the proposed model performs well in protecting tiny structures, maintaining contrast, and restoring a smooth image background. In addition, the quantitative measurements of PSNR and SSIM indicate that the proposed method possesses a more remarkable capacity for noise reduction than some other classical methods.
References
Bioucas-Dias, J.M., Figueiredo, M.A.T.: Multiplicative noise removal using variable splitting and constrained optimization. IEEE Trans. Image Process. 19, 1720–1730 (2010)
Goodman, J.W.: Some fundamental properties of speckle. J. Opt. Soc. Am. 66, 1145–1150 (1976)
Tur, M., Chin, K.C., Goodman, J.W.: When is speckle noise multiplicative? Appl. Opt. 21, 1157 (1982)
Wagner, R.F., Smith, S.W., Sandrik, J.M., Lopez, H.: Statistics of speckle in ultrasound B-scans. IEEE Trans. Sonics Ultrasonics 30, 156–163 (1983)
Bamler, R.: Principles of synthetic aperture radar. Surv. Geophys. 21, 147–157 (2000)
Parrilli, S., Poderico, M., Angelino, C.V., Verdoliva, L.: A nonlocal SAR image denoising algorithm based on LLMMSE wavelet shrinkage. IEEE Trans. Geosci. Remote Sens. 50, 606–616 (2012)
Huang, Y.M., Ng, M.K., Wen, Y.W.: A new total variation method for multiplicative noise removal. SIAM J. Imaging Sci. 2, 20–40 (2009)
Deledalle, C.A., Denis, L., Tupin, F.: Iterative weighted maximum likelihood denoising with probabilistic patch-based weights. IEEE Trans. Image Process. 18, 1–12 (2009)
Dong, Y., Zeng, T.: A convex variational model for restoring blurred images with multiplicative noise. SIAM J. Imaging Sci. 6, 1598–1625 (2013)
Zhao, X.L., Wang, F., Ng, M.K.: A new convex optimization model for multiplicative noise and blur removal. SIAM J. Imaging Sci. 7, 456–475 (2014)
Zhou, Z., Guo, Z., Dong, G., Sun, J., Zhang, D.: A doubly degenerate diffusion model based on the gray level indicator for multiplicative noise removal. IEEE Trans. Image Process. 24, 249–260 (2015)
Wang, F., Zhao, X., Ng, M.K.: Multiplicative noise and blur removal by framelet decomposition and \(l_{1}\)-based L-curve method. IEEE Trans. Image Process. 25, 4222–4232 (2016)
Deledalle, C., Denis, L., Tabti, S., Tupin, F.: MuLoG, or how to apply Gaussian denoisers to multi-channel SAR speckle reduction? IEEE Trans. Image Process. 26, 4389–4403 (2017)
Aubert, G., Aujol, J.F.: A variational approach to removing multiplicative noise. SIAM J. Appl. Math. 68, 925–946 (2008)
Chen, L., Liu, X., Wang, X., Zhu, P.: Multiplicative noise removal via nonlocal adaptive dictionary. J. Math. Imaging Vis. 54, 199–215 (2017)
Foucart, S., Rauhut, H.: An Invitation to Compressive Sensing. Springer, New York (2013)
Zhang, J., Chen, K.: A total fractional-order variation model for image restoration with nonhomogeneous boundary conditions and its numerical solution. SIAM J. Imaging Sci. 8, 1–26 (2015)
Wali, S., Zhang, H., Chang, H., Wu, C.: A new adaptive boosting total generalized variation (TGV) technique for image denoising and inpainting. J. Vis. Commun. Image Represent. 59, 39–51 (2019)
Strong, D.M., Chan, T.F.: Spatially and scale adaptive total variation based regularization and anisotropic diffusion in image processing. UCLA Math Department CAM Report (1996)
Dong, G., Guo, Z., Wu, B.: A convex adaptive total variation model based on the gray level indicator for multiplicative noise removal. Abstr. Appl. Anal. 2013, 1–21 (2013)
Perona, P., Malik, J.: Scale-space and edge detection using anisotropic diffusion. IEEE Trans. Pattern Anal. Mach. Intell. 12, 629–639 (1990)
Yao, W., Guo, Z., Sun, J., Wu, B., Gao, H.: Multiplicative noise removal for texture images based on adaptive anisotropic fractional diffusion equations. SIAM J. Imaging Sci. 12, 839–873 (2019)
Shen, J., Kang, S.H., Chan, T.F.: Euler’s elastica and curvature-based inpainting. SIAM J. Appl. Math. 63, 564–592 (2003)
Guillemot, C., Meur, O.L.: Image inpainting: overview and recent advances. IEEE Signal Process. Mag. 31, 127–144 (2013)
Zhu, W., Tai, X.C., Chan, T.: Image segmentation using Euler’s elastica as the regularization. J. Sci. Comput. 57, 414–438 (2013)
Bae, E., Tai, X.C., Zhu, W.: Augmented Lagrangian method for an Euler’s elastica based segmentation model that promotes convex contours. Inverse Probl. Imaging 11, 1–23 (2017)
Bae, E., Shi, J., Tai, X.C.: Graph cuts for curvature based image denoising. IEEE Trans. Image Process. 20, 1199–1210 (2011)
Wali, S., Shakoor, A., Basit, A., Xie, L., Huang, C., Li, C.: An efficient method for Euler’s elastica based image deconvolution. IEEE Access 7, 61226–61239 (2019)
Shi, K., Guo, Z.: On the existence of weak solutions for a curvature driven elliptic system applied to image inpainting. Appl. Math. Lett. 99, 106003 (2019)
Wu, C., Tai, X.-C.: Augmented Lagrangian method, dual methods, and split Bregman iteration for ROF, vectorial TV, and high order models. SIAM J. Imaging Sci. 3, 300–339 (2010)
Goldstein, T., Osher, S.: The split Bregman method for L1-regularized problems. SIAM J. Imaging Sci. 2, 1–21 (2009)
Dhingra, N.K., Khong, S.Z., Jovanović, M.R.: The proximal augmented Lagrangian method for nonsmooth composite optimization. IEEE Trans. Autom. Control 64, 2861–2868 (2019)
Tai, X.C., Hahn, J., Chung, G.J.: A fast algorithm for Euler’s elastica model using augmented Lagrangian method. SIAM J. Imaging Sci. 4, 313–344 (2011)
Zhang, J., Chen, R., Deng, C., Wang, S.: Fast linearized augmented Lagrangian method for Euler’s elastica model. Numer. Math. Theory Methods Appl. 10, 98–115 (2017)
Majee, S., Ray, R.K., Majee, A.K.: A gray level indicator-based regularized telegraph diffusion model: application to image despeckling. SIAM J. Imaging Sci. 13, 844–870 (2020)
Poynton, C.: Digital Video and HD: Algorithms and Interfaces. Morgan Kaufmann, San Francisco (2003)
Weickert, J., Romeny, B.M.T.H., Viergever, M.A.: Efficient and reliable schemes for nonlinear diffusion filtering. IEEE Trans. Image Process. 7, 398–410 (1998)
Ballester, C., Bertalmio, M., Caselles, V., Sapiro, G., Verdera, J.: Filling-in by joint interpolation of vector fields and gray levels. IEEE Trans. Image Process. 10, 1200–1211 (2001)
Parikh, N., Boyd, S.: Proximal Algorithms. Now Publishers Inc., Hanover (2014)
Hao, X.H., Wang, J.H., Meng, F.Y., Pang, L.P.: An adaptive fixed-point proximity algorithm for solving total variation denoising models. Inf. Sci. 402, 69–81 (2017)
Chan, T.F., Esedoglu, S.: Aspects of total variation regularized L1 function approximation. SIAM J. Appl. Math. 65, 1817–1837 (2005)
Goldstein, T., Osher, S.: The split Bregman method for L1-regularized problems. SIAM J. Imaging Sci. 2, 1–21 (2009)
Chen, D.Q., Zhou, Y.: Multiplicative denoising based on linearized alternating direction method using discrepancy function constraint. J. Sci. Comput. 60, 483–504 (2014)
Jeong, T., Woo, H., Yun, S.: Frame-based Poisson image restoration using a proximal linearized alternating direction method. Inverse Probl. 29, 075007 (2013)
Durand, S., Fadili, J., Nikolova, M.: Multiplicative noise removal using L1 fidelity on frame coefficients. J. Math. Imaging Vis. 36, 201–226 (2010)
Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13, 1–14 (2004)
Liu, J., Osher, S.: Block matching local SVD operator based sparsity and TV regularization for image denoising. J. Sci. Comput. 78, 1–18 (2019)
Funding
This work is supported by the National Natural Science Foundation of China (Grant Nos. 71773024, 12171123, 11871133, 11971131, U1637208, 61873071, 51476047), the Natural Science Foundation of Heilongjiang Province of China (Grant Nos. G2018006, LC2018001), the Heilongjiang Postdoctoral Scientific Research Developmental Fund (LBH-Q18064), and the Guangdong Basic and Applied Basic Research Foundation (2020B1515310010).
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of interest
All authors certify that they have no affiliations with or involvement in any organization or entity with any financial interest or non-financial interest in the subject matter or materials discussed in this manuscript.
Availability of Data and Material
The datasets generated during and analysed during the current study are available from the corresponding author on reasonable request.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
The second author is supported by “the National Natural Science Foundation of China” (Grant No. 71773024), “the Natural Science Foundation of Heilongjiang Province of China” (Grant No. G2018006), “the Heilongjiang Postdoctoral Scientific Research Developmental Fund” (LBHQ18064). The third author is supported by “the National Natural Science Foundation of China” (Grant Nos. 12171123, 11971131, 11871133), “the Natural Science Foundation of Heilongjiang Province of China” (Grant No. LC2018001), “Guangdong Basic and Applied Basic Research Foundation” (2020B1515310010). The last author is supported by “the National Natural Science Foundation of China” (Grant Nos. 11971131, U1637208, 61873071, 51476047), “Guangdong Basic and Applied Basic Research Foundation” (2020B1515310010)
Rights and permissions
About this article
Cite this article
Zhang, Y., Li, S., Guo, Z. et al. Image Multiplicative Denoising Using Adaptive Euler’s Elastica as the Regularization. J Sci Comput 90, 69 (2022). https://doi.org/10.1007/s10915-021-01721-7