1 Introduction

Digital image completion, or inpainting, fills in or replaces corrupted and missing regions of an image using knowledge from the known regions, such that an unbiased viewer would not notice the transitions. Image inpainting models have diverse applications, such as restoration of damaged photographs, removal of superimposed text and objects, image compression, and coding.

Image inpainting approaches are mainly divided into the following groups: exemplar-based image inpainting [7, 11], diffusion-based image inpainting [3, 4, 6, 8, 9, 12–14, 18–20, 23], and hybrid image inpainting [5]. Exemplar-based image inpainting repeatedly synthesizes the unknown area from the most similar patches in the known area. An influential exemplar-based inpainting approach was developed by Criminisi et al. [11], and many innovations improving the speed and efficacy of Criminisi's proposal have been reported over the last 12 years [7].

Diffusion-based inpainting refers to completion techniques that exploit neighborhood information around the damaged regions to estimate isophotes (level lines of constant gray value) and diffuse information from the known region into the missing region by propagation. It relies on partial differential equation (PDE)-based and variational restoration methods, and the diffusion follows the isophote directions to perform the reconstruction.

The first PDE-based image inpainting model was introduced by Bertalmio et al. [4]. It is easy to implement, but the results are not always stable and are often blurred. The first variational model for image completion was introduced by Masnou and Morel [14]. It is a very simple method, but the angles of the level curves are not preserved. A well-known variational image inpainting model was proposed by Shen and Chan [18]. This technique fills in the damaged regions by minimizing the total variation norm while keeping the source regions approximately equal to the ground truth image. It adopts an Euler–Lagrange (EL) equation and nonlinear propagation into the missing regions that depends on the strength of the isophotes. However, it introduces unintended effects such as staircase artifacts and edge blurring and also fails to satisfy the connectivity principle.

Recently, researchers have integrated geometric features of images into image inpainting. The curvature of the image level curves [8], the mean curvature (MC) [9], and the Gaussian curvature (GC) [12] of the image surface are the most commonly used local geometric attributes, although these methods have difficulties in preserving image contrast and edges and in eliminating staircase effects. In particular, diffusion-based high-order models often lead to an over-smoothing effect, that is, they blur edges.

Thus, in order to minimize these unintended effects and achieve a good trade-off between image restoration and edge preservation, fractional-order PDEs have been investigated in the field of computer vision. The fractional-order derivative [16] plays a major role in digital image processing [24]. It is the generalized form of the integer-order derivative, and its definition is not unique: definitions by Riemann–Liouville, Grünwald–Letnikov, and Caputo are all in common use. In this work, however, the fractional-order derivative is computed using the DFT because it is very easy to implement. It exhibits a nonlocal property, as the fractional-order derivative at a pixel depends on the whole image and not just on the neighboring pixel values, which is very useful for edge preservation and enhancement of the image.

In this article, two techniques are combined: a difference curvature (DC) [10]-driven fractional-order nonlinear diffusion model to fill the gaps, and a fractional-order variational model to denoise and deblur the image. Ramps and edges can be distinguished more effectively using DC than using MC or GC. The fractional-order derivative is capable of dealing with image edges and achieves a good trade-off between preserving edges and discarding the staircase effect. Near edges, DC is large, so the fractional gradient flow is reduced and edges are retained; far from the edges (in smooth and ramp regions), DC is small, so the fractional gradient flow is increased, which restrains the staircase effect. Hence, the proposed model can improve the performance of image inpainting while effectively differentiating between edges and ramps.

The rest of this article is organized as follows. The fractional-order derivative and the difference curvature function are concisely described in Sect. 2. The fractional-order nonlinear diffusion model driven by adaptive difference curvature for image restoration is presented in Sect. 3. Simulation results and comparisons with several baseline methods for various applications are reported in Sect. 4. Conclusions are presented in Sect. 5.

2 Methodology

2.1 Fractional-Order Derivative Using DFT

The fractional-order derivative [16] is a generalization of the integer-order derivative. Many mathematicians have introduced their own definitions; the Riemann–Liouville, Grünwald–Letnikov, and Caputo definitions are the most popular. In this paper, however, the fractional-order derivative is defined using the DFT [2] because it is simple to implement. The 2D DFT of an image \( u\left( {x,y} \right) \) of size \( M \times M \) is

$$ \hat{u}\left( {\omega_{1} ,\omega_{2} } \right) = \frac{1}{{M^{2} }}\mathop \sum \limits_{x,y = 0}^{M - 1} u\left( {x,y} \right){\text{e}}^{{ - \frac{{j2\pi \left( {\omega_{1} x + \omega_{2} y} \right)}}{M}}} . $$
(1)

For the 2D DFT, the shifting property in the spatial domain can be written as

$$ u\left( {x - x_{0} ,y - y_{0} } \right)\mathop \leftrightarrow \limits^{F} {\text{e}}^{{ - \frac{{j2\pi \left( {\omega_{1} x_{0} + \omega_{2} y_{0} } \right)}}{M}}} \hat{u}\left( {\omega_{1} ,\omega_{2} } \right) $$
(2)

where F denotes the 2D DFT. Thus, the first-order partial difference in the x-direction can be written as

$$ D_{x} u\left( {x,y} \right) = u\left( {x,y} \right) - u\left( {x - 1,y} \right) $$
(3)
$$ D_{x} u\left( {x,y} \right)\mathop \leftrightarrow \limits^{F} \left( {1 - {\text{e}}^{{ - \frac{{j2\pi \omega_{1} }}{M}}} } \right)\hat{u}\left( {\omega_{1} ,\omega_{2} } \right) $$
(4)

and the DFT of the fractional-order partial difference in the x-direction can be written as

$$ D_{x}^{\alpha } u\left( {x,y} \right)\mathop \leftrightarrow \limits^{F} \left( {1 - {\text{e}}^{{ - \frac{{j2\pi \omega_{1} }}{M}}} } \right)^{\alpha } \hat{u}\left( {\omega_{1} ,\omega_{2} } \right). $$
(5)

Similarly, the DFT of the fractional-order partial difference in the \( y \)-direction can be written as

$$ D_{y}^{\alpha } u\left( {x,y} \right)\mathop \leftrightarrow \limits^{F} \left( {1 - {\text{e}}^{{ - \frac{{j2\pi \omega_{2} }}{M}}} } \right)^{\alpha } \hat{u}\left( {\omega_{1} ,\omega_{2} } \right). $$
(6)

In general, the fractional-order derivative operator of a 2D signal can be written as

$$ D^{\alpha } u\left( {x,y} \right) = \left( {D_{x}^{\alpha } u\left( {x,y} \right), D_{y}^{\alpha } u\left( {x,y} \right)} \right) $$
(7)

and

$$ \left| {D^{\alpha } u\left( {x,y} \right)} \right| = \sqrt {\left( {D_{x}^{\alpha } u\left( {x,y} \right)} \right)^{2} + \left( {D_{y}^{\alpha } u\left( {x,y} \right)} \right)^{2} } . $$
(8)

In practical computations, the fractional-order difference is calculated with a central difference scheme, which is equivalent to shifting \( D_{x}^{\alpha } u\left( {x,y} \right) \) and \( D_{y}^{\alpha } u\left( {x,y} \right) \) by \( \alpha /2 \) units:

$$ \tilde{D}_{x}^{\alpha } u\left( {x,y} \right) = F^{ - 1} \left( {\left( {1 - {\text{e}}^{{ - \frac{{j2\pi \omega_{1} }}{M}}} } \right)^{\alpha } {\text{e}}^{{\frac{{j\pi \alpha \omega_{1} }}{M}}} \hat{u}\left( {\omega_{1} ,\omega_{2} } \right)} \right) $$
(9)
$$ \tilde{D}_{y}^{\alpha } u\left( {x,y} \right) = F^{ - 1} \left( {\left( {1 - {\text{e}}^{{ - \frac{{j2\pi \omega_{2} }}{M}}} } \right)^{\alpha } {\text{e}}^{{\frac{{j\pi \alpha \omega_{2} }}{M}}} \hat{u}\left( {\omega_{1} ,\omega_{2} } \right)} \right) $$
(10)

where \( F^{ - 1} \) is the 2D inverse discrete Fourier transform (IDFT). The operator \( \tilde{D}_{x}^{\alpha } \) has the form \( \left[ {F^{ - 1} } \right]\left[ {K_{1} } \right]\left[ F \right] \), where [.] is a matrix operator and

$$ K_{1} = {\text{diag}}\left( {\left( {1 - {\text{e}}^{{ - \frac{{j2\pi \omega_{1} }}{M}}} } \right)^{\alpha } {\text{e}}^{{\frac{{j\pi \alpha \omega_{1} }}{M}}} } \right). $$
(11)

The adjoint operator \( \tilde{D}_{x}^{{\alpha^{*} }} \) of \( \tilde{D}_{x}^{\alpha } \) can be computed as follows:

$$ \tilde{D}_{x}^{{\alpha^{ *} }} = \left( {\left[ {F^{ - 1} } \right]\left[ {K_{1} } \right]\left[ F \right]} \right)^{ *} = \left[ {F^{ - 1} } \right]^{ *} \left[ {K_{1}^{ *} } \right]\left[ F \right]^{ *} = \left[ F \right]\left[ {K_{1}^{ *} } \right]\left[ {F^{ - 1} } \right]. $$
(12)

Since \( K_{1} \) is a purely diagonal operator, \( K_{1}^{*} \) is simply the complex conjugate of \( K_{1} \). The same procedure yields \( \tilde{D}_{y}^{\alpha } \) and its adjoint \( \tilde{D}_{y}^{{\alpha^{*} }} \):

$$ \tilde{D}_{y}^{{\alpha^{ *} }} = \left[ F \right]\left[ {K_{2}^{ *} } \right]\left[ {F^{ - 1} } \right] $$
(13)
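
For concreteness, a minimal numpy sketch of the fractional-order central differences of Eqs. (9)–(10) and the gradient magnitude of Eq. (8) is given below. The function names, the use of the unnormalized FFT (the \( 1/M^{2} \) factor cancels between the forward and inverse transforms), the assignment of the x-direction to the first array axis, and the small \( \varepsilon \) added to the magnitude are illustrative assumptions, not part of the original formulation.

```python
import numpy as np

def frac_central_diff(u, alpha, axis):
    """Fractional-order central partial difference of Eqs. (9)-(10): multiply the
    spectrum by (1 - exp(-j*2*pi*w/M))**alpha * exp(j*pi*alpha*w/M) and invert."""
    M = u.shape[axis]
    w = np.arange(M)                       # discrete frequency index along the chosen axis
    K = (1.0 - np.exp(-2j * np.pi * w / M)) ** alpha * np.exp(1j * np.pi * alpha * w / M)
    shape = [1] * u.ndim
    shape[axis] = M                        # reshape K so it broadcasts along `axis`
    U = np.fft.fft2(u)                     # DFT normalization cancels with ifft2 below
    return np.real(np.fft.ifft2(K.reshape(shape) * U))

def frac_grad_magnitude(u, alpha, eps=1e-8):
    """|D^alpha u| as in Eq. (8); eps keeps later divisions well defined."""
    dx = frac_central_diff(u, alpha, axis=0)   # x taken along the first array axis
    dy = frac_central_diff(u, alpha, axis=1)
    return np.sqrt(dx ** 2 + dy ** 2 + eps)
```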

2.2 Difference Curvature

Edges and ramps cannot be differentiated effectively by the image gradient alone. To distinguish them, the difference curvature (DC) [10, 22, 25], a popular feature descriptor, is considered. It is defined as

$$ {\text{DC}} = \left| {\left| {u_{\eta \eta } } \right| - \left| {u_{\xi \xi } } \right|} \right| $$
(14)

where \( u_{\eta \eta } \) and \( u_{\xi \xi } \) denote the second-order directional derivatives along the gradient (normal) direction and the tangential (level-line) direction of the image, respectively, and the operator |.| denotes the absolute value:

$$ u_{\eta \eta } = \frac{{u_{x}^{2} u_{xx} + 2u_{x} u_{y} u_{xy} + u_{y}^{2} u_{yy} }}{{u_{x}^{2} + u_{y}^{2} }} $$
(15)
$$ u_{\xi \xi } = \frac{{u_{y}^{2} u_{xx} - 2u_{x} u_{y} u_{xy} + u_{x}^{2} u_{yy} }}{{u_{x}^{2} + u_{y}^{2} }} $$
(16)

At the object edges, the value of DC is large, while in smooth and ramp regions, it is small; hence, ramps and edges can easily be distinguished. The property analysis of this descriptor is as follows.

  • DC is large at edges, since \( \left| {u_{\eta \eta } } \right| \) is large and \( \left| {u_{\xi \xi } } \right| \) is small.

  • DC is small in smooth and ramp regions, since \( \left| {u_{\eta \eta } } \right| \) and \( \left| {u_{\xi \xi } } \right| \) are both small.
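
As an illustration, a minimal numpy sketch of Eqs. (14)–(16) follows; the central-difference approximations of the derivatives, the periodic boundary handling via np.roll, and the small constant added to the denominator are implementation assumptions.

```python
import numpy as np

def difference_curvature(u):
    """Difference curvature DC = ||u_etaeta| - |u_xixi|| of Eqs. (14)-(16),
    with image derivatives approximated by central differences."""
    ux = (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / 2.0
    uy = (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) / 2.0
    uxx = np.roll(u, -1, axis=0) - 2.0 * u + np.roll(u, 1, axis=0)
    uyy = np.roll(u, -1, axis=1) - 2.0 * u + np.roll(u, 1, axis=1)
    uxy = (np.roll(np.roll(u, -1, axis=0), -1, axis=1)
           - np.roll(np.roll(u, -1, axis=0), 1, axis=1)
           - np.roll(np.roll(u, 1, axis=0), -1, axis=1)
           + np.roll(np.roll(u, 1, axis=0), 1, axis=1)) / 4.0
    denom = ux ** 2 + uy ** 2 + 1e-8            # avoids division by zero in flat areas
    u_nn = (ux ** 2 * uxx + 2 * ux * uy * uxy + uy ** 2 * uyy) / denom   # Eq. (15)
    u_tt = (uy ** 2 * uxx - 2 * ux * uy * uxy + ux ** 2 * uyy) / denom   # Eq. (16)
    return np.abs(np.abs(u_nn) - np.abs(u_tt))                           # Eq. (14)
```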

3 Proposed Model

Let \( u_{0} \in L^{2} \left( {\varOmega_{0} } \right) \) be the observed image defined on the image domain \( \varOmega_{0} \subset R^{2} \), and let \( \varOmega \subset \varOmega_{0} \) denote the inpainting domain with boundary \( \partial \varOmega \). The problem is to reconstruct \( u\left( {x,y} \right) \) from \( u_{0} \left( {x,y} \right) \). The degraded image is written as

$$ u_{0} \left( {x,y} \right) = h\left( {x,y} \right)*u\left( {x,y} \right) + n\left( {x,y} \right) $$
(17)

where \( h\left( {x,y} \right) \) is a bounded linear shift-invariant operator with the known kernel \( \frac{1}{{2\pi \sigma^{2} }}\exp \left( {\frac{{ - \left( {x^{2} + y^{2} } \right)}}{{2\sigma^{2} }}} \right), \) * represents the convolution operator, and \( n\left( {x,y} \right) \) is additive Gaussian noise with zero mean and standard deviation σ. The problem is then to denoise, deblur, and inpaint in order to reconstruct the image \( u\left( {x,y} \right). \) The inpainting domain is assumed to be known in advance.
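
For illustration, the degradation of Eq. (17) can be simulated as in the brief sketch below, assuming the image is scaled to [0, 1]; scipy's gaussian_filter stands in for the shift-invariant Gaussian kernel h, the parameter values follow the experiments of Sect. 4, and the final clipping is an added assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(u, blur_sigma=4.0, noise_sigma=0.20, seed=0):
    """Simulate Eq. (17): Gaussian blur (h * u) plus additive zero-mean Gaussian
    noise; noise_sigma is relative to a [0, 1] dynamic range (0.20 = '20%')."""
    rng = np.random.default_rng(seed)
    blurred = gaussian_filter(u, sigma=blur_sigma)                 # h * u
    noisy = blurred + noise_sigma * rng.standard_normal(u.shape)   # + n
    return np.clip(noisy, 0.0, 1.0)
```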

Inspired by earlier works [2, 12, 23, 25], an image restoration model based on fractional calculus is proposed. In the inpainting domain, a difference curvature-driven fractional-order nonlinear diffusion governs the filling process, and an adaptive conductance coefficient is proposed. Diffusion driven by DC preserves structures even when the mean curvature is nonzero.

When the image is also degraded by Gaussian noise and blur in the non-inpainting region, a fractional-order variational model [26] is applied for the restoration. In this model, the fractional-order derivative is implemented through the 2D DFT; hence, the proposed model is flexible in the choice of the fractional order in both the inpainting and non-inpainting regions. The model restores the missing regions and removes the noise and blur without affecting edges and ramps. It can be written (via gradient-descent minimization) as

$$ \frac{\partial u}{\partial t} = - D^{\alpha } \left( {\psi_{\varOmega } D^{\alpha } u} \right) - \lambda_{\varOmega } h\left( {h*u - u_{0} } \right) $$
(18)

The first term is the regularization term, the second is the data-fitting term, and \( \psi_{\varOmega } \) is the conductance coefficient, defined as

$$ \psi_{\varOmega } = \left\{ {\begin{array}{*{20}l} {f\left( {\text{DC}} \right), } \hfill & {\left( {x,y} \right) \in \varOmega } \hfill \\ {\frac{1}{{|\tilde{D}^{\alpha } u|}},} \hfill & { \left( {x,y} \right) \notin \varOmega } \hfill \\ \end{array} } \right. $$
(19)

Here, \( \alpha \) is the fractional order, chosen separately in the inpainting and non-inpainting regions:

$$ \alpha = \left\{ {\begin{array}{*{20}l} {\alpha_{1} ,} \hfill & {\left( {x,y} \right) \in \varOmega } \hfill \\ {\alpha_{2} , } \hfill & {\left( {x,y} \right) \notin \varOmega } \hfill \\ \end{array} } \right. $$
(20)

and \( \lambda_{\varOmega } \) is the regularization parameter, which balances the regularization and data-fitting terms of model (18) when the image is degraded by noise and blur. It is defined as

$$ \lambda_{\varOmega } = \left\{ {\begin{array}{*{20}l} {0,} \hfill & {\left( {x,y} \right) \in \varOmega } \hfill \\ {\lambda ,} \hfill & {\left( {x,y} \right) \notin \varOmega } \hfill \\ \end{array} } \right. $$
(21)
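
A minimal sketch of the region-wise parameter choices of Eqs. (20)–(21) is shown below; in practice the spatially varying order is realized by computing two sets of fractional differences (with \( \alpha_{1} \) and \( \alpha_{2} \)) and combining them with the inpainting mask, as in the algorithm given later in this section. The values alpha1 = 1.8 and alpha2 = 1.9 follow Sect. 4, while lam is an arbitrary illustrative value.

```python
import numpy as np

def region_parameters(mask, alpha1=1.8, alpha2=1.9, lam=0.05):
    """Per-pixel fractional order (Eq. (20)) and regularization weight (Eq. (21));
    `mask` is a boolean array that is True inside the inpainting domain Omega."""
    alpha = np.where(mask, alpha1, alpha2)    # Eq. (20)
    lam_map = np.where(mask, 0.0, lam)        # Eq. (21): no data-fitting term inside Omega
    return alpha, lam_map
```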

Model (18) obeys Neumann boundary conditions and can be rewritten as

$$ \frac{\partial u}{\partial t} = - \tilde{D}_{x}^{{\alpha^{*} }} \left( {\psi_{\varOmega } \tilde{D}_{x}^{\alpha } u} \right) - \tilde{D}_{y}^{{\alpha^{*} }} \left( {\psi_{\varOmega } \tilde{D}_{y}^{\alpha } u} \right) - \lambda_{\varOmega } h\left( {h*u - u_{0} } \right). $$
(22)

In this model, two different conductance coefficients are applied in the restoration process. In the inpainting domain, the conductance coefficient is a function of DC. For robustness, a bi-weight conductance coefficient is proposed; it is a non-increasing nonlinear function that is adapted to the image and is defined as

$$ f\left( {\text{DC}} \right) = \left\{ {\begin{array}{*{20}l} {\frac{1}{{1 + \left( {\frac{\text{DC}}{k}} \right)^{2} }}, } \hfill & {\left| {\text{DC}} \right| < k} \hfill \\ {0,} \hfill & {\text{otherwise}} \hfill \\ \end{array} } \right. $$
(23)

Here, \( k \) is the edge threshold. In the original formulation [15], the edge threshold is a constant based on the image gradient; it is set manually and fine-tuned for each image. Following the analysis given in Sect. 2, DC is used here in place of the image gradient.

In order to make the diffusion process adaptive, a variable edge threshold k is proposed in this article. Its value is set to the median absolute deviation (MAD) of the fractional-order derivative of the image and is estimated as

$$ k = 1.4826*{\text{MAD}}\left( {\left| {D^{\alpha } u} \right|} \right) = 1.4826*{\text{median}}\left( {\left| {\left| {D^{\alpha } u} \right| - {\text{median}}\left( {\left| {D^{\alpha } u} \right|} \right)} \right|} \right). $$
(24)

The constant comes from the fact that the MAD of a zero-mean normal distribution with unit variance is 0.6745 = \( \frac{1}{1.4826} \). The median is computed locally: the value at each pixel is replaced by the median of the intensity values in the neighborhood of that pixel.

In smooth regions and ramps, \( \left| {\text{DC}} \right| < k \) and the conductance function is nonzero; hence, nonlinear diffusion takes place. At image edges, \( \left| {\text{DC}} \right| > k \), so the diffusion coefficient is zero and the edges are preserved.
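
A sketch combining the bi-weight function of Eq. (23), the MAD-based threshold of Eq. (24), and the piecewise conductance of Eq. (19) is given below; computing the median over the whole image (instead of a local neighborhood) and the small constant in the denominator of the non-inpainting branch are simplifying assumptions.

```python
import numpy as np

def edge_threshold(grad_mag):
    """Adaptive edge threshold k of Eq. (24): 1.4826 * MAD(|D^alpha u|),
    with the median taken over the whole image for simplicity."""
    g = np.abs(grad_mag)
    return 1.4826 * np.median(np.abs(g - np.median(g)))

def conductance(dc, grad_mag, mask, k=None):
    """Piecewise conductance psi_Omega of Eq. (19): the bi-weight function f(DC)
    of Eq. (23) inside the inpainting domain, 1/|D~^alpha u| outside."""
    if k is None:
        k = edge_threshold(grad_mag)
    f_dc = np.where(np.abs(dc) < k, 1.0 / (1.0 + (dc / k) ** 2), 0.0)   # Eq. (23)
    outside = 1.0 / (np.abs(grad_mag) + 1e-8)
    return np.where(mask, f_dc, outside)
```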

The proposed model comprises the following different cases:

  • If \( \alpha_{1} \in {\mathbb{R}}^{ + } \) and \( \alpha_{2} = 0 \), the proposed model behaves like a fractional-order nonlinear diffusion driven by adaptive DC to fill the missing regions.

  • If \( \alpha_{1} = 0 \) and \( \alpha_{2} \in {\mathbb{R}}^{ + } \), the proposed model behaves like a fractional-order variational model to remove the noise.

  • If \( \alpha_{1} ,\alpha_{2} \in {\mathbb{R}}^{ + } \), the proposed model is the combination of above two models.

The overall restoration process of the proposed model is summarized and presented in the following algorithm.

  1. Initialization:

     1.1 \( u^{\left( 0 \right)} = u,\;\;\Delta t = 0.01,\;\;\lambda^{\left( 0 \right)} = 0,\;\;DC^{\left( 0 \right)} = 0,\;\;\alpha = \left\{ {\begin{array}{*{20}l} {\alpha_{1} ,} \hfill & {\left( {x,y} \right) \in \varOmega } \hfill \\ {\alpha_{2} ,} \hfill & {\left( {x,y} \right) \notin \varOmega } \hfill \\ \end{array} } \right. \)

     1.2 Calculate the discrete Fourier transform \( \hat{u}^{\left( 0 \right)} \) of \( u^{\left( 0 \right)} \).

  2. Iteration: For \( m = 0,1,2,3, \ldots \), calculate \( \hat{u}^{{\left( {m + 1} \right)}} \) by the following steps.

     2.1 Calculate the fractional-order central partial differences \( \tilde{D}_{x}^{{\alpha_{1} }} u^{\left( m \right)} ,\tilde{D}_{y}^{{\alpha_{1} }} u^{\left( m \right)} \) and \( \tilde{D}_{x}^{{\alpha_{2} }} u^{\left( m \right)} ,\tilde{D}_{y}^{{\alpha_{2} }} u^{\left( m \right)} \) using (9) and (10).

     2.2 Calculate the diffusion coefficient \( f\left( {{\text{DC}}^{\left( m \right)} } \right) \) using (23).

     2.3 Calculate \( \left| {\tilde{D}^{{\alpha_{2} }} u^{\left( m \right)} } \right| = \sqrt {\left( {\tilde{D}_{x}^{{\alpha_{2} }} u^{\left( m \right)} } \right)^{2} + \left( {\tilde{D}_{y}^{{\alpha_{2} }} u^{\left( m \right)} } \right)^{2} + \varepsilon } \), where \( \varepsilon \) is a small value to avoid division by zero.

     2.4 In the inpainting region, calculate \( l_{xi}^{\left( m \right)} = f\left( {{\text{DC}}^{\left( m \right)} } \right)\tilde{D}_{x}^{{\alpha_{1} }} u^{\left( m \right)} \) and \( l_{yi}^{\left( m \right)} = f\left( {{\text{DC}}^{\left( m \right)} } \right)\tilde{D}_{y}^{{\alpha_{1} }} u^{\left( m \right)} \) in the spatial domain.

     2.5 In the non-inpainting region, calculate \( l_{xn}^{\left( m \right)} = \left| {\tilde{D}^{{\alpha_{2} }} u^{\left( m \right)} } \right|^{ - 1} \tilde{D}_{x}^{{\alpha_{2} }} u^{\left( m \right)} \) and \( l_{yn}^{\left( m \right)} = \left| {\tilde{D}^{{\alpha_{2} }} u^{\left( m \right)} } \right|^{ - 1} \tilde{D}_{y}^{{\alpha_{2} }} u^{\left( m \right)} \) in the spatial domain.

     2.6 Calculate \( \hat{g}^{\left( m \right)} = \left\{ {\begin{array}{*{20}c} {\left[ {K_{1}^{*} } \right]\left[ {F\left( {l_{xi}^{\left( m \right)} } \right)} \right] + \left[ {K_{2}^{*} } \right]\left[ {F\left( {l_{yi}^{\left( m \right)} } \right)} \right],\quad \left( {x,y} \right) \in \varOmega } \\ {\left[ {K_{1}^{*} } \right]\left[ {F\left( {l_{xn}^{\left( m \right)} } \right)} \right] + \left[ {K_{2}^{*} } \right]\left[ {F\left( {l_{yn}^{\left( m \right)} } \right)} \right],\quad \left( {x,y} \right) \notin \varOmega } \\ \end{array} } \right. \)

     2.7 Calculate \( \hat{u}^{{\left( {m + 1} \right)}} = \hat{u}^{\left( m \right)} - \hat{g}^{\left( m \right)} \Delta t -{\lambda}^{\left( m \right)} \hat{h}\left( {\hat{h}\hat{u}^{\left( m \right)} - \hat{u}^{\left( 0 \right)} } \right)\Delta t \).

     2.8 Calculate the inverse discrete Fourier transform of \( \hat{u}^{{\left( {m + 1} \right)}} \), i.e., \( u^{{\left( {m + 1} \right)}} \).

  3. Parameter updating:

     3.1 Update the difference curvature function \( {\text{DC}}^{{\left( {m + 1} \right)}} \) using (14).

     3.2 Update the regularization parameter \( \lambda^{{\left( {m + 1} \right)}} = \frac{{{\text{Variance}}\left( {u^{{\left( {m + 1} \right)}} } \right)}}{{{\text{Mean}}\left( {u^{{\left( {m + 1} \right)}} } \right)}} \).

     3.3 If PSNR(\( u^{{\left( {m + 1} \right)}} ,u_{0} \)) > PSNR(\( u^{\left( m \right)} ,u_{0} \)), set \( m = m + 1 \) and go to step 2; else stop.

Hence, this model behaves like a fractional-order nonlinear diffusion driven by adaptive DC in the inpainting regions and works like a fractional-order variational model in the other regions to denoise and deblur the image.
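
To tie the pieces together, the following is one possible realization of the iteration (steps 2.1–2.8 and 3.1–3.3) for the pure inpainting case, taking h as the identity so the blur term drops out. It assumes the difference_curvature and conductance sketches given earlier are in scope and redefines the spectral multipliers of Eqs. (9)–(11) locally; the spatial-domain combination of the two regions in steps 2.6–2.7, the PSNR computed against the observation u0 (as written in step 3.3), and the [0, 1] intensity range are choices of this sketch rather than prescriptions of the original algorithm.

```python
import numpy as np

def spectral_multiplier(shape, alpha, axis):
    """Spectral multiplier of Eqs. (9)-(11) for one axis of an image of `shape`."""
    M = shape[axis]
    w = np.arange(M)
    K = (1.0 - np.exp(-2j * np.pi * w / M)) ** alpha * np.exp(1j * np.pi * alpha * w / M)
    s = [1, 1]
    s[axis] = M
    return K.reshape(s)

def restore(u0, mask, alpha1=1.8, alpha2=1.9, dt=0.01, max_iter=500):
    """DC-driven fractional diffusion inside Omega, fractional-order variational
    smoothing outside (no blur); `mask` is True inside the inpainting domain."""
    u = u0.astype(np.float64).copy()
    Kx1, Ky1 = spectral_multiplier(u.shape, alpha1, 0), spectral_multiplier(u.shape, alpha1, 1)
    Kx2, Ky2 = spectral_multiplier(u.shape, alpha2, 0), spectral_multiplier(u.shape, alpha2, 1)

    def fwd(U, K):        # forward fractional central difference (step 2.1)
        return np.real(np.fft.ifft2(K * U))

    def adj(l, K):        # adjoint difference: conjugated multiplier (step 2.6)
        return np.real(np.fft.ifft2(np.conj(K) * np.fft.fft2(l)))

    lam, best_psnr = 0.0, -np.inf                      # lambda^(0) = 0
    for m in range(max_iter):
        U = np.fft.fft2(u)
        dc = difference_curvature(u)                   # Eq. (14)
        dx1, dy1 = fwd(U, Kx1), fwd(U, Ky1)            # order alpha1 (inpainting region)
        dx2, dy2 = fwd(U, Kx2), fwd(U, Ky2)            # order alpha2 (elsewhere)
        grad2 = np.sqrt(dx2 ** 2 + dy2 ** 2 + 1e-8)    # step 2.3
        psi = conductance(dc, grad2, mask)             # step 2.2 and Eq. (19)
        lx = np.where(mask, psi * dx1, psi * dx2)      # steps 2.4-2.5
        ly = np.where(mask, psi * dy1, psi * dy2)
        g = np.where(mask,
                     adj(lx, Kx1) + adj(ly, Ky1),      # step 2.6, the two regions
                     adj(lx, Kx2) + adj(ly, Ky2))      # combined in the spatial domain
        fidelity = np.where(mask, 0.0, u - u0)         # lambda_Omega of Eq. (21), h = identity
        u_new = u - dt * g - dt * lam * fidelity       # step 2.7
        lam = np.var(u_new) / (np.mean(u_new) + 1e-12) # step 3.2
        psnr = 10.0 * np.log10(1.0 / (np.mean((u_new - u0) ** 2) + 1e-12))  # [0, 1] range
        if psnr <= best_psnr:                          # step 3.3: stop when PSNR drops
            break
        best_psnr, u = psnr, u_new
    return u
```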

4 Simulation Results

The described fractional-order inpainting model is successfully applied to different images collected from the USC-SIPI image database. Some of the test images are presented in Fig. 1. All simulations are carried out using MATLAB R2016a on an Intel i3 processor with 4 GB RAM. In order to evaluate the proposed model, the standard performance metrics, viz., peak signal-to-noise ratio (PSNR), mean structural similarity (MSSIM) [21], and figure of merit (FoM) [1], are considered.

Fig. 1 Test images: a peppers, b Elaine, c boat, d Lena, e building, f woman

PSNR is an effective statistical measure for quantifying the performance of a reconstruction method. MSSIM is an image quality measure based on the assumption that the human visual system is highly adapted to extracting structural information from the viewing field. FoM is a useful quantitative metric for assessing the preservation of edges and finer details. For all three metrics, larger values indicate better quality.
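
The PSNR and MSSIM values reported in the tables can be reproduced along the lines of the sketch below; PSNR is written out explicitly, MSSIM is delegated to scikit-image's structural_similarity (a tooling assumption, since the original experiments were run in MATLAB), and FoM is omitted because it additionally requires an edge detector.

```python
import numpy as np
from skimage.metrics import structural_similarity

def psnr(reference, restored, data_range=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a restored image."""
    diff = reference.astype(np.float64) - restored.astype(np.float64)
    return 10.0 * np.log10(data_range ** 2 / np.mean(diff ** 2))

def mssim(reference, restored, data_range=255.0):
    """Mean structural similarity index [21], computed with scikit-image."""
    return structural_similarity(reference, restored, data_range=data_range)
```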

In the first experiment, the removal of superimposed text from various images is conducted for different values of the fractional order, and the simulation results for the peppers image are presented in Fig. 2. The results indicate that the inpainting performance is best when the fractional order is 1.8.

Fig. 2 Performance of the proposed model with respect to the fractional order for text removal on the peppers image: a PSNR (dB), b MSSIM, c FoM

When \( \alpha_{1} = 1 \), the resulting model is a second-order diffusion model and produces staircase effects. When \( \alpha_{1} = 2 \), the resulting model is a fourth-order diffusion model that overcomes staircase effects but produces an over-smoothing effect. Considering the perceptual quality of the inpainted image, a fractional order between 1 and 2 is experimentally found to be appropriate.

The proposed model is compared with some state-of-the-art models, namely the higher-order variational model (HVM) [6], the total variational (TV) model using the operator splitting method [13], and the GCDD model proposed by Jidesh and George [12]. The simulation results for text removal from the peppers image are presented in Fig. 3. For better understanding, the residuals of these results with respect to the original image are also displayed in Fig. 3f–i. The residuals are larger for the HVM, TV, and GCDD models than for the proposed model. From Fig. 3i, one can observe that curvy edges and ramps are restored more effectively than by the other models, since the diffusion coefficient is adapted to the image characteristics. The zoomed versions of the inpainted images are presented in Fig. 3j–m. The TV model produces staircase effects and fails to satisfy the connectivity principle, which is clearly observed in Fig. 3k. The other integer-order inpainting models also do not satisfy the connectivity principle.

Fig. 3 a Peppers image with text; b HVM; c TV; d GCDD; e proposed; residuals of the inpainting models from the original image: f HVM; g TV; h GCDD; i proposed; zoomed versions of: j HVM; k TV; l GCDD; m proposed

The ability of the proposed model is tested on 15 images from the USC-SIPI image database with the same superimposed text. The performance metrics for some of the images are recorded in Table 1, and the average values of PSNR, MSSIM, and FoM of the various image inpainting models are presented in Table 2. The average PSNR of the proposed model is 1.3 dB higher than that of the GCDD model.

Table 1 Comparison of inpainting models for text removal from different images
Table 2 Average values of PSNR (dB), MSSIM, and FoM of inpainting models for text removal

In the second experiment, the restoration of images degraded by random loss of pixels is considered. The experiment is conducted on various images with 10% to 90% random pixel loss. The comparison results of the TV model [13], GCDD [12], and the proposed model on the Elaine image are shown in Fig. 4. From Fig. 4, one can observe that the advantage of the proposed model over the other models becomes more evident as the fraction of lost pixels increases, and in Fig. 4b, c the preservation of structures and edges is enhanced. The comparisons of the inpainting models for 40% random pixel loss on different images are presented in Table 3.

Fig. 4 Comparison of inpainting models for the random loss of pixels

Table 3 Comparison of inpainting models for 40% of random loss of pixels in different images
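
The random-pixel-loss degradation used in the second experiment can be generated as in the short sketch below; setting the lost pixels to zero and the fixed random seed are assumptions for illustration.

```python
import numpy as np

def random_pixel_loss(u, loss_ratio=0.4, seed=0):
    """Drop `loss_ratio` of the pixels and return the degraded image together
    with the inpainting mask (True where information was lost)."""
    rng = np.random.default_rng(seed)
    mask = rng.random(u.shape) < loss_ratio
    degraded = np.where(mask, 0.0, u)
    return degraded, mask
```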

In the third experiment, the proposed model is tested on the simultaneous removal of superimposed text, Gaussian noise, and Gaussian blur. The blurring kernel is a Gaussian kernel with standard deviation σ = 4, and the Gaussian noise has a standard deviation of 20%. The proposed model gives good results when the fractional orders are \( \alpha_{1} = 1.8 \) and \( \alpha_{2} = 1.9 \). For comparison, the simulation results of TV [13], CDD [9], GCDD [12], and the proposed model on the woman image are shown in Fig. 5. These three models introduce staircase effects because they use the second-order variational model of Rudin et al. [17] in the non-inpainting regions. As can be observed from Fig. 5f–h, the lips in the woman image are not restored effectively, and staircase effects can be observed on the cheeks of the woman's face. In the proposed model, staircase effects are eliminated and the visual quality is higher.

Fig. 5 Inpainting results comparison on the woman image: a degraded image due to text, Gaussian noise (σ = 20%) and blur; b TV; c CDD; d GCDD; e proposed model; zoom of the woman's face: f TV; g CDD; h GCDD; i proposed model

The performance metrics of the proposed model for the removal of text, Gaussian blur, and Gaussian noise with standard deviations of 10% and 20% on the woman and peppers images are given in Tables 4 and 5. The proposed model therefore not only fills the missing regions but also removes the noise and blur effectively without introducing unintended effects.

Table 4 Text and Gaussian noise and blur removal results of woman image
Table 5 Text and Gaussian noise and blur removal results of peppers image

In all integer-order diffusion models, the propagation of pixel information into the missing regions depends only on the boundary pixels of those regions. In fractional-order diffusion models, by contrast, the propagation depends not only on the neighboring pixels but on the entire image. This nonlocal property aids edge preservation and enhancement of the image.

5 Conclusions

In this work, two different concepts have been applied for image restoration: difference curvature-driven fractional-order nonlinear diffusion to inpaint the missing regions, and a fractional-order variational model to denoise and deblur the image. The fractional-order derivative possesses a nonlocal property and can effectively preserve significant features such as corners and edges while discarding blocky effects. A new adaptive bi-weight diffusion coefficient is also proposed to fill the missing regions and preserve the edges. The fractional-order variational model can effectively denoise and deblur the image while eliminating staircase and speckle effects. The simulation results show that the proposed model fills the missing regions and simultaneously removes the noise and blur in the other regions.

The proposed model works well for structure-based images, but the fractional order is currently selected manually. An adaptive fractional order may be introduced to reduce user interaction.