1 Introduction

Since the late 1990s, image inpainting has attracted much research attention with the aim of repairing damaged images in an indiscernible way [2, 3, 5, 6]. Traditional image restoration attempts to reduce noise or reverse degradation such as motion effects or blurring in order to obtain an estimate of the original that is optimal in some sense. Image inpainting, on the other hand, imitates human professionals in repairing pictures, using mathematical models and computer algorithms to recreate the missing content in the image. The objective is to produce visually satisfactory or acceptable results rather than a good estimate of the original, since there is simply no original. Inpainting of digital images has found applications in areas such as restoration of historical photographs, filling in or removing chosen areas in images, and wiping out visible watermarks. Recently, some researchers have applied inpainting techniques to de-interlacing [1], image compression [10], and repairing missing blocks of JPEG images transmitted over poor channels [13]. Evaluation of the quality of repaired images is generally done by subjective assessment because no undamaged original is available in practice; in research, however, we can generate damaged images from known originals and compare the repaired versions with them.

Some typical techniques use partial differential equations (PDEs), such as the approaches proposed by Bertalmio et al. [2, 3]. They established a mathematical model of image inpainting by borrowing ideas from classical fluid dynamics. By iteratively solving the numerical representation of a PDE, they smoothly propagate gray-value information from surrounding areas into the region Ω to be inpainted along isophotes, i.e., curves of constant gray value. The inpainting process terminates when the gray values in the computation domain reach a steady state. Guided by the connectivity principle of human visual perception, Chan et al. [5] proposed a non-texture inpainting method using a third-order PDE based on a total variation model [6]. This method in fact represents an anisotropic diffusion process. To satisfy human visual requirements, the intensity of diffusion is related to the curvature. Even when the disconnected remaining object parts are separated far apart by the inpainting domain, this method can still give satisfactory results.

Interpolation techniques can also be used in image inpainting. Shi et al. proposed an adaptive inpainting algorithm that is equivalent to nonlinear interpolation [15]. The repairing procedure examines the surrounding information of a damaged pixel and determines the size of the reference window used to compute an interpolated color. This method cannot produce satisfactory results when the repaired pixels are close to edges. Mairal et al. [11] learned dictionaries for color images and then used sparse representation and generalized K-means clustering, termed K-SVD, to handle noise and restore low-quality images. An inpainting method for recovering an original scene from degraded images was proposed in [19]. It consists of a new inter-pixel relationship function and a corresponding refinement step to synthesize missing pixels from existing spatially correlated pixels.

Some inpainting methods work in a patch-based way. Criminisi et al. proposed an inpainting method for region filling and object removal [7]. The method performs the synthesis task through a best-first filling strategy that depends entirely on the priority values assigned to each patch. After finding the patch with the maximum priority, the most similar patch is chosen from the intact region to replace it, and then the priority values are updated before repeating the above steps. An image inpainting method using patch sparsity was introduced in [18]. This method investigates the sparsity of image patches and measures the confidence of a patch located in a structure region by the sparseness of its nonzero similarities to the neighboring patches. Patches with larger structure sparsity are assigned higher priority for inpainting. Patch-based methods can deal with relatively larger regions than PDE-based methods, but may introduce artificial seams between patches.

Indeed, most of the efficient structure inpainting methods to date are based on PDEs. The major problem of current PDE models, however, is that they lack the ability to reconstruct textures in the damaged regions, so textures have to be generated in a separate process. The method proposed in [4] decomposes the image into structure and texture layers, and reconstructs them with a PDE-based algorithm [3] and a texture synthesis algorithm [8] respectively. The retouched image is obtained by combining these two sub-images. Hsu et al. proposed a hybrid algorithm for region filling with an artifact detection mechanism [9]. In the inpainting procedure, color texture distribution analysis is used to choose whether the subpatch texture synthesis technique or the weighted interpolation method should be applied. All of these methods use non-PDE techniques to treat textures, and the image decomposition needed to obtain the texture layer may be rather complicated and therefore time-consuming.

In this work, we propose a compact and fast PDE-based inpainting method using an anisotropic heat transfer model, which can propagate both structure and texture information from the surrounding region into the damaged region simultaneously. The proposed PDE contains two terms, representing structure and texture inpainting respectively. In the numerical implementation of the structure term, we adaptively choose an appropriate form of the finite difference according to the curvature value, which significantly improves both precision and efficiency. Compared with other PDE-based techniques, the proposed method can effectively repair damaged images containing both structure and texture with a single PDE. Unlike layer-based inpainting methods, our method eliminates the image decomposition process and is therefore more efficient.

The rest of the paper is arranged as follows. We first express the analogy between image inpainting and heat transfer model in Section 2. The simultaneous structure and texture inpainting method using anisotropic model is described in Section 3, and its numerical implementation is given in Section 4. In Section 5, experimental results and discussion are presented. Finally, conclusions are drawn in Section 6.

2 Analogy between image inpainting and heat transfer

We draw an analogy between image inpainting and a heat transfer process. Let u be a damaged image, Ω the region to be inpainted in u, and ∂Ω the boundary of Ω. We treat image inpainting as propagating the information of valid pixels from the exterior to the interior of Ω. Since no elasticity is involved, an analogy to wave and vibration phenomena is generally inappropriate. As a simple solution, we use a heat transfer model for a homogeneous medium and consider the image as equivalent to a temperature field, with the pixel value u(x, y) corresponding to temperature. The change of pixel values in the damaged region is thus modeled as the variation of temperature caused by heat conduction from external heat sources.

We start with the simplest one-dimensional heat transfer problem, as shown in Fig. 1. The physical foundation is Fourier's law of heat conduction, expressed in the following equation, where v(x, t) is the heat flow through a unit length per unit time, which is proportional to the rate of change of temperature.

Fig. 1 One-dimensional heat transfer problem

$$ v(x,t) = \mathop {{\lim }}\limits_{\Delta x \to 0} \,\left[ {\sigma \frac{{u - (u + \Delta u)}}{{\Delta x}}} \right] = - \sigma \frac{{\partial u(x,t)}}{{\partial x}} $$
(1)

It is easy to extend Eq. 1 to the two-dimensional situation. Let the gradient ∇u(x, y; t) = G(x, y; t), whose two components are:

$$ \left\{ \begin{array}{l} G_x(x,y;t) = \frac{\partial u(x,y;t)}{\partial x} \\ G_y(x,y;t) = \frac{\partial u(x,y;t)}{\partial y} \end{array} \right. $$
(2)

So the heat flow vector is:

$$ {\mathbf{v}}(x,y;t) = - \sigma \,\nabla u(x,y;t) = - \sigma \,{\mathbf{G}}(x,y;t) = - \sigma [{G_x}(x,y;t)\;{\mathbf{i}} + {G_y}(x,y;t)\;{\mathbf{j}}] $$
(3)

where i and j are unit directional vectors.

The heat transfer process also satisfies the law of energy conservation. If no heat source exists in a region, the difference of inflowing and outflowing heat leads to the change of temperature. Therefore we can obtain the changing rate of temperature [14]:

$$ \frac{{\partial u(x,y;t)}}{{\partial t}} = \frac{\sigma }{{\rho \,s}}\nabla \cdot {\mathbf{G}}(x,y;t) = \kappa \left[ {\frac{{\partial {G_x}(x,y;t)}}{{\partial x}} + \frac{{\partial {G_y}(x,y;t)}}{{\partial y}}} \right] $$
(4)

where the constant κ consists of three physical constants: the thermal conductivity coefficient σ, the density of the medium ρ and the specific heat s. Equation 4 specifies how the field of gray values changes with time in the inpainting process. Because the physical quantity under consideration is the pixel value, and the result of propagation is the change of pixels in the target region, κ may be set to 1. This assumption is to be validated in experiments. To solve the PDE of Eq. 4, we use a finite difference method in a computation domain enclosing the damaged regions, and keep the undamaged pixels intact after each iteration step so that the Dirichlet boundary condition is satisfied.
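As an illustration, the following Python/NumPy sketch iterates a discrete form of Eq. 4 with κ = 1 and enforces the Dirichlet condition by resetting the undamaged pixels after each step. It is only a minimal sketch, not the authors' implementation (which was written in Matlab); the function name, the boolean mask convention, and the parameter values are assumptions for illustration.

```python
import numpy as np

def isotropic_inpaint(image, mask, n_steps=500, lam=0.2):
    """Diffuse gray values into the region where mask is True (sketch of Eq. 4, kappa = 1)."""
    u = image.astype(float)
    ref = u.copy()
    for _ in range(n_steps):
        # Discrete divergence of the gradient field G: the 5-point Laplacian.
        # Wrap-around at the image border is ignored for brevity.
        lap = (np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0) +
               np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1) - 4.0 * u)
        u = u + lam * lap
        # Dirichlet condition: valid (undamaged) pixels keep their original values
        u[~mask] = ref[~mask]
    return u
```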

The heat transfer analogy has a concise mathematical form and low computational complexity. Compared with other methods such as the fluid dynamics model using the Navier-Stokes equations, this model needs fewer iteration steps in image inpainting and runs faster. However, the direction of information propagation is not taken into account. This isotropic nature may cause unsatisfactory inpainting results, since images usually contain non-uniformity and directional properties. In the next section, we propose a new image inpainting method based on an anisotropic heat transfer model for better performance.

3 Simultaneous structure and texture image inpainting

3.1 Anisotropic heat transfer model for inpainting

We observe that, in the isotropic inpainting model, the two directions of information propagation are always horizontal and vertical, and the propagation strength along these two directions remains constant. This is inappropriate in many cases, especially near edges, where blurring may occur and degrade the quality of inpainting. To solve this problem, we let the way gray values propagate adapt to the image content, which leads to the anisotropic heat transfer model for inpainting.

In the new PDE model, we decompose the gray-level propagation into two orthogonal directions that change with the image contents. For good inpainting of structure information, one of the propagation directions is made consistent with the isophotes, which are always normal to the gradient direction; the propagation strength in this direction is fixed. Along the gradient direction, on the other hand, the strength varies with the image contents. Since gray values change drastically near edges, the magnitude of the gradient is large there. To avoid edge blurring, the propagation strength along the gradient direction should be small near edges. Conversely, the gradient is small in smooth regions, so the propagation strength along the gradient direction can be large. In summary, the strength of propagation along the gradient direction should decrease as the magnitude of the gradient increases. This improves the efficiency of structure inpainting and yields satisfactory results near edges.

3.2 Structure image inpainting using anisotropic heat transfer model

Based on the above consideration, we introduce a spatially variable and content-dependent coordinate system O-ξη to replace the fixed Cartesian system O-xy. Let the unit coordinate vectors in the O-xy system be i and j; thus any point in space may be expressed by a vector r = x i + y j. This becomes r = ξ p + η q in the O-ξη system, where ξ and η are the components in the isophote and gradient directions respectively, and p and q are the two orthogonal unit vectors:

$$ {\mathbf{p}} = \frac{1}{{\left| {\nabla u} \right|}}\left( {\frac{{\partial u}}{{\partial y}}\,{\mathbf{i}} - \frac{{\partial u}}{{\partial x}}{\mathbf{j}}} \right),{\mathbf{q}} = \frac{1}{{\left| {\nabla u} \right|}}\left( {\frac{{\partial u}}{{\partial x}}\,{\mathbf{i}} + \frac{{\partial u}}{{\partial y}}{\mathbf{j}}} \right) $$
(5)

Thus we obtain the anisotropic heat transfer model for structure image inpainting as expressed by the following PDE:

$$ \frac{{\partial u\left( {x,y;t} \right)}}{{\partial t}} = \frac{{{\partial^2}u\left( {x,y;t} \right)}}{{\partial {\xi^2}}} + {c^2}\frac{{{\partial^2}u\left( {x,y;t} \right)}}{{\partial {\eta^2}}},\quad \left( {x,y} \right) \in \Omega $$
(6)

where x and y are related to ξ and η through ∇u:

$$ \left[ \begin{array}{c} x \\ y \end{array} \right] = \left[ \begin{array}{cc} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{array} \right] \left[ \begin{array}{c} \xi \\ \eta \end{array} \right], \quad \theta = \arctan \left( \frac{\partial u / \partial y}{\partial u / \partial x} \right) $$
(7)

and the weight c in Eq. 6 is

$$ c = \sqrt {{\exp \left[ { - \frac{1}{K}\left| {\nabla u\left( {x,y;t} \right)} \right|} \right]}} $$
(8)

From Eq. 8, we see that c goes to zero when the magnitude of the gradient |∇u(x, y; t)| tends to infinity, and c equals 1 when |∇u(x, y; t)| = 0. Following the definition given in [12], we let K be a predetermined threshold that differentiates smooth and fluctuating regions. The propagation strength c² along q varies spatially and differs from the strength along p whenever the magnitude of the gradient is not zero, so Eq. 6 represents an anisotropic propagation model.
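For concreteness, here is a minimal sketch of how the content-dependent quantities of Eqs. 7 and 8 (the gradient angle θ and the weight c) could be computed with NumPy; the value of K and the helper name are illustrative assumptions, not part of the paper.

```python
import numpy as np

def anisotropy_fields(u, K=10.0):
    # np.gradient returns derivatives along rows (y direction) and columns (x direction)
    uy, ux = np.gradient(u.astype(float))
    grad_mag = np.hypot(ux, uy)           # |grad u|
    theta = np.arctan2(uy, ux)            # gradient angle, cf. Eq. 7
    c = np.sqrt(np.exp(-grad_mag / K))    # Eq. 8: c -> 1 in smooth areas, -> 0 at strong edges
    return theta, c
```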

3.3 Incorporate texture inpainting

If there are rich textures in the damaged image region, however, using Eq. 6 alone for inpainting will not produce satisfactory results, because it does not take the periodicity of texture into account. To improve the inpainting effect, the texture feature must be included in the model so that the method can generate coherent texture in the inpainted region while repairing the structure. Consider the situation of Fig. 2. We can make use of the periodicity to propagate the texture information along the texture direction and its perpendicular direction while propagating the structure information using Eq. 6.

Fig. 2 Propagating direction of texture information

Let α ∈ [0, π ] be the angle between the texture direction and the horizontal line, and d the scale of texture periodicity. We incorporate the texture ingredient into Eq. 6, resulting in the new PDE in Eq. 9 for simultaneous structure and texture inpainting.

$$ \frac{{\partial u\left( {x,y;t} \right)}}{{\partial t}} = A{\Delta_s}u\left( {x,y;t} \right) + B{\Delta_t}u\left( {x,y;\alpha, d,t} \right) \,,\quad \left( {x,y} \right) \in \Omega $$
(9)

where A and B are the weights for structure and texture respectively, A, B ∈ [0, 1], and A + B ≡ 1. The structure term Δ s u(x, y; t) denotes the right-hand side of Eq. 6. If B = 0, Eq. 9 reduces to Eq. 6, meaning that only the structure is inpainted. The texture term Δ t u(x, y; α, d, t) can be expressed as:

$$ {\Delta_t}u\left( {x,y;\alpha, d,t} \right) = \frac{{{\partial^2}u\left( {x,y;d,t} \right)}}{{\partial {\xi_\alpha }^2}} + \frac{{{\partial^2}u\left( {x,y;d,t} \right)}}{{\partial {\eta_\alpha }^2}} $$
(10)
$$ \left[ {\begin{array}{*{20}{c}} x \\y \\\end{array} } \right] = \left[ {\begin{array}{*{20}{c}} {\cos \alpha } & { - \sin \alpha } \\{\sin \alpha } & {\cos \alpha } \\\end{array} } \right]\left[ {\begin{array}{*{20}{c}} {{\xi_\alpha }} \\{{\eta_\alpha }} \\\end{array} } \right] $$
(11)

where ξ α and η α correspond to the texture direction and its perpendicular direction respectively.

Since the gray values repeat along the texture direction and its perpendicular direction, Eq. 10 reflects the gray-value differences between damaged pixels separated by intervals related to the texture periodicity. Therefore the model represented by Eq. 9 can simultaneously propagate structure and texture information into the damaged region.

4 Numerical implementation

In this section, we give the detailed numerical implementation of the proposed PDE. A parabolic PDE can be discretized into an iterative form, so the recursion formula for Eq. 9 is:

$$ {u^{\left( {n + 1} \right)}}\left( {x,y} \right) = {u^{(n)}}\left( {x,y} \right) + \lambda u_1^{(n)}\left( {x,y} \right)\quad \;n = 1,2, \cdots, T\quad \left( {x,y} \right) \in \Omega $$
(12)
$$ u_1^{(n)}(x,y) = A\,{\Delta_s}{u^{(n)}}(x,y) + B\,{\Delta_t}{u^{(n)}}(x,y;\alpha, d) $$
(13)

where the superscript (n) is a time index, i.e., the number of iteration steps, T is the total number of iteration steps, u (n)(x, y) represents the pixel value after n iteration steps, u 1 (n)(x, y) is the update increment of each iteration step, and λ denotes the updating speed. The increment u 1 (n)(x, y) is composed of the structure term Δ s u (n)(x, y) and the texture term Δ t u (n)(x, y; α, d). The numerical calculations of these two terms are given in the following.
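A compact sketch of the recursion of Eqs. 12 and 13 might look as follows. Here structure_term and texture_term stand for the discrete operators derived in the following subsections (with parameters such as α and d already bound), and the stopping tolerance is an assumed value; the stopping test follows the increment-based criterion mentioned at the end of this section.

```python
import numpy as np

def inpaint(image, mask, structure_term, texture_term,
            A=0.5, B=0.5, lam=1.0, T=1000, tol=1e-3):
    """Top-level iteration of Eq. 12; 'mask' is True inside the damaged region Omega."""
    u = image.astype(float)
    ref = u.copy()
    for n in range(T):
        u1 = A * structure_term(u) + B * texture_term(u)   # Eq. 13
        u = u + lam * u1                                    # Eq. 12
        u[~mask] = ref[~mask]                               # Dirichlet condition on valid pixels
        if np.max(np.abs(u1[mask])) < tol:                  # stop when the update vanishes
            break
    return u
```

With the sketches given below for the two terms, a call could look like inpaint(img, mask, structure_term, lambda v: texture_term(v, alpha, d)).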

4.1 Structure term

For the anisotropic model in Section 3.2, the two orthogonal propagation directions p and q are rotated by an angle θ with respect to the horizontal and vertical directions. Because of the anisotropy, the propagation strengths along the two orthogonal directions are different. Denote the rotation matrix in Eq. 7 as R and write the weight c in diagonal matrix form as C:

$$ {\mathbf{R}} = \left[ {\begin{array}{*{20}{c}} {\cos \theta } & { - \sin \theta } \\{\sin \theta } & {\cos \theta } \\\end{array} } \right]\,,\quad \;{\mathbf{C}} = \left[ {\begin{array}{*{20}{c}} 1 & 0 \\0 & c \\\end{array} } \right] $$
(14)

For simplicity, we can express the structure term Δ s u(x, y) in the form of a 3 × 3 convolution mask:

$$ {\Delta_s}u(x,y) = \frac{\kappa }{{{h^2}}}\left[ {\begin{array}{*{20}{c}} { - {a_{12}}/2} & {{a_{22}}} & {{a_{12}}/2} \\{{a_{11}}} & { - 2\left( {{a_{11}} + {a_{22}}} \right)} & {{a_{11}}} \\{{a_{12}}/2} & {{a_{22}}} & { - {a_{12}}/2} \\\end{array} } \right] * \;\left[ {\begin{array}{*{20}{c}} {u\left( {x - h,y - h} \right)} & {u\left( {x - h,y} \right)} & {u\left( {x - h,y + h} \right)} \\{u\left( {x,y - h} \right)} & {u\left( {x,y} \right)} & {u\left( {x,y + h} \right)} \\{u\left( {x + h,y - h} \right)} & {u\left( {x + h,y} \right)} & {u\left( {x + h,y + h} \right)} \\\end{array} } \right] $$
(15)

where a 11, a 12 and a 22 are the three distinct elements of the symmetric matrix A = {a i,j } [16, 17]:

$$ {\mathbf{A}} = {{\mathbf{R}}^T}{{\mathbf{C}}^T}{\mathbf{CR}} = \left[ {\begin{array}{*{20}{c}} {{{\cos }^2}\theta + {c^2}{{\sin }^2}\theta } & {\left( {{c^2} - 1} \right)\cos \theta \sin \theta } \\{\left( {{c^2} - 1} \right)\cos \theta \sin \theta } & {{c^2}{{\cos }^2}\theta + {{\sin }^2}\theta } \\\end{array} } \right] $$
(16)

The step size h of the finite difference operation in Eq. 15 may be set to 2, corresponding to the one-point central difference; in that case the constant κ equals 1, as stated in Section 2. Alternatively, h may be set to 1, corresponding to the half-point central difference; in that case κ should be one quarter of the above value, i.e., κ = 0.25, because the PDE involves second-order derivatives of u with respect to the spatial coordinates. The latter discretization gives better precision but slower convergence.

We use the geometric information of the current pixel to adaptively choose between the one-point and half-point central differences when evaluating Eq. 15. The curvature of the isophote indicates the shape information of the image and the complexity of the edge structure. When the shape of the isophote is complicated and the curvature is large, the finer half-point difference is used, i.e., h = 1, so the inpainted image structure is finer. Otherwise, the one-point difference, i.e., h = 2, is used and the processing is faster. We therefore relate κ and h to the curvature ω of every pixel (x, y) in the damaged region:

$$ \omega = \nabla \cdot \left[ \frac{\nabla u}{\left| \nabla u \right|} \right] = \frac{u_{xx} u_y^2 - 2 u_x u_y u_{xy} + u_{yy} u_x^2}{\left( u_x^2 + u_y^2 \right)^{3/2}} $$
(17)
$$ h = \left\{ {\begin{array}{*{20}{c}} 1 \hfill & {\kappa = 0.25} \hfill & {{\hbox{if }}\left| { \omega \left( {x,\,y} \right) } \right| \geqslant W} \hfill \\2 \hfill & {\kappa = 1} \hfill & {{\hbox{if }}\left| { \omega \left( {x,\,y} \right) } \right| < W} \hfill \\\end{array} } \right. $$
(18)

Details of the numerical computation of the curvature in Eq. 17 can be found in [5]. W in Eq. 18 is a positive constant, which can be set to the median magnitude of the curvature over all pixels in the image.
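To make Eqs. 14-18 concrete, the following sketch shows one way the structure term could be evaluated with NumPy. The values of K and W, the circular shifts used in place of proper boundary handling, and the function names are assumptions for illustration, not part of the paper.

```python
import numpy as np

def structure_term(u, K=10.0, W=0.1, eps=1e-8):
    """Sketch of the anisotropic structure term of Eq. 15 with curvature-adaptive step h."""
    uf = u.astype(float)
    uy, ux = np.gradient(uf)                     # derivatives along y (rows) and x (columns)
    uxx = np.gradient(ux, axis=1)
    uxy = np.gradient(ux, axis=0)
    uyy = np.gradient(uy, axis=0)
    grad2 = ux**2 + uy**2 + eps
    # Curvature of the isophotes, Eq. 17
    omega = (uxx * uy**2 - 2.0 * ux * uy * uxy + uyy * ux**2) / grad2**1.5
    theta = np.arctan2(uy, ux)
    c2 = np.exp(-np.sqrt(grad2) / K)             # c^2 from Eq. 8
    a11 = np.cos(theta)**2 + c2 * np.sin(theta)**2            # Eq. 16
    a22 = c2 * np.cos(theta)**2 + np.sin(theta)**2
    a12 = (c2 - 1.0) * np.cos(theta) * np.sin(theta)

    def nb(dr, dc):
        # nb(dr, dc)[x, y] == u[x + dr, y + dc] (circular at the borders)
        return np.roll(np.roll(uf, -dr, axis=0), -dc, axis=1)

    def stencil(h, kappa):                        # the 3 x 3 mask of Eq. 15 with step h
        s = ((-a12 / 2) * nb(-h, -h) + a22 * nb(-h, 0) + (a12 / 2) * nb(-h, h)
             + a11 * nb(0, -h) - 2.0 * (a11 + a22) * uf + a11 * nb(0, h)
             + (a12 / 2) * nb(h, -h) + a22 * nb(h, 0) + (-a12 / 2) * nb(h, h))
        return kappa / h**2 * s

    # Eq. 18: half-point difference (h = 1) where curvature is large, one-point (h = 2) elsewhere
    return np.where(np.abs(omega) >= W, stencil(1, 0.25), stencil(2, 1.0))
```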

4.2 Texture term

The numerical implementation of the texture term Δ t u(x, y; α, d) is realized by the following Eqs. 19-22, where α denotes the angle of the texture direction and d is the scale of texture periodicity.

$$ {\Delta_t}u(x,y;\alpha, d) = 0.25 \times \left[ {\frac{{{\partial^2}u\left( {x,y;d} \right)}}{{\partial {\xi_\alpha }^2}} + \frac{{{\partial^2}u\left( {x,y;d} \right)}}{{\partial {\eta_\alpha }^2}}} \right] $$
(19)
$$ \frac{{{\partial^2}u(x,y;d)}}{{\partial {\xi_\alpha }^2}} = u(x + {d_y},y - {d_x}) + u(x - {d_y},y + {d_x}) - 2 \cdot u(x,y) $$
(20)
$$ \frac{{{\partial^2}u(x,y;d)}}{{\partial {\eta_\alpha }^2}} = u(x + {d_x},y + {d_y}) + u(x - {d_x},y - {d_y}) - 2 \cdot u(x,y) $$
(21)
$$ {d_x} = {\hbox{round}}(d\cos \alpha ) \,,\quad {d_y} = {\hbox{round}}(d\sin \alpha ) $$
(22)

Clearly, the larger the damaged region, the more iteration steps T in Eq. 12 are needed. T may be pre-determined on a trial-and-error basis, or the iteration may terminate when |u 1 (n)(x, y)| is less than a given small positive number.

5 Experimental results and discussion

Experiments were carried out on a group of structure and texture images with different resolutions. Damage in the test images includes random scratches, block impairment, scattered spots, and superimposed watermarks. For color images, inpainting is done on the R, G, and B channels separately, and the obtained channels are then combined to give the final results. The updating speed λ in Eq. 12 is set to 1 in the numerical implementation.
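As a small illustration of the per-channel processing described above, a color image could be handled as follows; inpaint_channel stands for the scalar procedure of Section 4 and is a placeholder name, not a function defined in the paper.

```python
import numpy as np

def inpaint_color(image_rgb, mask, inpaint_channel):
    # Run the scalar inpainting procedure on R, G and B independently, then recombine
    channels = [inpaint_channel(image_rgb[..., k], mask) for k in range(3)]
    return np.stack(channels, axis=-1)
```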

5.1 Comparison with typical structure inpainting

We first use the proposed method for structure image inpainting, i.e., B = 0 in Eq. 9, and compare it with some typical structure inpainting methods. Figure 3 compares the present method using the anisotropic heat transfer model with the isotropic method. We can see that the anisotropic model effectively avoids edge blurring.

Fig. 3 Comparison between the isotropic and anisotropic models. From left to right: the damaged image, the inpainting result obtained using the isotropic model (PSNR = 44.1 dB), where blurring at edges is visible, and the result obtained using the present method (PSNR = 50.2 dB) with improved visual quality

We then compare our method with several typical PDE-based methods, listed in Table 1. Some example images are given in Figs. 4 and 5. By subjective observation, the proposed anisotropic method gives satisfactory output. The PSNR values of the inpainting results are also given in Table 2. Compared with other PDE-based methods, our method produces better structure repairing quality and better performance in terms of PSNR.

Table 1 Comparison of structure inpainting methods
Fig. 4 Image inpainting results of the proposed method

Fig. 5 Comparison with other PDE-based methods. The leftmost column shows the input damaged images; the second through fourth columns show the results repaired with the BSCB method, total variation inpainting, and the CDD-based method respectively; the rightmost column shows the result of the present method

Table 2 Comparison of PSNR values between the proposed method and the methods in Table 1

We also compare the performance of the fixed finite difference implementation of the structure term with the implementation in which the step size h is chosen adaptively according to the curvature. The results are given in Table 3. The value of T in Table 3 is the minimum of all N values satisfying the following equation, i.e., the smallest number of iterations at which the PSNR reaches stability.

$$ \left| {\,PSNR\left( {{u^{(n)}}} \right) - PSNR\left( {{u^{(N)}}} \right)\,} \right| \leqslant \varepsilon, \quad \forall \,n > N $$
(23)

where u (n) is the inpainting result after n iteration steps and ε is a small positive constant. We can see from Table 3 that the half-point scheme (h = 1) gives the highest average PSNR values, but requires more iteration steps. The structure inpainting quality obtained by adaptively choosing h is very close to that of the fixed h = 1; the difference in PSNR is below 1 dB. It should be noted that the adaptive scheme not only reduces the number of iteration steps, saving processing time, but also gives higher PSNR than the one-point finite difference, i.e., h = 2.
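For reference, the stability criterion of Eq. 23 could be evaluated on a recorded trace of PSNR values as in the sketch below; the value of ε and the helper names are assumptions.

```python
import numpy as np

def psnr(u, ref, peak=255.0):
    mse = np.mean((np.asarray(u, float) - np.asarray(ref, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def iterations_to_stability(psnr_trace, eps=0.05):
    # Smallest N such that |PSNR(u^(n)) - PSNR(u^(N))| <= eps for every n > N (Eq. 23)
    for N, p in enumerate(psnr_trace):
        if all(abs(q - p) <= eps for q in psnr_trace[N + 1:]):
            return N
    return len(psnr_trace) - 1
```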

Table 3 Comparison of adaptive and fixed finite difference

5.2 Results of simultaneous structure and texture inpainting

The present method introduces the texture term into the anisotropic inpainting model, so it can simultaneously propagate structure and texture information. We carried out experiments on many images containing texture components. An example is the test image Barbara, with rich texture, shown in Fig. 6. Suppose the region to be inpainted is located in a texture-rich area marked with white blocks in the figure. The parameters used in the experiment, measured from the test image, are: weight of texture information B = 0.5, texture direction α = 4π/9, and texture periodicity d = 4.

Fig. 6 Simultaneous structure and texture inpainting results. The images in the first row are the original, the damaged image, and the result repaired using the proposed method with PSNR = 54.0 dB. The images in the second row are the inpainted results of the BSCB method with PSNR = 42.5 dB, the total variation method with PSNR = 46.3 dB, the CDD method with PSNR = 37.6 dB, and the layered method in [4] with PSNR = 49.1 dB respectively

The inpainted regions in Fig. 6 are magnified to make the repaired details clear. The images in the first row of Fig. 6 are the original, the damaged image, and the result repaired using the proposed method with PSNR = 54.0 dB. The first three images in the second row of Fig. 6 are the results of some typical PDE-based methods: the BSCB method with PSNR = 42.5 dB, the TV method with PSNR = 46.3 dB, and the CDD method with PSNR = 37.6 dB respectively. We can clearly see that our PDE-based method repairs the structure and texture information successfully, while the other methods listed in Table 1 cannot inpaint the texture. The last image in the second row of Fig. 6 is the result of the layered method in [4] with PSNR = 49.1 dB, which restores the texture information through a separate synthesis process. The method proposed in the present work produces better visual appearance.

In addition, since our method is based on the heat transfer model, it is basically a second-order PDE approach and is therefore mathematically more concise than the other, higher-order methods [2, 3, 5, 6]. The proposed method also has lower computational complexity: experiments show that fewer iteration steps are required to reach stability in the numerical implementation, as illustrated in Fig. 7.

Fig. 7 Performance comparisons between the proposed method and the reported methods in Table 1

The proposed method is also more efficient than the layered method in [4]. We executed our Matlab code on a computer with a 2.94 GHz processor and 4 GB of memory under Windows Vista; inpainting the image in Fig. 6 took less than 230 s with our method, while the method of [4] took more than 364 s because of its image decomposition process and exhaustive searching for texture synthesis.

6 Conclusions

We draw an analogy between image inpainting and a heat transfer process and establish an anisotropic PDE-based inpainting model for structure inpainting. Most reported PDE-based methods, however, cannot reproduce texture information in the inpainted region. To solve this problem, we introduce a texture term into the anisotropic model, which allows structure and texture information to be propagated simultaneously. In structure inpainting, the two propagation components are the isophote direction and its orthogonal direction; the propagation intensity along the isophote direction is fixed, while the intensity along the orthogonal direction decreases with the magnitude of the gradient. In texture inpainting, the added texture term reflects the periodicity along the texture direction and its perpendicular direction, which successfully propagates regular texture information.

Compared with other, higher-order PDE-based methods, our method is mathematically more concise. In the numerical implementation of the structure term, we adaptively choose the step size of the finite difference according to the curvature: where the curvature is large, the refined half-point finite difference is used, while the one-point finite difference suffices elsewhere. In this way, the proposed method produces satisfactory image quality while reducing computational complexity.

At the current stage, parameters such as the texture direction, periodicity, and weights need to be pre-determined. Further investigation is needed to find ways to determine these parameters automatically or semi-automatically. One limitation of the present method is that the texture must have a dominant direction; a future improvement is therefore to make the model more general so that irregular textures can be treated effectively.

We also assume, as in other works reported thus far, that the location of the damaged region is known. In real applications, however, detecting and locating the damaged region is important and deserves in-depth investigation.